Science.gov

Sample records for facility distributed computer

  1. DNET: A communications facility for distributed heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.

    1989-01-01

    This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides variable-length datagram service with optional return receipts.
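
    The datagram-with-return-receipt service described above can be suggested in miniature. The following Python fragment is a conceptual sketch over UDP, not DNET's actual interface; the address, port, and one-byte framing are invented for the example.

      import socket

      RECEIPT_FLAG = b"R"          # sender requests a return receipt (invented framing)
      ADDR = ("127.0.0.1", 9999)   # hypothetical receiver address

      def send_datagram(payload: bytes, want_receipt: bool = False, timeout: float = 2.0) -> bool:
          """Send one variable-length datagram; optionally wait for a return receipt."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          flag = RECEIPT_FLAG if want_receipt else b"-"
          sock.sendto(flag + payload, ADDR)
          if not want_receipt:
              return True
          sock.settimeout(timeout)
          try:
              ack, _ = sock.recvfrom(16)   # the receiver echoes b"ACK" as the receipt
              return ack == b"ACK"
          except socket.timeout:
              return False                 # no receipt arrived; the caller may retry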

  2. The Overview of the National Ignition Facility Distributed Computer Control System

    SciTech Connect

    Lagin, L J; Bettenhausen, R C; Carey, R A; Estes, C M; Fisher, J M; Krammen, J E; Reed, R K; VanArsdall, P J; Woodruff, J P

    2001-10-15

    The Integrated Computer Control System (ICCS) for the National Ignition Facility (NIF) is a layered architecture of 300 front-end processors (FEP) coordinated by supervisor subsystems including automatic beam alignment and wavefront control, laser and target diagnostics, pulse power, and shot control timed to 30 ps. FEP computers incorporate either VxWorks on PowerPC or Solaris on UltraSPARC processors that interface to over 45,000 control points attached to VME-bus or PCI-bus crates, respectively. Typical devices are stepping motors, transient digitizers, calorimeters, and photodiodes. The front-end layer also includes a segment of an additional 14,000 control points for industrial controls including vacuum, argon, synthetic air, and safety interlocks, implemented with Allen-Bradley programmable logic controllers (PLCs). The computer network is augmented by asynchronous transfer mode (ATM) links that deliver video streams from 500 sensor cameras monitoring the 192 laser beams to operator workstations. Software is based on an object-oriented framework using CORBA distribution that incorporates services for archiving, machine configuration, graphical user interface, monitoring, event logging, scripting, alert management, and access control. Software coding, using a mixed-language environment of Ada95 and Java, is one-third complete at over 300 thousand source lines. Control system installation is currently under way for the first 8 beams, with project completion scheduled for 2008.

  3. Computer security in DOE distributed computing systems

    SciTech Connect

    Hunteman, W.J.

    1990-01-01

    The modernization of DOE facilities amid limited funding is creating pressure on them to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions to improved productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.

  4. Distributed computer control system in the Nova Laser Fusion Test Facility

    SciTech Connect

    Not Available

    1985-09-01

    The EE Technical Review has two purposes - to inform readers of various activities within the Electronics Engineering Department and to promote the exchange of ideas. The articles, by design, are brief summaries of EE work. The articles included in this report are as follows: Overview - Nova Control System; Centralized Computer-Based Controls for the Nova Laser Facility; Nova Pulse-Power Control System; Nova Laser Alignment Control System; Nova Beam Diagnostic System; Nova Target-Diagnostics Control System; and Nova Shot Scheduler. The 7 papers are individually abstracted.

  5. Distributed computing

    SciTech Connect

    Chambers, F.B.; Duce, D.A.; Jones, G.P.

    1984-01-01

    CONTENTS: The Dataflow Approach: Fundamentals of dataflow. Architecture and performance. Assembler level programming. High level dataflow programming. Declarative systems: Functional programming. Logic programming and prolog. The "language first" approach. Towards a successor to von Neumann. Loosely-coupled systems: Architectures. Communications. Distributed filestores. Mechanisms for distributed control. Distributed operating systems. Programming languages. Closely-coupled systems: Architecture. Programming languages. Run-time support. Development aids. Cyba-M. Polyproc. Modeling and verification: Using algebra for concurrency. Reasoning about concurrent systems. Each chapter includes references. Index.

  6. Redirecting Under-Utilised Computer Laboratories into Cluster Computing Facilities

    ERIC Educational Resources Information Center

    Atkinson, John S.; Spenneman, Dirk H. R.; Cornforth, David

    2005-01-01

    Purpose: To provide administrators at an Australian university with data on the feasibility of redirecting under-utilised computer laboratory facilities into a distributed high performance computing facility. Design/methodology/approach: The individual log-in records for each computer located in the computer laboratories at the university were…

  7. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster which consists of two VAX 3300s configured as a dual-host system serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8-mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  8. AMRITA -- A computational facility

    SciTech Connect

    Shepherd, J.E.; Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  9. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must, to some extent, become a system manager, worrying about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  10. Distributed instruction set computer

    SciTech Connect

    Wang, L.

    1989-01-01

    The Distributed Instruction Set Computer, or DISC for short, is an experimental computer system for fine-grained parallel processing. DISC employs a new parallel instruction set, an Early Binding and Scheduling data tagging scheme, and a distributed control mechanism to explore a software dataflow control method in a multiple-functional unit system. With zero system control overhead, multiple instructions are executed in parallel and/or out of order at the highest speed of n instructions/cycle, where n is the number of functional units. The quantitative simulation result indicates that a DISC system with 16 functional units can deliver a maximum 7.7X performance speedup over a single functional-unit system at the same clock speed. Exploring a new parallel instruction set and distributed control mechanism, DISC represents three major breakthroughs in the domain of fine-grained parallel processing: (1) Fast multiple instruction issuing mechanism; (2) Parallel and/or out-of-order execution; (3) Software dataflow control scheme.
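
    For context, with n = 16 functional units the peak issue rate is 16 instructions/cycle, so the reported 7.7X maximum speedup corresponds to a parallel efficiency of roughly 48%. In LaTeX notation (a figure derived here from the abstract's numbers, not stated in it):

      E = \frac{S}{n} = \frac{7.7}{16} \approx 0.48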

  11. 2015 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  12. 2014 Annual Report - Argonne Leadership Computing Facility

    SciTech Connect

    Collins, James R.; Papka, Michael E.; Cerny, Beth A.; Coffey, Richard M.

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  13. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  14. Distributed Real-Time Computing with Harness

    SciTech Connect

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results on using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins that provide a prioritized lightweight execution environment, low-latency communication facilities, and local timestamped event logging.
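
    Of the three plug-ins, the prioritized lightweight execution environment is the easiest to suggest in miniature: worker threads draining a priority queue, with timestamped event logging of the kind the third plug-in provides. The Python sketch below is a generic illustration, not Harness code.

      import heapq
      import threading
      import time

      events = []   # timestamped event log (the logging plug-in's role)
      queue = []    # (priority, sequence, task) min-heap; 0 = most urgent
      lock = threading.Condition()
      seq = 0

      def submit(priority, task):
          """Queue a task; lower numbers run first."""
          global seq
          with lock:
              heapq.heappush(queue, (priority, seq, task))
              seq += 1
              lock.notify()

      def worker():
          while True:
              with lock:
                  while not queue:
                      lock.wait()
                  priority, _, task = heapq.heappop(queue)
              events.append((time.monotonic(), priority))   # timestamped record
              task()

      submit(5, lambda: print("background task"))
      submit(0, lambda: print("safety-critical task runs first"))
      threading.Thread(target=worker, daemon=True).start()
      time.sleep(0.1)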

  15. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

  16. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions for, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  17. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing volume of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU is a compelling alternative with outstanding parallel processing capability, cost-effectiveness, and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer supporting CPU-based and GPU-based computing in a distributed environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE
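
    As a minimal illustration of the CPU-plus-GPU pattern such a framework rests on, the Python sketch below offloads an array computation to the GPU when CuPy (a NumPy-compatible GPU library) is available and falls back to the CPU otherwise. It is a generic sketch, not the authors' framework.

      import numpy as np

      try:
          import cupy as cp      # GPU path, assuming a CUDA device and CuPy are present
          xp = cp
      except ImportError:
          xp = np                # CPU fallback

      def grid_statistics(values):
          """Compute simple per-grid statistics on whichever device is available."""
          a = xp.asarray(values)
          mean, std = a.mean(), a.std()
          if xp is not np:
              # Move results back to host memory if they were computed on the GPU
              mean, std = cp.asnumpy(mean), cp.asnumpy(std)
          return float(mean), float(std)

      print(grid_statistics(np.random.rand(1_000_000)))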

  18. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper describes the experience of coping with limited grid expertise and manpower within the BESIII community.
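
    For context, submitting work through DIRAC's Python API, the middleware the BESIII system is built on, follows the pattern sketched below; the job name, payload script, and CPU-time value are placeholders, and site-specific configuration is omitted.

      from DIRAC.Core.Base.Script import Script
      Script.parseCommandLine()                  # initialize the DIRAC configuration

      from DIRAC.Interfaces.API.Dirac import Dirac
      from DIRAC.Interfaces.API.Job import Job

      job = Job()
      job.setName("bes_mc_production")           # placeholder job name
      job.setExecutable("run_simulation.sh")     # placeholder payload script
      job.setCPUTime(86400)                      # requested CPU time in seconds

      result = Dirac().submitJob(job)
      print(result["Value"] if result["OK"] else result["Message"])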

  19. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  20. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must, to some extent, become a system manager, worrying about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  21. Computer Uses in School Facility Management.

    ERIC Educational Resources Information Center

    Vornberg, James A.

    Educational institutions and larger public school districts have implemented computerized systems of planning and management functions. The application of computers to facility management roughly may be divided into two general areas: (1) planning efforts of administrators and designers through methods of simulation, and (2) systems management…

  22. National Directory of Rehabilitation Facilities Using Computers.

    ERIC Educational Resources Information Center

    McCray, Paul M.; Blakemore, Thomas F.

    This directory represents the culmination of a national research project designed to assess the extent to which computer technology is being integrated into rehabilitation facility operations. The directory is divided into six major sections. The first section is a research summary that provides a concise description of how the information…

  23. Hydronic distribution system computer model

    SciTech Connect

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley National Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  24. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
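
    The storage/communication trade-off quantified above can be shown on a toy example: extend each part of a vertex partition by one ring of neighbors, then compare the volume ratio (total vertices stored divided by vertices in the graph) with the number of cut edges that remain. The graph and partition below are invented for illustration.

      # Toy graph (adjacency list) and a two-way vertex partition
      graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
      parts = [{0, 1, 2}, {3, 4, 5}]

      def add_overlap(parts, graph):
          """Extend each part by one ring of neighbors (the overlap)."""
          return [p | {nbr for v in p for nbr in graph[v]} for p in parts]

      overlapped = add_overlap(parts, graph)

      # Volume ratio: total vertices stored across parts / vertices in the graph
      volume_ratio = sum(len(p) for p in overlapped) / len(graph)

      # Communication volume: edges whose endpoints are not co-resident in any part
      cut = sum(1 for v in graph for w in graph[v] if v < w
                and not any(v in p and w in p for p in overlapped))

      print(volume_ratio, cut)   # here: 1.5 storage ratio, 0 cut edges

    On this toy graph one ring of overlap removes all cross-partition edges at the cost of storing the graph 1.5 times, which is exactly the kind of trade the abstract's volume-ratio results measure.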

  25. Oak Ridge Leadership Computing Facility Position Paper

    SciTech Connect

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G; Podhorszki, Norbert; Klasky, Scott A; Rogers, James H; Shipman, Galen M

    2011-01-01

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  26. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that could be used by applications to allow them to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT and Cumulvs. As such, the system was designed to avoid the common problems found in these existing systems: it has no single point of failure and can survive machine, node, and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run-time, thus reducing the pressure on application developers to build in all the libraries they need in advance.

  27. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  28. Computer modeling of commercial refrigerated warehouse facilities

    SciTech Connect

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-07-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and Refrigeration system performance models in these simulation tools model equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.

  29. Particle Size Distribution in Aluminum Manufacturing Facilities

    PubMed Central

    Liu, Sa; Noth, Elizabeth M.; Dixon-Ernst, Christine; Eisen, Ellen A.; Cullen, Mark R.; Hammond, S. Katharine

    2015-01-01

    As part of exposure assessment for an ongoing epidemiologic study of heart disease and fine particle exposures in the aluminum industry, area particle samples were collected in production facilities to assess instrument reliability and particle size distribution at different process areas. Personal modular impactors (PMI) and mini micro-orifice uniform deposit impactors (MiniMOUDI) were used. The coefficient of variation (CV) of co-located samples was used to evaluate the reproducibility of the samplers. PM2.5 measured by PMI was compared to PM2.5 calculated from MiniMOUDI data. Mass median aerodynamic diameter (MMAD) and concentrations of sub-micrometer (PM1.0) and quasi-ultrafine (PM0.56) particles were evaluated to characterize particle size distribution. Most of the CVs were less than 30%. The slope of the linear regression of PMI_PM2.5 versus MiniMOUDI_PM2.5 was 1.03 mg/m3 per mg/m3 (± 0.05), with a correlation coefficient of 0.97 (± 0.01). Particle size distribution varied substantively in smelters, whereas it was less variable in fabrication units, with significantly smaller MMADs (arithmetic mean of MMADs: 2.59 μm in smelters vs. 1.31 μm in fabrication units, p = 0.001). Although the total particle concentration was more than two times higher in the smelters than in the fabrication units, the fraction of PM10 which was PM1.0 or PM0.56 was significantly lower in the smelters than in the fabrication units (p < 0.001). Consequently, the concentrations of sub-micrometer and quasi-ultrafine particles were similar in these two types of facilities. It would appear that studies evaluating ultrafine particle exposure in the aluminum industry should focus not only on the smelters, but also on the fabrication facilities. PMID:26478760
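
    The two reliability statistics used here are straightforward to compute. A short NumPy sketch with invented example concentrations; note the zero-intercept regression is one plausible reading of the comparison, and the study's exact regression model may differ.

      import numpy as np

      # Hypothetical co-located sample pairs (mg/m3); values are invented
      pmi   = np.array([0.52, 1.10, 0.33, 2.40, 0.75])
      moudi = np.array([0.50, 1.15, 0.30, 2.30, 0.80])

      # Coefficient of variation for each co-located pair: std / mean of the pair
      pairs = np.stack([pmi, moudi])
      cv = pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)
      print("CV per pair (%):", np.round(100 * cv, 1))

      # Slope of the zero-intercept regression of PMI PM2.5 on MiniMOUDI PM2.5
      slope = (pmi * moudi).sum() / (moudi ** 2).sum()
      print("regression slope:", round(slope, 2))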

  30. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  31. National Ignition Facility integrated computer control system

    NASA Astrophysics Data System (ADS)

    Van Arsdall, Paul J.; Bettenhausen, R. C.; Holloway, Frederick W.; Saroyan, R. A.; Woodruff, J. P.

    1999-07-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  32. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.
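
    The pattern described in these two records, application software derived by extending reusable framework components, can be suggested in outline. The Python sketch below is a conceptual analogue of framework extension only: it does not use CORBA, and none of the class or method names come from ICCS.

      import time

      class FrameworkComponent:
          """Reusable base: status monitoring and event logging come for free."""
          def log_event(self, msg):
              print(f"{time.strftime('%H:%M:%S')} [{type(self).__name__}] {msg}")

          def status(self):
              return {"component": type(self).__name__, "healthy": True}

      class AlignmentSupervisor(FrameworkComponent):
          """Application-specific supervisor built by extending the framework."""
          def align_beam(self, beam_id, offset_urad):
              self.log_event(f"aligning beam {beam_id} by {offset_urad} microradians")
              # ... command the front-end processors via the software bus ...

      AlignmentSupervisor().align_beam(17, 4.2)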

  33. Computational analysis of irradiation facilities at the JSI TRIGA reactor.

    PubMed

    Snoj, Luka; Zerovnik, Gašper; Trkov, Andrej

    2012-03-01

    Characterization and optimization of irradiation facilities in a research reactor are important for optimal performance. Nowadays this is commonly done with advanced Monte Carlo neutron transport computer codes such as MCNP. However, the computational model in such calculations should be verified and validated with experiments. In the paper we describe the irradiation facilities at the JSI TRIGA reactor and demonstrate their computational characterization to support experimental campaigns by providing information on the characteristics of the irradiation facilities. PMID:22154389

  34. Distributed Computing at Belle II

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Belle II Collaboration

    2016-03-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a RUN I high-pT LHC experiment. Computing will make full use of high speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.

  35. Design Criteria for OSE-User Computer Facility-Upgrade

    SciTech Connect

    Beaver, C E

    1989-05-01

    This project provides for the upgrading of the 4th floor OSE User Computer Facility to house new computers for the Paperless Manufacturing initiative and to support a classified processing environment. This is intended to enhance Mound's manufacturing environment while addressing several DOE strategic initiatives such as Computer Integrated Manufacturing (CIM). By consolidating the Paperless Manufacturing approach into the existing OSE User Computer Facility and meeting UCI needs to house classified processing, a considerable reduction in operating cost should be achieved.

  36. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.
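
    Long-running Monte Carlo calculations distribute well because particle histories are independent: each workstation tallies its own batch, and only the small tally results are combined. A generic Python sketch of that pattern follows (toy physics, invented probabilities; not the ANL codes themselves).

      import random
      from multiprocessing import Pool

      def batch_tally(args):
          """Track a batch of particle histories; return (absorbed, escaped) counts."""
          n_histories, seed = args
          rng = random.Random(seed)      # independent stream per worker
          absorbed = 0
          for _ in range(n_histories):
              # Toy slab model: absorb with probability 0.3 per collision,
              # escape after surviving 5 collisions (illustrative physics only)
              for _ in range(5):
                  if rng.random() < 0.3:
                      absorbed += 1
                      break
          return absorbed, n_histories - absorbed

      if __name__ == "__main__":
          jobs = [(100_000, seed) for seed in range(8)]   # one batch per worker
          with Pool(8) as pool:
              results = pool.map(batch_tally, jobs)
          absorbed = sum(a for a, _ in results)
          total = sum(a + e for a, e in results)
          print("absorption fraction:", absorbed / total)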

  37. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  38. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539

  39. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
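
    Hadoop Streaming, one natural way to prototype the kind of NGS read tally benchmarked in this review, lets mappers and reducers be plain scripts that read stdin and write tab-separated key/value pairs. A minimal mapper sketch, assuming SAM-format alignment lines as input (field positions follow the SAM specification; file names are placeholders):

      #!/usr/bin/env python
      # mapper.py: emit (reference_name, 1) for every aligned SAM record on stdin
      import sys

      for line in sys.stdin:
          if line.startswith("@"):           # skip SAM header lines
              continue
          fields = line.rstrip("\n").split("\t")
          rname = fields[2]                  # reference sequence name; "*" = unaligned
          if rname != "*":
              print(f"{rname}\t1")

      # A reducer would sum the counts per reference name. With Hadoop Streaming:
      #   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
      #     -input reads.sam -output counts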

  40. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  41. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  42. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.
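
    The core idea, proactively restarting replicas one at a time before aging-related faults accumulate while the remaining replicas keep serving, can be sketched in a few lines of Python. This is a generic illustration of rejuvenation, not the framework described in the record; the period and replica count are invented.

      import itertools
      import time
      from multiprocessing import Process

      def serve(replica_id):
          """Stand-in for a stateful server replica (state re-synced from peers)."""
          while True:
              time.sleep(1)   # ... handle requests ...

      REJUVENATION_PERIOD = 3600   # seconds between proactive restarts (illustrative)
      replicas = {i: Process(target=serve, args=(i,)) for i in range(3)}
      for p in replicas.values():
          p.start()

      # Rejuvenate one replica at a time so the service never loses quorum
      for rid in itertools.cycle(replicas):
          time.sleep(REJUVENATION_PERIOD)
          replicas[rid].terminate()
          replicas[rid].join()
          replicas[rid] = Process(target=serve, args=(rid,))
          replicas[rid].start()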

  43. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid Computing and Cloud Computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of Grid Computing is strongly limited by two main factors: it is confined to scientists with a strong Computer Science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the Bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific Computer Science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits an efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and, therefore, permit efficient exploitation of each machine in the network. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326

  44. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  45. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    PubMed Central

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-01-01

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities. PMID:27548197

  46. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis.

    PubMed

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-01-01

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities. PMID:27548197
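
    The network-constrained methods used in this study replace Euclidean distance with shortest-path distance along the road graph. A compact sketch of a network kernel density estimate using NetworkX follows; the toy road network, facility weights, and Gaussian kernel bandwidth are all invented for illustration.

      import math
      import networkx as nx

      # Toy road network: edges weighted by street length (meters)
      roads = nx.Graph()
      roads.add_weighted_edges_from([("a", "b", 200), ("b", "c", 150),
                                     ("b", "d", 300), ("d", "e", 250)])
      hospitals = {"a": 1.0, "d": 2.0}   # node -> weight (e.g., hospital size)
      BANDWIDTH = 400.0                  # kernel bandwidth in meters (illustrative)

      def network_kde(node):
          """Sum Gaussian kernels of network distances to the weighted facilities."""
          dists = nx.single_source_dijkstra_path_length(roads, node, weight="weight")
          return sum(w * math.exp(-(dists[h] / BANDWIDTH) ** 2 / 2)
                     for h, w in hospitals.items() if h in dists)

      for n in roads.nodes:
          print(n, round(network_kde(n), 3))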

  47. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  48. National remote computational flight research facility

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.

    1989-01-01

    The extension of the NASA Ames-Dryden remotely augmented vehicle (RAV) facility to accommodate flight testing of a hypersonic aircraft utilizing the continental United States as a test range is investigated. The development and demonstration of an automated flight test management system (ATMS) that uses expert system technology for flight test planning, scheduling, and execution is documented.

  49. Solving the Quadratic Capacitated Facilities Location Problem by Computer.

    ERIC Educational Resources Information Center

    Cote, Leon C.; Smith, Wayland P.

    Several computer programs were developed to solve various versions of the quadratic capacitated facilities location problem. Matrices, which represent various business costs, are defined for the factors of sites, facilities, customers, commodities, and production units. The objective of the program is to find an optimization matrix for the lowest…

  11. Distribution of Corbicula fluminea at nuclear facilities

    SciTech Connect

    Counts, C.L. III

    1985-11-01

    A review of the zoogeographic records for the exotic Asian clam, Corbicula fluminea (Muller, 1774), reveals its presence in 27 states where nuclear powered electric generating plants are either operating or under construction. Nineteen plant sites reported infestation of varying severity in facilities, or source water bodies immediately adjacent to the facility, by C. fluminea. Thirteen plant sites are located within the zoogeographic limits of C. fluminea but have a low risk of infestation due to either salt water cooling systems or locations a great distance from known populations. Eighteen plant sites are located wholly outside of the known zoogeographic range of C. fluminea. Thirty plant sites are located in close proximity to known populations of C. fluminea and therefore should maintain surveillance of the source water body and within plant water systems for possible infestations by these bivalves. 27 figs.

  12. Review of Test Facilities for Distributed Energy Resources

    SciTech Connect

    AKHIL,ABBAS ALI; MARNAY,CHRIS; KIPMAN,TIMOTHY

    2003-05-01

    Since initiating research on integration of distributed energy resources (DER) in 1999, the Consortium for Electric Reliability Technology Solutions (CERTS) has been actively assessing and reviewing existing DER test facilities for possible demonstrations of advanced DER system integration concepts. This report is a compendium of information collected by the CERTS team on DER test facilities during this period.

  13. Configuration and Management of a Cluster Computing Facility in Undergraduate Student Computer Laboratories

    ERIC Educational Resources Information Center

    Cornforth, David; Atkinson, John; Spennemann, Dirk H. R.

    2006-01-01

    Purpose: Many researchers require access to computer facilities beyond those offered by desktop workstations. Traditionally, these are offered either through partnerships, to share the cost of supercomputing facilities, or through purpose-built cluster facilities. However, funds are not always available to satisfy either of these options, and…

  14. Biomedical computing facility interface design plan

    NASA Technical Reports Server (NTRS)

    Puckett, R. D.

    1971-01-01

    The results are presented of a design study performed to establish overall system interface requirements for the Biomedical Laboratories Division's Sigma-3 computer system. Emphasis has been placed upon the definition of an overall implementation plan and associated schedule to meet both near-term and long-range requirements within the constraints of available resources.

  15. High-performance computing and distributed systems

    SciTech Connect

    Loken, S.C.; Greiman, W.; Jacobson, V.L.; Johnston, W.E.; Robertson, D.W.; Tierney, B.L.

    1992-09-01

    We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also present two pilot projects that illustrate some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network-based components, and to make those systems available independent of the geographic location of the constituent elements.

  17. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  18. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. The exchange of information between the different levels of an integrated enterprise-process pyramid is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, owing to the need for different network protocols, communication media, system response times, etc.

  19. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  20. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments with a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring local systems are consistent with central computer systems. (Author/MLW)

  1. ATLAS Distributed Computing in LHC Run2

    NASA Astrophysics Data System (ADS)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented.

  2. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
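
    A minimal sketch of the scheduling idea (ours, not the actual Cloud Scheduler code): in response to queued batch jobs, boot user-customized virtual machines on whichever cloud still has capacity. Cloud names, capacities, and image names below are hypothetical.

      class Cloud:
          def __init__(self, name, capacity):
              self.name, self.capacity, self.running = name, capacity, 0

          def has_room(self):
              return self.running < self.capacity

          def boot_vm(self, image):
              # In the real system this would call the site's IaaS API.
              self.running += 1
              print(f"booting {image} on {self.name}")

      def schedule(pending_jobs, clouds):
          # One scheduler pass: match each waiting job to a cloud with room.
          for job in list(pending_jobs):
              target = next((c for c in clouds if c.has_room()), None)
              if target is None:
                  break  # all clouds full; the job waits for the next cycle
              target.boot_vm(job["vm_image"])
              pending_jobs.remove(job)

      clouds = [Cloud("site-a", 2), Cloud("site-b", 1)]
      queue = [{"vm_image": "astro-vm"}, {"vm_image": "sim-vm"}, {"vm_image": "astro-vm"}]
      schedule(queue, clouds)
      print(f"{len(queue)} job(s) still waiting")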

  3. Survey of computer codes applicable to waste facility performance evaluations

    SciTech Connect

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful for developing an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs.

  4. Distributed Storage Systems for Data Intensive Computing

    SciTech Connect

    Vazhkudai, Sudharshan S; Butt, Ali R; Ma, Xiaosong

    2012-01-01

    In this chapter, the authors present an overview of the utility of distributed storage systems in supporting modern applications that are increasingly becoming data intensive. Their coverage of distributed storage systems is based on the requirements imposed by data intensive computing and not a mere summary of storage systems. To this end, they delve into several aspects of supporting data-intensive analysis, such as data staging, offloading, checkpointing, and end-user access to terabytes of data, and illustrate the use of novel techniques and methodologies for realizing distributed storage systems therein. The data deluge from scientific experiments, observations, and simulations is affecting all of the aforementioned day-to-day operations in data-intensive computing. Modern distributed storage systems employ techniques that can help improve application performance, alleviate I/O bandwidth bottleneck, mask failures, and improve data availability. They present key guiding principles involved in the construction of such storage systems, associated tradeoffs, design, and architecture, all with an eye toward addressing challenges of data-intensive scientific applications. They highlight the concepts involved using several case studies of state-of-the-art storage systems that are currently available in the data-intensive computing landscape.

  5. Distributed computation of supremal conditionally controllable sublanguages

    NASA Astrophysics Data System (ADS)

    Komenda, Jan; Masopust, Tomáš

    2016-02-01

    In this paper, we further develop the coordination control framework for discrete-event systems with both complete and partial observations. First, a weaker sufficient condition for the computation of the supremal conditionally controllable sublanguage and conditionally normal sublanguage is presented. Then we show that this condition can be imposed by synthesising a-posteriori supervisors. The paper further generalises the previous study by considering general, non-prefix-closed languages. Moreover, we prove that for prefix-closed languages the supremal conditionally controllable sublanguage and conditionally normal sublanguage can always be computed in a distributed way without any of the restrictive conditions used in the past.

  6. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling. PMID:12804277
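
    The first extension level, parallelization made explicit in user code, can be illustrated outside Ox. The sketch below is our Python analogue (not Ox code): random seeds are tied to tasks rather than to workers, so a distributed Monte Carlo experiment stays deterministic, which is the reproducibility requirement the authors emphasize.

      from multiprocessing import Pool

      import numpy as np

      def simulate(task):
          task_id, n = task
          rng = np.random.default_rng(seed=1000 + task_id)  # seed fixed per task
          draws = rng.normal(0.0, 0.01, size=n)             # toy asset-return draws
          return task_id, draws.std()

      if __name__ == "__main__":
          tasks = [(i, 100_000) for i in range(8)]
          with Pool(4) as pool:
              results = dict(pool.map(simulate, tasks))
          # Output is identical for any number of workers, because randomness is
          # attached to tasks, not to the processes that happen to run them.
          print([round(results[i], 6) for i in range(8)])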

  7. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  8. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature open-source technologies such as ZeroMQ, Logstash, and Supercollider (a synth engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations, and may provide a less intrusive way to understand the operational health of these systems.
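
    The attribute mapping can be pictured with a small stand-in (ours, not the Subtlenoise code): continuous metrics modulate pitch gently, while discrete events produce short, quiet pings, keeping the stream subtle. A real deployment would forward these parameters to a synth engine; the fields and scales here are hypothetical.

      def message_to_audio(msg):
          if msg["kind"] == "continuous":               # e.g., job throughput metric
              level = min(msg["value"] / msg["scale"], 1.0)
              return {"pitch_hz": 220 + 220 * level, "amp": 0.05, "dur_s": 1.0}
          return {"pitch_hz": 880, "amp": 0.10, "dur_s": 0.1}  # discrete event ping

      stream = [
          {"kind": "continuous", "value": 350, "scale": 1000},
          {"kind": "discrete", "event": "job_failed"},
      ]
      for msg in stream:
          print(message_to_audio(msg))  # parameters a synth engine would render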

  9. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low-cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherently decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and the distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching job specifications with worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that demonstrate the efficiency improvements that can be obtained with the presented architecture.
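
    The flavor of the work assigned to each worker can be shown with a toy fragment (our own simplification, in Python rather than the paper's Java): every worker counts candidate 2-itemsets over its locally cached partition of the transactions, and a coordinator merges the partial counts. Partition contents and the support threshold are invented.

      from collections import Counter
      from itertools import combinations

      def count_pairs(transactions):
          # Local work of one worker: count 2-itemsets in its partition.
          counts = Counter()
          for t in transactions:
              counts.update(combinations(sorted(t), 2))
          return counts

      partitions = [
          [{"a", "b", "c"}, {"a", "b"}],                   # cached near worker 1
          [{"b", "c"}, {"a", "b", "c"}, {"a", "c"}],       # cached near worker 2
      ]
      total = Counter()
      for partial in map(count_pairs, partitions):          # each call = one worker
          total += partial
      print([pair for pair, n in total.items() if n >= 3])  # frequent pairs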

  10. Computer-Assisted School Facility Planning with ONPASS.

    ERIC Educational Resources Information Center

    Urban Decision Systems, Inc., Los Angeles, CA.

    The analytical capabilities of ONPASS, an on-line computer-aided school facility planning system, are described by its developers. This report describes how, using the Canoga Park-Winnetka-Woodland Hills Planning Area as a test case, the Department of City Planning of the city of Los Angeles employed ONPASS to demonstrate how an on-line system can…

  11. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users in terms of total computing time. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for bulk production. The new system can also use available cloud resources. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In this presentation we compare the old and new production systems and report on the experience of migrating to the new system.

  12. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
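
    The restriction to read-only inspection can be sketched as follows (a schematic of the idea, not the Condor mechanism): a small agent in the job's batch slot accepts only a whitelist of diagnostic commands, so operators can look inside a stuck job without true interactive access. The whitelist and timeout are illustrative.

      import shlex
      import subprocess

      ALLOWED = {"ls", "cat", "ps", "top"}  # hypothetical read-only whitelist

      def run_diagnostic(command_line):
          argv = shlex.split(command_line)
          if not argv or argv[0] not in ALLOWED:
              return f"refused: not whitelisted: {command_line!r}"
          done = subprocess.run(argv, capture_output=True, text=True, timeout=30)
          return done.stdout or done.stderr

      print(run_diagnostic("ls -l"))
      print(run_diagnostic("vi job.log"))  # editors need a terminal: refused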

  13. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  14. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation to support the development of strategies improving aviation safety, identifying precursors to component failure.

  15. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  16. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer-code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  17. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    SciTech Connect

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), that is tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  18. High Performance Computing Facilities for the Next Millennium

    SciTech Connect

    Kramer, William; Verdier, Francesca; Fitzgerald, Keith; Craw, James; Welcome, Tammy.

    1999-10-01

    High Performance Computing facilities face increased pressures to survive and thrive in the next millennium. HPC facilities must combine effective techniques of the past with innovative methods of the future. This tutorial explores the requirements and pressures on HPC centers and presents effective methods now being employed, as well as new approaches, to overcome these challenges. Topics include: the current state of HPC computing and projections; system management that allows MPPs running many large jobs to achieve greater than 90% utilization of CPUs; archive storage issues of improving transfer bandwidth, with practical advice for running terabyte archives; innovations in client services to ensure the 'intellectual resource' is valued by clients as highly as the systems; an introduction to the Effective System Performance Test, a new way to objectively measure and compare not just system performance (e.g., sustained performance of applications) but also system effectiveness (e.g., how many system resources, especially CPU time, can really be used by the workload over time); and the integration of production service with innovation, which is critical to maintaining a robust HPC facility. The tutorial will address how to achieve and maintain this delicate balance, and explores what a facility needs to do to thrive in the new millennium.

  19. Distributed Computing Software Building-Blocks for Ubiquitous Computing Societies

    NASA Astrophysics Data System (ADS)

    Kim, K. H. (Kane)

    The steady approach of advanced nations toward realization of ubiquitous computing societies has given birth to rapidly growing demands for new-generation distributed computing (DC) applications. Consequently, economic and reliable construction of new-generation DC applications is currently a major issue faced by the software technology research community. What is needed is a new-generation DC software engineering technology which is at least multiple times more effective in constructing new-generation DC applications than the currently practiced technologies are. In particular, this author believes that a new-generation building-block (BB), which is much more advanced than the current-generation DC object that is a small extension of the object model embedded in languages C++, Java, and C#, is needed. Such a BB should enable systematic and economic construction of DC applications that are capable of taking critical actions with 100-microsecond-level or even 10-microsecond-level timing accuracy, fault tolerance, and security enforcement while being easily expandable and taking advantage of all sorts of network connectivity. Some directions considered worth pursuing for finding such BBs are discussed.

  20. An Applet-based Anonymous Distributed Computing System.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  1. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that, for a majority of functions, access to general nonsignaling resources boosts the success probability by a factor of two in comparison to classical ones for a large enough number of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  2. LHCbDirac: distributed computing in LHCb

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.

    2012-12-01

    We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software was developed for many years within LHCb only. Nowadays it is generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension containing all the necessary code for handling its specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows and handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping, and streaming), Monte-Carlo simulation, and data replication. Other activities are group and user analysis, data management, resources management and monitoring, data provenance, and accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs, and CLIs. Before a new release is put into production, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, and monitoring for activities and resources.

  3. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes, and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage-area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interaction achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
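
    The inference step can be reduced to a toy rule (our illustration, not the actual SAAB algorithm): derive a storage-area status from the recent history of test outcomes, with hysteresis so a single flaky test does not flip the state. Thresholds are invented.

      def infer_status(history, n_fail=3, n_ok=2):
          # history: list of booleans, True = monitoring test passed, newest last.
          if len(history) >= n_fail and not any(history[-n_fail:]):
              return "blacklisted"      # n_fail consecutive failures
          if len(history) >= n_ok and all(history[-n_ok:]):
              return "online"           # n_ok consecutive passes
          return "degraded"             # mixed recent outcomes: keep watching

      print(infer_status([True, True, False, False, False]))  # blacklisted
      print(infer_status([False, True, True]))                # online
      print(infer_status([True, False, True, False]))         # degraded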

  4. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
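
    One building block, booting a worker VM, can be sketched with the OpenStack SDK (a hedged sketch: the cloud, image, flavor, and network identifiers below are placeholders, and in the real facility the launch is driven from Torque with the booted node configured by Puppet).

      import openstack

      conn = openstack.connect(cloud="research-cloud")  # entry in clouds.yaml

      server = conn.compute.create_server(
          name="tier3-worker-01",
          image_id="BASE_SL_IMAGE_UUID",    # base Scientific Linux image
          flavor_id="FLAVOR_UUID",
          networks=[{"uuid": "NETWORK_UUID"}],
      )
      server = conn.compute.wait_for_server(server)
      print(server.status)  # ACTIVE: ready to join the dynamic Torque cluster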

  5. A Central Processing Facility within a Distributed Data Processing System

    NASA Astrophysics Data System (ADS)

    de Witte, S.; Rispens, S. M.; van Hees, R. M.

    2009-04-01

    In a complex scientific data processing project, where raw satellite data (Level 1) is processed to end products (Level 2), specific expertise may be needed from various groups in different locations. Collaboration between these groups can lead to better results and gives the opportunity to try several different scientific approaches and choose, objectively, the best result. Furthermore, such a distributed data processing system (DDPS) can be used for independent validation before the end products are released. All participating groups need common and specific data products for their processing. This involves many interfaces needing and producing different data products. Without a central storage location, all groups involved have to implement their own checking routines and transformations in order to use the data products. A central processing facility, acting as a single point of interface between the DDPS and the main data provider as well as for all groups within the DDPS, can facilitate collecting all scientific data necessary for high-level processing, transforming the Level 1 input data to an internally agreed DDPS format, checking all data products for integrity, format, and validity, distributing these data products within the DDPS, monitoring the whole data distribution chain, and distributing all end products to the main data provider. A DDPS has been implemented for ESA's gravity mission, GOCE (Gravity field and steady-state Ocean Circulation Explorer). GOCE's DDPS is called the High-level Processing Facility (HPF) and is part of the GOCE Ground Segment, developed under ESA contract by the European GOCE Gravity consortium (EGG-c). The HPF is set up as a distributed facility consisting of several sub-processing centers for scientific pre-processing, orbit determination, gravity field analysis, and validation. The sub-processing facilities are connected through a central node, the Central Processing Facility (CPF). The CPF has been thoroughly tested and is

  6. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge amounts of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.

  7. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis of the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
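
    A small stand-in for this workflow (ours, not DDACE itself): draw a Latin hypercube sample over two uncertain inputs, run a surrogate "application code", and examine input/output correlations. Variable names, bounds, and the surrogate model are invented.

      import numpy as np

      def latin_hypercube(n, bounds, rng):
          # One stratified draw per interval and variable, independently permuted.
          cols = []
          for lo, hi in bounds:
              strata = (rng.permutation(n) + rng.random(n)) / n
              cols.append(lo + strata * (hi - lo))
          return np.stack(cols, axis=1)

      rng = np.random.default_rng(0)
      X = latin_hypercube(50, bounds=[(300.0, 400.0), (0.1, 0.9)], rng=rng)
      y = 2.0 * X[:, 0] - 40.0 * X[:, 1] + rng.normal(0.0, 1.0, 50)  # surrogate code
      print(np.corrcoef(X[:, 0], y)[0, 1], np.corrcoef(X[:, 1], y)[0, 1])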

  9. Uninstrumented assembly airflow testing in the Annular Flow Distribution facility

    SciTech Connect

    Kielpinski, A.L.

    1992-02-01

    During the Emergency Cooling System phase of a postulated large-break loss of coolant accident (ECS-LOCA), air enters the primary loop and is pumped down the reactor assemblies. One of the experiments performed to support the analysis of this accident was the Annular Flow Distribution (AFD) experiment, conducted in a facility built for this purpose at Babcock and Wilcox Alliance Research Center in Alliance, Ohio. As part of this experiment, a large body of airflow data were acquired in a prototypical mockup of the Mark 22 reactor assembly. This assembly was known as the AFD (or the I-AFD here) reference assembly. The I-AFD assembly was fully prototypical, having been manufactured in SRS's production fabrication facility. Similar Mark 22 mockup assemblies were tested in several test facilities in the SRS Heat Transfer Laboratory (HTL). Discrepancies were found. The present report documents further work done to address the discrepancy in airflow measurements between the AFD facility and HTL facilities. The primary purpose of this report is to disseminate the data from the U-AFD test, and to compare these test results to the I-AFD data and the U-AT data. A summary table of the test data and the B&W data transmittal letter are included as an attachment to this report. The full data transmittal volume from B&W (including time plots of the various instruments) is included as an appendix to this report. These data are further analyzed by comparing them to two other HTL tests, namely, SPRIHTE 1 and the Single Assembly Test Stand (SATS).

  10. Concept for a distributed processor computer

    NASA Technical Reports Server (NTRS)

    Bogue, P. N.; Burnett, G. J.; Koczela, L. J.

    1970-01-01

    Future generation computer utilizes cell of single metal oxide semiconductor wafer containing general purpose processor section and small memory of approximately 512 words of 16 bits each. Cells are organized into groups and groups interconnected to form computer.

  11. The Argonne Leadership Computing Facility 2010 annual report.

    SciTech Connect

    Drugan, C.

    2011-05-09

    Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale

  12. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
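
    The structure of such a model can be conveyed with a toy cost function (the rates below are invented placeholders, not NTF values): accumulate fan-on time over the points in a test plan and derive LN2 and electrical consumption from it.

      def simulate_test_plan(n_points, secs_per_point, ln2_kg_per_s, fan_mw):
          fan_on_s = n_points * secs_per_point
          return {
              "fan_on_hours": fan_on_s / 3600.0,
              "ln2_tonnes": ln2_kg_per_s * fan_on_s / 1000.0,
              "electric_mwh": fan_mw * fan_on_s / 3600.0,
          }

      print(simulate_test_plan(n_points=400, secs_per_point=20.0,
                               ln2_kg_per_s=4.0, fan_mw=60.0))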

  13. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  14. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
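
    The claim language of these two patents maps onto a compact tree walk (our reading, not the patent's code): a node joins the class route if it or any descendant participates in the job, reporting the result to its parent, after which the load file is broadcast only along that route.

      class Node:
          def __init__(self, node_id, participating, children=()):
              self.node_id, self.participating, self.children = node_id, participating, children

      def build_class_route(node, route):
          # Post-order walk: tell the parent whether this subtree participates.
          active = node.participating
          for child in node.children:
              active |= build_class_route(child, route)
          if active:
              route.add(node.node_id)
          return active

      def broadcast(node, route, payload):
          if node.node_id in route:
              print(f"node {node.node_id} receives {payload}")
              for child in node.children:
                  broadcast(child, route, payload)

      tree = Node(0, False, (Node(1, True), Node(2, False, (Node(3, True),))))
      route = set()
      build_class_route(tree, route)
      broadcast(tree, route, "job.load")  # node 2 relays because node 3 participates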

  15. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' that allows the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (such as the addition of computers) to the software application without requiring modification of its source code. This concept can also reduce the cost and the obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  16. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  17. A Generalized Management Information System for Computer Facilities at Educational Institutions.

    ERIC Educational Resources Information Center

    Bowman, Patrick Awalt

    The problem of managing computer facilities at educational institutions is examined. User categories are defined, and the interrelations between user requirements and the goals/objectives of the facility are discussed. The factors that influence computer facility operations are also enumerated. In addition, management information…

  18. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.

  19. Designing a model to minimize inequities in hemodialysis facilities distribution.

    PubMed

    Salgado, Teresa M; Moles, Rebekah; Benrimoj, Shalom I; Fernandez-Llimos, Fernando

    2011-11-01

    Portugal has an uneven, city-centered bias in the distribution of hemodialysis centers found to contribute to health care inequities. A model has been developed with the aim of minimizing access inequity through the identification of the best possible localization of new hemodialysis facilities. The model was designed under the assumption that individuals from different geographic areas, ceteris paribus, present the same likelihood of requiring hemodialysis in the future. Distances to reach the closest hemodialysis facility were calculated for every municipality lacking one. Regions were scored by aggregating weights of the "individual burden", defined as the burden for an individual living in a region lacking a hemodialysis center to reach one as often as needed, and the "population burden", defined as the burden for the total population living in such a region. The model revealed that the average travelling distance for inhabitants in municipalities without a hemodialysis center is 32 km and that 145,551 inhabitants (1.5%) live more than 60 min away from a hemodialysis center, while 1,393,770 (13.8%) live 30-60 min away. Multivariate analysis showed that the current localization of hemodialysis facilities is associated with major urban areas. The model developed recommends 12 locations for establishing hemodialysis centers that would result in drastically reduced travel for 34 other municipalities, leaving only six (34,800 people) with over 60 min of travel. The application of this model should facilitate the planning of future hemodialysis services as it takes into consideration the potential impact of travel time for individuals in need of dialysis, as well as the logistic arrangements required to transport all patients with end-stage renal disease. The model is applicable in any country and health care planners can opt to weigh these two elements differently in the model according to their priorities. PMID:22109858
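
    The aggregation can be expressed as a simple weighted score (our simplification of the model; travel times, populations, and weights below are hypothetical). As the abstract notes, planners can weigh the two burden terms differently according to their priorities.

      def score(minutes_to_nearest, population, w_ind=0.5, w_pop=0.5):
          individual = minutes_to_nearest                # burden on one patient
          communal = minutes_to_nearest * population     # burden on the whole town
          return w_ind * individual + w_pop * communal

      candidates = {                                     # municipality: (minutes, people)
          "town_a": (75, 12_000),
          "town_b": (40, 90_000),
          "town_c": (65, 30_000),
      }
      ranked = sorted(candidates, key=lambda m: score(*candidates[m]), reverse=True)
      print(ranked)  # highest-burden locations first: best sites for a new center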

  20. Activities and operations of Argonne's Advanced Computing Research Facility: February 1990 through April 1991

    SciTech Connect

    Pieper, G.W.

    1991-05-01

    This report reviews the activities and operations of the Advanced Computing Research Facility (ACRF) from February 1990 through April 1991. The ACRF is operated by the Mathematics and Computer Science Division at Argonne National Laboratory. The facility's principal objective is to foster research in parallel computing. Toward this objective, the ACRF operates experimental advanced computers, supports investigations in parallel computing, and sponsors technology transfer efforts to industry and academia. 5 refs., 1 fig.

  1. Distributed neural computations for embedded sensor networks

    NASA Astrophysics Data System (ADS)

    Peckens, Courtney A.; Lynch, Jerome P.; Pei, Jin-Song

    2011-04-01

    Wireless sensing technologies have recently emerged as an inexpensive and robust method of data collection in a variety of structural monitoring applications. In comparison with cabled monitoring systems, wireless systems offer low-cost and low-power communication between a network of sensing devices. Wireless sensing networks possess embedded data processing capabilities which allow for data processing directly at the sensor, thereby eliminating the need for the transmission of raw data. In this study, the Volterra/Wiener neural network (VWNN), a powerful modeling tool for nonlinear hysteretic behavior, is decentralized for embedment in a network of wireless sensors so as to take advantage of each sensor's processing capabilities. The VWNN was chosen for modeling nonlinear dynamic systems because its architecture is computationally efficient and allows computational tasks to be decomposed for parallel execution. In the algorithm, each sensor collects its own data and performs a series of calculations. It then shares its resulting calculations with every other sensor in the network, while the other sensors are simultaneously exchanging their information. Because resource conservation is important in embedded sensor design, the data is pruned wherever possible to eliminate excessive communication between sensors. Once a sensor has its required data, it continues its calculations and computes a prediction of the system acceleration. The VWNN is embedded in the computational core of the Narada wireless sensor node for on-line execution. Data generated by a steel framed structure excited by seismic ground motions is used for validation of the embedded VWNN model.
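
    The decentralization pattern described above (measure locally, prune, exchange, then compute a prediction) might look like the following sketch; the one-line "prediction" is a placeholder, not the VWNN itself, and all names are invented.

        # Sketch of the pattern only: collect locally, prune small values
        # to conserve wireless bandwidth, exchange, then predict.
        import random

        class SensorNode:
            def __init__(self, node_id):
                self.node_id = node_id
                self.inbox = {}

            def measure(self):
                # Stand-in for an acceleration sample.
                self.local = random.gauss(0.0, 1.0)

            def outgoing(self, threshold=0.1):
                # Prune: share only values large enough to matter.
                return self.local if abs(self.local) > threshold else None

            def predict(self):
                # Placeholder for this node's share of the VWNN work.
                values = [v for v in self.inbox.values() if v is not None]
                values.append(self.local)
                return sum(values) / len(values)

        nodes = [SensorNode(i) for i in range(4)]
        for n in nodes:
            n.measure()
        for n in nodes:                    # all-to-all exchange
            for peer in nodes:
                if peer is not n:
                    n.inbox[peer.node_id] = peer.outgoing()
        print([round(n.predict(), 3) for n in nodes])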

  2. Distributed computing environment monitoring and user expectations

    SciTech Connect

    Cottrell, R.L.A.; Logg, C.A.

    1995-11-01

    This paper discusses the growing need for distributed system monitoring and compares it to current practices. It then goes on to identify the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), network services and applications, the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service level expectations and also identifies the costs. Finally, the paper briefly discusses the future challenges in network monitoring.

  3. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    SciTech Connect

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C.

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms

  4. A lightweight communication library for distributed computing

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.

  5. SynapSense Wireless Environmental Monitoring System of the RHIC & ATLAS Computing Facility at BNL

    NASA Astrophysics Data System (ADS)

    Casella, K.; Garcia, E.; Hogue, R.; Hollowell, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated in 1990 and grew steadily to its present configuration, with 4 physically isolated IT areas, a maximum capacity of about 1000 racks, and a total peak power consumption of 1.5 MW. In June 2012 a project was initiated with the primary goal to replace several environmental monitoring systems deployed earlier within RACF with a single commercial hardware and software solution by SynapSense Corporation based on wireless sensor groups and proprietary SynapSense™ MapSense™ software that offers a unified solution for monitoring the temperature and humidity within the rack/CRAC units as well as pressure distribution underneath the raised floor across the entire facility. The deployment was completed successfully in 2013. The new system also supports a set of additional features that are not currently implemented within RACF but may be deployed in the future, such as capacity planning based on measurements of total heat load, power consumption monitoring and control, CRAC unit power consumption optimization based on feedback from the temperature measurements, and overall power usage efficiency estimation.

  6. Activities and operations of the Advanced Computing Research Facility, October 1986-October 1987

    SciTech Connect

    Pieper, G.W.

    1987-01-01

    This paper contains a description of the work being carried out at the advanced computing research facility at Argonne National Laboratory. Topics covered are upgrading of computers, networking changes, algorithms, parallel programming, programming languages, and user training. (LSP)

  7. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  8. Space power distribution system technology. Volume 3: Test facility design

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Cannady, M. D.; Cassinelli, J. E.; Farber, B. F.; Lurie, C.; Fleck, G. W.; Lepisto, J. W.; Messner, A.; Ritterman, P. F.

    1983-01-01

    The AMPS test facility is a major tool in the attainment of more economical space power. The ultimate goals of the test facility, its primary functional requirements and conceptual design, and the major equipment it contains are discussed.

  9. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PC) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
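
    The bottom-up approach parallelizes naturally because appliance models are independent within a time step; a minimal sketch, assuming a toy thermostatic load model rather than the PDSS appliance models.

        # Sketch: appliance-level loads are independent within a time
        # step, so they can be computed in parallel and summed into a
        # feeder load. The one-line thermal model is a toy stand-in.
        from multiprocessing import Pool

        def appliance_load_kw(params):
            setpoint_c, outdoor_c, ua = params
            # Toy thermostatic model: load grows with the indoor/outdoor
            # temperature gap.
            return max(0.0, ua * (setpoint_c - outdoor_c) / 1000.0)

        if __name__ == "__main__":
            appliances = [(20.0, -5.0, 300.0)] * 100_000  # a large feeder
            with Pool() as pool:
                loads = pool.map(appliance_load_kw, appliances,
                                 chunksize=4096)
            print(f"feeder load: {sum(loads):.1f} kW")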

  10. Activities and operations of the Advanced Computing Research Facility, January 1989--January 1990

    SciTech Connect

    Pieper, G.W.

    1990-02-01

    This report reviews the activities and operations of the Advanced Computing Research Facility (ACRF) for the period January 1, 1989, through January 31, 1990. The ACRF is operated by the Mathematics and Computer Science Division at Argonne National Laboratory. The facility's principal objective is to foster research in parallel computing. Toward this objective, the ACRF continues to operate experimental advanced computers and to sponsor new technology transfer efforts and new research projects. 4 refs., 8 figs.

  11. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
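
    A compact sketch of the hybrid model, with invented class and message names: a server fans telemetry out to subscribers (the client-server path), while any node can also publish ground-synthesized values directly to its peers (the peer-to-peer path).

        # Sketch: client-server fan-out plus peer-to-peer publishing.
        class Node:
            def __init__(self, name):
                self.name = name
                self.peers = []
                self.values = {}

            def receive(self, key, value):
                self.values[key] = value

            def publish(self, key, value):
                for p in self.peers:            # peer-to-peer path
                    p.receive(key, value)

        class TelemetryServer:
            def __init__(self):
                self.subscribers = {}

            def subscribe(self, node, key):
                self.subscribers.setdefault(key, []).append(node)

            def distribute(self, key, value):
                for node in self.subscribers.get(key, []):
                    node.receive(key, value)    # client-server path

        server = TelemetryServer()
        prop, gnc = Node("prop"), Node("gnc")
        prop.peers.append(gnc)
        server.subscribe(prop, "cabin_pressure")
        server.distribute("cabin_pressure", 101.3)  # telemetered datum
        prop.publish("leak_rate_est", 0.02)         # synthesized datum
        print(gnc.values)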

  12. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  13. Improvement of the Computing - Related Procurement Process at a Government Research Facility

    SciTech Connect

    Gittins, C.

    2000-04-03

    The purpose of the project was to develop, implement, and market value-added services through the Computing Resource Center in an effort to streamline computing-related procurement processes across the Lawrence Livermore National Laboratory (LLNL). The power of the project was in focusing attention on the value of centralizing the delivery of computer-related products and services to the institution. The project required a plan and marketing strategy that would drive attention to the facility's value-added offerings and services. A significant outcome of the project has been the change in the CRC internal organization. The realignment of internal policies and practices, together with additions to its product and service offerings, has brought an increased focus to the facility. This movement from a small, fractious organization into one that is still small yet well organized and focused on its mission and goals has been a significant transition. Indicative of this turnaround was the sharing of information. One-on-one and small group meetings, together with statistics showing work activity, were invaluable in gaining support for more equitable workload distribution and the removal of blame and finger-pointing. Sharing monthly reports on sales and operating costs also had a positive impact.

  14. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V′₀₁(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V₁(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal Vₙ(t) and producing a modified change-of-state signal V′ₙ(t) (n = 1, …, N) having a fundamental frequency component that is substantially proportional to V′₀₁(t − θₙ(t)), with a cumulative phase shift θₙ(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  15. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of loosely- and tightly-coupled workstation clusters is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
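
    The task-allocation rule mentioned above (allocation by grid size and processor speed) can be approximated with a greedy heuristic such as the following sketch; it is an illustration under assumed inputs, not the paper's exact scheme.

        # Sketch: assign grid zones so each worker's speed-adjusted
        # load stays balanced; largest zones are placed first.
        import heapq

        def allocate(zone_sizes, proc_speeds):
            heap = [(0.0, w) for w in range(len(proc_speeds))]
            heapq.heapify(heap)
            assignment = {w: [] for w in range(len(proc_speeds))}
            for zone, size in sorted(enumerate(zone_sizes),
                                     key=lambda z: -z[1]):
                load, w = heapq.heappop(heap)   # least-loaded worker
                assignment[w].append(zone)
                heapq.heappush(heap, (load + size / proc_speeds[w], w))
            return assignment

        # Five zones, one slow and one 2x-faster workstation.
        print(allocate([90, 70, 40, 40, 20], proc_speeds=[1.0, 2.0]))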

  16. Distributed sensor networks with collective computation

    SciTech Connect

    Lanman, D. R.

    2001-01-01

    Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.
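
    A minimal sketch of the diffusion mechanism described above: each node merges received packets into its local "world view" and rebroadcasts the merged view, so information spreads one hop per time step. Names and data are illustrative.

        # Sketch: world views diffuse across a 4-node line network.
        def diffuse(adjacency, world_views, steps):
            for _ in range(steps):
                outgoing = {n: dict(v) for n, v in world_views.items()}
                for node, neighbors in adjacency.items():
                    for nb in neighbors:
                        world_views[node].update(outgoing[nb])
            return world_views

        adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        # Node i initially knows only its own time-of-arrival measurement.
        views = {i: {i: f"toa_{i}"} for i in adjacency}
        views = diffuse(adjacency, views, steps=3)
        # After 3 steps the monitored end node holds every measurement.
        print(sorted(views[0]))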

  17. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  18. Facility optimization to improve activation rate distributions during IVNAA

    PubMed Central

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-01-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability, with a CV value of about 10%, and to change the collimator role to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because it increased the secondary photon flux received by the detectors, it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of activation rate in the body. PMID:23386375

  19. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
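
    A minimal sketch of the queue-management idea, assuming an invented table layout: the relational database mentioned above is here a SQLite table from which volunteer nodes claim small units of work.

        # Sketch: a task queue in a relational database; table and
        # column names are invented for illustration.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE tasks (
            id INTEGER PRIMARY KEY,
            subdomain TEXT,              -- small spatial piece of a model
            status TEXT DEFAULT 'queued')""")
        con.executemany("INSERT INTO tasks (subdomain) VALUES (?)",
                        [(f"cell_{i}",) for i in range(6)])

        def claim_next_task(con):
            # A volunteer node asks for one small unit of work.
            row = con.execute("SELECT id, subdomain FROM tasks "
                              "WHERE status='queued' LIMIT 1").fetchone()
            if row:
                con.execute("UPDATE tasks SET status='running' "
                            "WHERE id=?", (row[0],))
            return row

        print(claim_next_task(con))      # e.g. (1, 'cell_0')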

  20. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most current CAL facilities are not friendly to visually impaired users. People with visual impairment also do not normally have access to…

  1. UTILIZATION OF COMPUTER FACILITIES IN THE MATHEMATICS AND BUSINESS CURRICULUM IN A LARGE SUBURBAN HIGH SCHOOL.

    ERIC Educational Resources Information Center

    RENO, MARTIN; AND OTHERS

    A STUDY WAS UNDERTAKEN TO EXPLORE IN A QUALITATIVE WAY THE POSSIBLE UTILIZATION OF COMPUTER AND DATA PROCESSING METHODS IN HIGH SCHOOL EDUCATION. OBJECTIVES WERE--(1) TO ESTABLISH A WORKING RELATIONSHIP WITH A COMPUTER FACILITY SO THAT ABLE STUDENTS AND THEIR TEACHERS WOULD HAVE ACCESS TO THE FACILITIES, (2) TO DEVELOP A UNIT FOR THE UTILIZATION…

  2. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  3. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  4. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-01

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693

  5. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  6. Computation and Optimization of Dose Distributions for Rotational Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fox, Timothy Harold

    1994-01-01

    The stereotactic radiosurgery technique presented in this work is the patient rotator method, which rotates the patient in a sitting position with a stereotactic head frame attached to the skull while collimated non-coplanar radiation beams from a 6 MV medical linear accelerator are delivered to the target point. The hypothesis of this dissertation is that accurate, three-dimensional dose distributions can be computed and optimized for the patient rotator method used in stereotactic radiosurgery. This dissertation presents research results in three areas related to computing and optimizing dose distributions for the patient rotator method. A three-dimensional dose model was developed to calculate the dose at any point in the cerebral cortex using a circular and adjustable collimator system and the geometry of the radiation beam with respect to the target point. Compared to experimental measurements, the computed dose distributions had an average maximum deviation of <0.7 mm for relative isodose distributions greater than 50%. A system was developed to qualitatively and quantitatively visualize the computed dose distributions with patient anatomy. A registration method was presented for transforming each dataset to a common reference system. A method for computing the intersections of anatomical contours' boundaries was developed to calculate dose-volume information. The system efficiently and accurately reduced the large computed, volumetric sets of dose data, medical images, and anatomical contours to manageable images and graphs. A computer-aided optimization method was developed for rigorously selecting beam angles and weights to minimize the dose to normal tissue. Linear programming was applied as the optimization method. The computed optimal beam angles and weights for a defined objective function and dose constraints exhibited a superior dose distribution compared to a standard plan. The developed dose model, qualitative and quantitative visualization

  7. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system for using the World Wide Web to distribute computational tasks to multiple hosts on the Web that is written in Java programming language. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  8. Computational determination of absorbed dose distributions from gamma ray sources

    NASA Astrophysics Data System (ADS)

    Zhou, Chuanyu; Inanc, Feyzi

    2001-04-01

    A biomedical procedure known as brachytherapy involves insertion of many radioactive seeds into a sick gland for eliminating sick tissue. For such implementations, the spatial distribution of absorbed dose is very important. A simulation tool has been developed to determine the spatial distribution of absorbed dose in heterogeneous environments where the gamma ray source consists of many small internal radiation emitters. The computation is based on the integral transport method and the computations are done in a parallel fashion. Preliminary results involving 137Cs and 125I sources surrounded by water, and comparison of the results to the experimental and computational data available in the literature, are presented.

  9. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  10. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large scale production flow solver program. A coarse grained parallelization based on clustering of discretization grids combined with partitioning of large grids for load balancing is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment is presented. We also give a comparative performance assessment of computation and communication times on both the tightly and loosely-coupled machines.

  11. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.

    2011-12-01

    LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of possible failures or inefficiencies in the involved components. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services, as well as monitoring LHC computing activities, are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including the following up of jobs, transfers, and site and service availabilities. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.

  12. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  13. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  14. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  15. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  16. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  17. 32 CFR 766.8 - Procedure for review, approval, execution and distribution of aviation facility licenses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander,...

  18. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply...

  19. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply...

  20. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply...

  1. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and... SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement Programs § 101-26.503 Multiple award schedule purchases made by GSA supply distribution facilities. GSA supply...

  2. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  3. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  4. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences and considering policy issues related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  5. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on the public key infrastructure along with proxy certificates which are used for rights delegation. In practice, a contradiction between the limited lifetime of the proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual for each request instead of proxy certificates. Our approach avoids the use of proxy certificates altogether. Thus the security infrastructure of a distributed computing system becomes easier to develop, support, and use.
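
    The paper's exact scheme is not reproduced here, but the idea of a non-expiring, per-request credential can be illustrated with a standard HMAC construction; the key handling and all names are assumptions for the sketch.

        # Sketch: a per-request tag that never expires, so long-running
        # requests are not invalidated the way short-lived proxy
        # certificates can be.
        import hashlib
        import hmac
        import secrets

        shared_key = secrets.token_bytes(32)   # established once per user

        def sign_request(key, request_body: bytes) -> str:
            # The tag is individual for each request.
            return hmac.new(key, request_body, hashlib.sha256).hexdigest()

        def verify_request(key, request_body: bytes, tag: str) -> bool:
            return hmac.compare_digest(sign_request(key, request_body), tag)

        body = b"submit_job --site=ANY --cpus=64"
        tag = sign_request(shared_key, body)
        assert verify_request(shared_key, body, tag)
        print("request accepted")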

  6. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.

  7. Exact score distribution computation for ontological similarity searches

    PubMed Central

    2011-01-01

    Background Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., finding functionally related proteins with the Gene Ontology or phenotypically similar diseases with the Human Phenotype Ontology (HPO). We have recently shown that the performance of semantic similarity searches can be improved by ranking results according to the probability of obtaining a given score at random rather than by the scores themselves. However, to date, there are no algorithms for computing the exact distribution of semantic similarity scores, which is necessary for computing the exact P-value of a given score. Results In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik's definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the HPO. It is shown that exact P-value calculation improves clinical diagnosis using the HPO compared to approaches based on sampling. Conclusions The new algorithm enables for the first time exact P-value calculation via exact score distribution computation for ontology similarity searches. The approach is applicable to any ontology for which the annotation-propagation rule holds and can improve any bioinformatic method that makes only use of the raw similarity scores. The algorithm was implemented in Java, supports any ontology in OBO format, and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/ PMID:22078312
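
    A minimal sketch of the P-value step, assuming the exact null distribution of scores has already been computed; the toy distribution below stands in for the ontology-derived one.

        # Sketch: the exact P-value of an observed similarity score is
        # the null probability of an equal or higher score.
        from fractions import Fraction

        # score -> exact null probability (probabilities sum to 1)
        null_dist = {0: Fraction(5, 10), 1: Fraction(3, 10),
                     2: Fraction(1, 10), 3: Fraction(1, 10)}

        def exact_p_value(observed_score):
            return sum(p for s, p in null_dist.items()
                       if s >= observed_score)

        # Rank search results by this P-value, not by the raw score.
        print(exact_p_value(2))          # Fraction(1, 5)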

  8. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    In this presentation the experiences of the LHC experiments using grid computing were presented, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  9. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  10. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication-based taxonomy with the new taxonomy to illustrate how the latter better matches the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  11. An optimization model for energy generation and distribution in a dynamic facility

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
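
    A minimal sketch of the linear-programming core, with illustrative resources, costs, and capacities; the actual model also covers storage, element sizing, and the mixed-integer build-time decisions mentioned above.

        # Sketch: meet an energy demand from competing resources at
        # minimum cost. Requires SciPy; all numbers are illustrative.
        from scipy.optimize import linprog

        cost = [0.12, 0.08, 0.20]      # $/kWh: grid, solar, diesel
        demand_kwh = 500.0
        capacity = [(0, 400), (0, 250), (0, 300)]

        # Minimize cost subject to total generation >= demand,
        # written as -sum(x) <= -demand for linprog's convention.
        res = linprog(c=cost,
                      A_ub=[[-1.0, -1.0, -1.0]],
                      b_ub=[-demand_kwh],
                      bounds=capacity,
                      method="highs")
        print(res.x, res.fun)          # optimal dispatch, minimum cost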

  12. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
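
    A minimal sketch of the split-token structure, with invented field names: the small moving first portion travels between computers and records where the resident second portion (the bulk data) lives.

        # Sketch: the moving portion carries work plus the location of
        # the resident portion; only the small part travels the network.
        from dataclasses import dataclass

        @dataclass
        class ResidentPortion:
            data: bytes                # bulk data stays in local memory

        @dataclass
        class MovingPortion:
            function: str              # work for the receiving computer
            home_computer: int         # where the resident portion lives
            resident_key: str          # how to find it there

        memories = {0: {}, 1: {}}      # per-computer memory
        memories[0]["img42"] = ResidentPortion(data=b"\x00" * 1024)

        token = MovingPortion(function="compress", home_computer=0,
                              resident_key="img42")
        # Computer 1 receives only the moving portion and fetches the
        # resident data over the mesh network only when needed.
        resident = memories[token.home_computer][token.resident_key]
        print(token.function, len(resident.data))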

  13. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  14. Data distribution in the NBS Automated Manufacturing Research Facility

    NASA Technical Reports Server (NTRS)

    Mitchell, M. J.; Barkmeyer, E. J.

    1984-01-01

    The Automated Manufacturing Research Facility (AMRF) at the National Bureau of Standards was constructed as a testbed for research in the automation of small-batch manufacturing, in particular for systems producing machined parts in lots of 1000 or less. Potential standard interfaces between existing and future components of the small-batch factory floor are identified, with metrology in an automated environment delivering proven measurement techniques and standard reference materials to industry. Commercially available products are used to construct the facility to expedite transfer of research results into the private sector.

  15. MPWide: Light-weight communication library for distributed computing

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon

    2012-12-01

    MPWide is a light-weight communication library for distributed computing. It is specifically developed to allow message passing over long-distance networks using path-specific optimizations. An early version of MPWide was used in the Gravitational Billion Body Project to allow simulations across multiple supercomputers.
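
    MPWide itself is a C++ library; as a rough illustration of its central idea, striping one logical message across several parallel TCP streams tuned for a high-latency path, here is a minimal Python sketch. The function name and buffer size are assumptions, not MPWide's API.

```python
import socket
import threading

def send_striped(host, base_port, data, nstreams=4):
    """Split `data` across nstreams TCP connections, one thread per stream."""
    chunk = -(-len(data) // nstreams)  # ceiling division
    def worker(i):
        with socket.create_connection((host, base_port + i)) as s:
            # A larger send buffer helps on high bandwidth-delay-product paths.
            s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 22)
            s.sendall(data[i * chunk:(i + 1) * chunk])
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(nstreams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```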

  16. SAGA: A standardized access layer to heterogeneous Distributed Computing Infrastructure

    NASA Astrophysics Data System (ADS)

    Merzky, Andre; Weidner, Ole; Jha, Shantenu

    2015-09-01

    Distributed Computing Infrastructure is characterized by interfaces that are heterogeneous, both syntactically and semantically. SAGA represents the most comprehensive community effort to date to address this heterogeneity by defining a simple, uniform access layer. In this paper, we describe the basic concepts underpinning its design and development. We also discuss RADICAL-SAGA, the most widely used implementation of SAGA.
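
    The flavor of the access layer can be seen in a short job-submission sketch using the radical.saga Python bindings; the call names follow the project's documentation, but the host URL and job details are illustrative assumptions.

```python
import radical.saga as rs

# The same code works against different backends by changing only the URL.
js = rs.job.Service("ssh://remote.host.example")

jd = rs.job.Description()
jd.executable = "/bin/hostname"
jd.output     = "job.out"

job = js.create_job(jd)
job.run()
job.wait()
print("job finished with state:", job.state)
```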

  17. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.
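
    For orientation, a standard finite-dimensional form of the Chandrasekhar system (for the LQR problem with cost \int_0^{t_f} (x^\top C^\top C\,x + u^\top R\,u)\,dt, integrated backward from t_f, following e.g. Kailath's formulation) is:

```latex
\dot{K}(t) = -R^{-1} B^{\top} L(t)^{\top} L(t), \qquad K(t_f) = 0, \\
\dot{L}(t) = -L(t)\,\bigl(A - B\,K(t)\bigr), \qquad L(t_f) = C.
```

    When C has far fewer rows than the state dimension, L is a small factor, which is what makes the scheme attractive for approximating feedback gain operators of distributed parameter systems.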

  18. Distribution and Efficacy of Aerosol Insecticides in Commercial Facilities

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerosol insecticides are being viewed as a potential alternative to fumigations in commercial milling, processing, and storage facilities. Although there are a number of insecticides and delivery systems available for use, there is little published data regarding efficacy and performance in actual ...

  19. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer-based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. Shifting to more computation per node for every communication (performing more complex tasks on each node) may yield the desired increase in throughput.

  20. A fault detection service for wide area distributed computations.

    SciTech Connect

    Stelling, P.

    1998-06-09

    The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and serving as part of the NetSolve network-enabled numerical solver.
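
    As a flavor of the unreliable-failure-detector technique, a minimal heartbeat monitor might look like the sketch below; the names and the single-timeout policy are illustrative assumptions. The timeout is exactly the knob described above: a larger value lowers the false positive rate at the cost of slower reporting.

```python
import time

class HeartbeatMonitor:
    """Suspect a component once no heartbeat arrives within timeout_s."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s   # larger: fewer false positives, slower detection
        self.last_seen = {}

    def heartbeat(self, component_id):
        self.last_seen[component_id] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return [c for c, t in self.last_seen.items() if now - t > self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=3.0)
monitor.heartbeat("compute-node-7")
print(monitor.suspected())   # [] until 3 s pass without another heartbeat
```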

  1. Effects of wind-energy facilities on grassland bird distributions

    USGS Publications Warehouse

    Shaffer, Jill A.; Buhl, Deb

    2016-01-01

    The contribution of renewable energy to meet worldwide demand continues to grow. Wind energy is one of the fastest growing renewable sectors, but new wind facilities are often placed in prime wildlife habitat. Long-term studies that incorporate a rigorous statistical design to evaluate the effects of wind facilities on wildlife are rare. We conducted a before-after-control-impact (BACI) assessment to determine if wind facilities placed in native mixed-grass prairies displaced breeding grassland birds. During 2003–2012, we monitored changes in bird density in 3 study areas in North Dakota and South Dakota (U.S.A.). We examined whether displacement or attraction occurred 1 year after construction (immediate effect) and the average displacement or attraction 2–5 years after construction (delayed effect). We tested for these effects overall and within distance bands of 100, 200, 300, and >300 m from turbines. We observed displacement for 7 of 9 species. One species was unaffected by wind facilities and one species exhibited attraction. Displacement and attraction generally occurred within 100 m and often extended up to 300 m. In a few instances, displacement extended beyond 300 m. Displacement and attraction occurred 1 year after construction and persisted at least 5 years. Our research provides a framework for applying a BACI design to displacement studies and highlights the erroneous conclusions that can be made without the benefit of adopting such a design. More broadly, species-specific behaviors can be used to inform management decisions about turbine placement and the potential impact to individual species. Additionally, the avoidance distance metrics we estimated can facilitate future development of models evaluating impacts of wind facilities under differing land-use scenarios.

  2. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, Jaeyoung; Dongarra, J.; Walker, D.W.

    1994-12-31

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. We assume that the matrix is distributed over a P × Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.

  3. Status Of The National Ignition Campaign And National Ignition Facility Integrated Computer Control System

    SciTech Connect

    Lagin, L; Brunton, G; Carey, R; Demaret, R; Fisher, J; Fishler, B; Ludwigsen, P; Marshall, C; Reed, R; Shelton, R; Townsend, S

    2011-03-18

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility that contains a 192-beam, 1.8-Megajoule, 500-Terawatt, ultraviolet laser system together with a 10-meter diameter target chamber with room for multiple experimental diagnostics. NIF is the world's largest and most energetic laser experimental system, providing a scientific center to study inertial confinement fusion (ICF) and matter at extreme energy densities and pressures. NIF's laser beams are designed to compress fusion targets to conditions required for thermonuclear burn. NIF is operated by the Integrated Computer Control System (ICCS) in an object-oriented, CORBA-based system distributed among over 1800 front-end processors, embedded controllers and supervisory servers. In the fall of 2010, a set of experiments began with deuterium and tritium filled targets as part of the National Ignition Campaign (NIC). At present, all 192 laser beams routinely fire to target chamber center to conduct fusion and high energy density experiments. During the past year, the control system was expanded to include automation of the cryogenic target system, and over 20 diagnostic systems were deployed and utilized to support fusion experiments. This talk discusses the current status of the NIC and the plan for controls and information systems to support these experiments on the path to ignition.

  4. Using high performance interconnects in a distributed computing and mass storage environment

    SciTech Connect

    Ernst, M.

    1994-12-31

    Detector collaborations of the HERA experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include distributed mass storage management systems in a distributed and heterogeneous computing environment. At the very center of a distributed system, including tens of CPUs and network-attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA experiments at DESY, including symmetric multiprocessor machines (84 processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focusing on the high performance interconnect technology, details will be provided about the HIPPI-based 'backplane' configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces.

  5. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
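
    The effect described is Amdahl's law: with sequential fraction s, the speedup on N processors is 1/(s + (1 - s)/N). A quick numeric check makes the point:

```python
def amdahl_speedup(seq_fraction, n_procs):
    return 1.0 / (seq_fraction + (1.0 - seq_fraction) / n_procs)

for s in (0.05, 0.25, 0.50):
    row = ", ".join(f"N={n}: {amdahl_speedup(s, n):.1f}x" for n in (8, 64, 512))
    print(f"sequential fraction {s:.0%}: {row}")
# Even 25% sequential code caps the speedup below 4x, regardless of processor count.
```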

  6. AGIS: Evolution of Distributed Computing information system for ATLAS

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  7. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.

  8. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGESBeta

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; et al

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.

  9. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    SciTech Connect

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.

  10. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device-independent fashion and load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
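
    As a rough illustration of the manager-worker strategy evaluated in task (2), here is a sketch using Python's multiprocessing in place of PVM; the block solver is a purely illustrative stand-in. Handing out one block at a time to whichever worker frees up first also gives a simple form of the dynamic load balancing studied in task (3).

```python
from multiprocessing import Pool

def solve_block(block_id):
    # Stand-in for one flow-solver iteration on one grid block.
    return block_id, sum(i * i for i in range(10_000 + block_id))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # chunksize=1: each worker requests a new block as soon as it is free.
        for block_id, _result in pool.imap_unordered(solve_block, range(32), chunksize=1):
            print(f"block {block_id} done")
```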

  11. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. PMID:22823593
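
    The batch structure is embarrassingly parallel. The system described uses the Java Parallel Processing Framework driving MODFLOW; the Python stand-in below just shows the shape of distributing realizations across workers (all names illustrative):

```python
from concurrent.futures import ProcessPoolExecutor
import random

def run_realization(seed):
    rng = random.Random(seed)    # one stochastic parameter field per seed
    # Stand-in for one MODFLOW run on one realization.
    return sum(rng.random() for _ in range(100_000))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=10) as ex:
        results = list(ex.map(run_realization, range(500)))
    print("mean over 500 realizations:", sum(results) / len(results))
```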

  12. EFFECTS OF MIXING AND AGING ON WATER QUALITY IN DISTRIBUTION SYSTEM STORAGE FACILITIES

    EPA Science Inventory

    Aging of water in distribution system storage facilities can lead to deterioration of the water quality due to loss of disinfectant residual and bacterial regrowth. Facilities should be operated to ensure that the age of the water is not excessive, taking into account the quality...

  13. Survey of Computer Facilities in Minnesota and North Dakota.

    ERIC Educational Resources Information Center

    MacGregor, Donald

    In order to attain a better understanding of the data processing manpower needs of business and industry, a survey instrument was designed and mailed to 570 known and possible computer installations in the Minnesota/North Dakota area. The survey was conducted during the spring of 1975, and concentrated on the kinds of equipment and computer…

  14. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... the Federal Register of October 29, 2007 (72 FR 61171), FDA announced the availability of the draft... HUMAN SERVICES Food and Drug Administration Guidance for Industry: Blood Establishment Computer System... ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility'' dated...

  15. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  16. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in the context of the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed, with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  17. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  18. Radar data processing using a distributed computational system

    NASA Astrophysics Data System (ADS)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  19. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  20. BRaTS@Home and BOINC Distributed Computing for Parallel Computation

    NASA Astrophysics Data System (ADS)

    Coss, David Raymond; Flores, R.

    2008-09-01

    Utilizing Internet connectivity, the Berkeley Open Infrastructure for Network Computing (BOINC) provides parallel computing power without the expense of purchasing a computer cluster. BOINC, written in C++, is an open source system, acting as an intermediary between the project server and the BOINC client on the volunteer's computer. By using the idle time of computers of volunteer participants, BOINC allows scientists to build a computer cluster at the price of one server. As an example of such computational capabilities, I have developed BRaTS@Home, standing for BRaTS Ray Trace Simulation, using the BOINC distributed computing system to perform gravitational lensing ray-tracing simulations. Though BRaTS@Home is only one of many projects, 182 users in 26 different countries participate in the project. From June 2007 to April 2008, 795 computers have connected to the project server, providing an average computing power of 1.1 billion floating point operations per second (FLOPS), while the entire BOINC system averages over 1000 teraFLOPS, as of April 2008. Preliminary results of the project's gravitational ray-tracing simulations will be shown.

  1. Semiquantum key distribution with secure delegated quantum computation

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.
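
    For intuition about how little quantum ability the "classical" party needs, the toy simulation below follows a Boyer-Kenigsberg-Mor style semiquantum protocol (measure in the computational basis and resend, or reflect); it illustrates sifting in the semiquantum setting generally, not the delegated-computation protocol proposed here.

```python
import random

def sqkd_round():
    # Alice (quantum) prepares a qubit in a random basis: Z (computational) or X.
    alice_basis = random.choice("ZX")
    alice_bit = random.randrange(2)
    # Bob (classical) either MEASUREs in the Z basis and resends, or REFLECTs.
    if random.random() < 0.5:
        # Z-prepared qubits yield Alice's bit; X-prepared qubits yield a random bit.
        bob_bit = alice_bit if alice_basis == "Z" else random.randrange(2)
        return alice_basis, alice_bit, "MEASURE", bob_bit
    return alice_basis, alice_bit, "REFLECT", None

key_a, key_b = [], []
for _ in range(1000):
    basis, a_bit, action, b_bit = sqkd_round()
    if basis == "Z" and action == "MEASURE":   # sifting: both used the Z basis
        key_a.append(a_bit)
        key_b.append(b_bit)
print("sifted key length:", len(key_a), "- keys agree:", key_a == key_b)
```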

  2. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  3. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  4. Elemental: a new framework for distributed memory dense matrix computations.

    SciTech Connect

    Romero, N.; Poulson, J.; Marker, B.; Hammond, J.; Van de Geijn, R.

    2012-02-14

    Parallelizing dense matrix computations to distributed memory architectures is a well-studied subject and generally considered to be among the best understood domains of parallel computing. Two packages, developed in the mid 1990s, still enjoy regular use: ScaLAPACK and PLAPACK. With the advent of many-core architectures, which may very well take the shape of distributed memory architectures within a single processor, these packages must be revisited since the traditional MPI-based approaches will likely need to be extended. Thus, this is a good time to review lessons learned since the introduction of these two packages and to propose a simple yet effective alternative. Preliminary performance results show the new solution achieves competitive, if not superior, performance on large clusters.

  5. Accuracy of subsurface temperature distributions computed from pulsed photothermal radiometry.

    PubMed

    Smithies, D J; Milner, T E; Tanenbaum, B S; Goodman, D M; Nelson, J S

    1998-09-01

    Pulsed photothermal radiometry (PPTR) is a non-contact method for determining the temperature increase in subsurface chromophore layers immediately following pulsed laser irradiation. In this paper the inherent limitations of PPTR are identified. A time record of infrared emission from a test material due to laser heating of a subsurface chromophore layer is calculated and used as input data for a non-negatively constrained conjugate gradient algorithm. Position and magnitude of temperature increase in a model chromophore layer immediately following pulsed laser irradiation are computed. Differences between simulated and computed temperature increase are reported as a function of thickness, depth and signal-to-noise ratio (SNR). The average depth of the chromophore layer and integral of temperature increase in the test material are accurately predicted by the algorithm. When the thickness/depth ratio is less than 25%, the computed peak temperature increase is always significantly less than the true value. Moreover, the computed thickness of the chromophore layer is much larger than the true value. The accuracy of the computed subsurface temperature distribution is investigated with the singular value decomposition of the kernel matrix. The relatively small number of right singular vectors that may be used (8% of the rank of the kernel matrix) to represent the simulated temperature increase in the test material limits the accuracy of PPTR. We show that relative error between simulated and computed temperature increase is essentially constant for a particular thickness/depth ratio. PMID:9755938
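
    The SVD-limited accuracy argument can be reproduced on a synthetic smoothing problem; the kernel, layer profile, noise level, and cutoff below are stand-ins, not the PPTR model. Keeping only the well-conditioned singular modes stabilizes the inversion but, as in the paper, bounds how sharp a reconstructed layer can be.

```python
import numpy as np

n = 100
z = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(z[:, None] - z[None, :]) / 0.05)    # smoothing kernel (assumed)
t_true = np.exp(-((z - 0.4) / 0.05) ** 2)              # subsurface temperature rise
e = K @ t_true + 1e-3 * np.random.default_rng(0).normal(size=n)  # noisy emission signal

U, s, Vt = np.linalg.svd(K)
k = 8                                # keep only singular modes above the noise floor
t_rec = Vt[:k].T @ ((U[:, :k].T @ e) / s[:k])
print("relative error:", np.linalg.norm(t_rec - t_true) / np.linalg.norm(t_true))
```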

  6. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  7. Multi-VO support in IHEP's distributed computing environment

    NASA Astrophysics Data System (ADS)

    Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources available to their collaborations. In order to minimize manpower and hardware cost, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This provides DIRAC as a service for the community of those experiments. To support multiple VOs, the system architecture of BESDIRAC was adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered to ease system administration and VO-related resource usage accounting.
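
    For context, user-side submission through DIRAC's Python API typically looks like the sketch below (following the standard DIRAC user guide; the script and site names are hypothetical, and a configured client with a valid VO proxy is assumed):

```python
from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

j = Job()
j.setName("multi-vo-analysis")
j.setExecutable("run_analysis.sh")   # hypothetical user script
j.setDestination("CLOUD.IHEP.cn")    # hypothetical site name

result = Dirac().submitJob(j)
print("submission result:", result)
```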

  8. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. E. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.
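
    One classic instance of exploiting algorithmic structure instead of generic checkpointing is algorithm-based fault tolerance for matrix multiplication (the Huang-Abraham checksum scheme), sketched below as an illustration of the approach, not this project's code: row and column checksums of the inputs are preserved by the multiplication, so a corrupted element of the product can be detected and located afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
Ac = np.vstack([A, A.sum(axis=0)])                  # append column-checksum row to A
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-checksum column to B
C = Ac @ Br                                         # checksums ride along with the product

C[1, 2] += 5.0                                      # inject a fault into one element
row_bad = np.argmax(np.abs(C[:-1, :-1].sum(axis=1) - C[:-1, -1]))
col_bad = np.argmax(np.abs(C[:-1, :-1].sum(axis=0) - C[-1, :-1]))
print("fault located at", (row_bad, col_bad))       # -> (1, 2)
```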

  9. Distributing Data from Desktop to Hand-Held Computers

    NASA Technical Reports Server (NTRS)

    Elmore, Jason L.

    2005-01-01

    A system of server and client software formats and redistributes data from commercially available desktop computers to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then a wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data are made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to

  10. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, published a series of papers in its Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting data, approximately five petabytes in total, are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data, with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that the data analysis capabilities currently in use will be inadequate for the necessary science to be done with AR6 data: the data will just be too big. A major paradigm shift must occur, from downloading data to local systems to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  11. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of the iDASH 2015 research competition to secure real-life genomic computations and shows the feasibility of securely computing with genomic data in practice. PMID:26733307
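
    The secret-sharing primitive behind such protocols can be sketched in a few lines: each genotype is split into additive shares modulo a prime, parties aggregate their shares locally, and only the aggregate is reconstructed, so no single party sees an individual record. The field size and three-party split are assumptions for illustration.

```python
import random

P = 2**61 - 1   # prime modulus of the share field (assumed)

def share(x, n=3):
    """Split x into n additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

def reconstruct(shares):
    return sum(shares) % P

genotypes = [0, 1, 2, 1, 0, 2, 1]        # minor-allele counts per individual
shared = [share(g) for g in genotypes]
# Each party sums its own column of shares locally; no genotype is revealed.
party_sums = [sum(col) % P for col in zip(*shared)]
total = reconstruct(party_sums)
print("minor allele frequency:", total / (2 * len(genotypes)))
```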

  12. Activities and operations of the Advanced Computing Research Facility, July-October 1986

    SciTech Connect

    Pieper, G.W.

    1986-01-01

    Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy, and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers on computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

  13. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    SciTech Connect

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility, which stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations, and waste generator flows are monitored in this system as well as in the Phase Two Effluent Collection System.

  14. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  15. Computer control and data acquisition system for the R. F. Test Facility

    SciTech Connect

    Stewart, K.A.; Burris, R.D.; Mankin, J.B.; Thompson, D.H.

    1986-01-01

    The Radio Frequency Test Facility (RFTF) at Oak Ridge National Laboratory, used to test and evaluate high-power ion cyclotron resonance heating (ICRH) systems and components, is monitored and controlled by a multicomponent computer system. This data acquisition and control system consists of three major hardware elements: (1) an Allen-Bradley PLC-3 programmable controller; (2) a VAX 11/780 computer; and (3) a CAMAC serial highway interface. Operating in LOCAL as well as REMOTE mode, the programmable logic controller (PLC) performs all the control functions of the test facility. The VAX computer acts as the operator's interface to the test facility by providing color mimic panel displays and allowing input via a trackball device. The VAX also provides archiving of trend data acquired by the PLC. Communications between the PLC and the VAX are via the CAMAC serial highway. Details of the hardware, software, and the operation of the system are presented in this paper.

  16. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    NASA Astrophysics Data System (ADS)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to early identify malfunctions and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with this separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  17. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    SciTech Connect

    Hunteman, W.J.; Zack, N.R.; Jaeger, C.D.

    1993-08-01

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues addressed in the integrated design effort for the uranium/lithium, plutonium, and plutonium high explosive/assembly facilities.

  18. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  19. Next generation database relational solutions for ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    Dimitrov, G.; Maeno, T.; Garonne, V.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system sustained the needed computing activities with high efficiency during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and will without a doubt be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions, and notably the planned changes to the PanDA system and the next generation ATLAS DDM system, called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability, in order to be ready for deployment and operation in 2014.

  20. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J.

    1993-10-01

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P × Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
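
    The schedule arithmetic is easy to check numerically; a small sketch of the step counts implied by the LCM/GCD rule above:

```python
from math import gcd

def transpose_steps(P, Q):
    lcm = P * Q // gcd(P, Q)
    return lcm // gcd(P, Q)   # LCM/GCD communication steps, per the abstract

for P, Q in ((4, 4), (4, 6), (3, 5)):
    print(f"P={P}, Q={Q}: GCD={gcd(P, Q)}, transpose steps={transpose_steps(P, Q)}")
# P=3, Q=5 (relatively prime, GCD=1) needs 15 steps: a complete exchange.
```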

  1. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, Thomas D.; LeCompte, Thomas J.; Benjamin, D.

    2014-06-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than is common in HEP, but they also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  2. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  3. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2013-05-29

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  4. LBNL Computational Research & Theory Facility Groundbreaking - Full Press Conference. Feb 1st, 2012

    SciTech Connect

    Yelick, Kathy

    2012-01-01

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  5. LBNL Computational Research and Theory Facility Groundbreaking. February 1st, 2012

    ScienceCinema

    Yelick, Kathy

    2013-05-29

    Energy Secretary Steven Chu, along with Berkeley Lab and UC leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility yesterday. The CRT will be at the forefront of high-performance supercomputing research and be DOE's most efficient facility of its kind. Joining Secretary Chu as speakers were Lab Director Paul Alivisatos, UC President Mark Yudof, Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences, Kathy Yelick, and Berkeley Mayor Tom Bates joined in the shovel ceremony.

  6. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2008-01-01

    levels in CFD based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a Real Gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell and first stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution, i.e., primitive variables such as pressure, velocity components, temperature and turbulence quantities, was passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised the exit diffuser and the region in the vicinity of the facility (primarily included to capture the correct shock structure at the exit of the facility and the entrainment characteristics). The first set of simulations, comprising the engine, test cell and first stage ejectors, was carried out both as a turbulent real gas calculation and as a turbulent perfect gas calculation. A comparison for the two cases (Real Turbulent and Perfect gas turbulent) of the Ma

  7. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2010-01-01

    levels in CFD based flowpath modeling of the facility. The analysis tools used here expand on the multi-element unstructured CFD which has been tailored and validated for impingement dynamics of dry plumes, complex valve/feed systems, and high pressure propellant delivery systems used in engine and component test stands at NASA SSC. The analyses performed in the evaluation of the sub-scale diffuser facility explored several important factors that influence modeling and understanding of facility operation, such as (a) the importance of modeling the facility with a Real Gas approximation, (b) approximating the cluster of steam ejector nozzles as a single annular nozzle, (c) the existence of mixed subsonic/supersonic flow downstream of the turning duct, and (d) the inadequacy of two-equation turbulence models in predicting the correct pressurization in the turning duct and expansion of the second stage steam ejectors. The procedure used for modeling the facility was as follows: (i) the engine, test cell and first stage ejectors were simulated with an axisymmetric approximation; (ii) the turning duct, second stage ejectors and the piping downstream of the second stage ejectors were analyzed with a three-dimensional simulation utilizing a half-plane symmetry approximation. The solution, i.e., primitive variables such as pressure, velocity components, temperature and turbulence quantities, was passed from the first computational domain and specified as a supersonic boundary condition for the second simulation. (iii) The third domain comprised the exit diffuser and the region in the vicinity of the facility (primarily included to capture the correct shock structure at the exit of the facility and the entrainment characteristics). The first set of simulations, comprising the engine, test cell and first stage ejectors, was carried out both as a turbulent real gas calculation and as a turbulent perfect gas calculation. A comparison for the two cases (Real Turbulent and Perfect gas turbulent) of the Ma

  8. In-Memory Computing Architectures for Sparse Distributed Memory.

    PubMed

    Kang, Mingu; Shanbhag, Naresh R

    2016-08-01

    This paper presents an energy-efficient and high-throughput architecture for Sparse Distributed Memory (SDM), a computational model of the human brain [1]. The proposed SDM architecture is based on the recently proposed in-memory computing kernel for machine learning applications called Compute Memory (CM) [2], [3]. CM achieves energy and throughput efficiencies by deeply embedding computation into the memory array. SDM-specific techniques such as hierarchical binary decision (HBD) are employed to reduce the delay and energy further. The CM-based SDM (CM-SDM) is a mixed-signal circuit, and hence circuit-aware behavioral, energy, and delay models in a 65 nm CMOS process are developed in order to predict system performance of SDM architectures in the auto- and hetero-associative modes. The delay and energy models indicate that CM-SDM, in general, can achieve up to 25 × and 12 × delay and energy reduction, respectively, over conventional SDM. When classifying 16 × 16 binary images with high noise levels (input bad pixel ratios: 15%-25%) into nine classes, all SDM architectures are able to generate output bad pixel ratios (Bo) ≤ 2%. The CM-SDM exhibits negligible loss in accuracy, i.e., its Bo degradation is within 0.4% as compared to that of the conventional SDM. PMID:27305686
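
    For readers unfamiliar with the underlying model, a minimal Kanerva-style SDM is easy to sketch. The Python/NumPy fragment below shows only the auto-associative write/read cycle; the paper's circuit-level techniques (CM, HBD) are not modeled, and all sizes are illustrative:

    ```python
    import numpy as np

    class SDM:
        """Minimal binary Sparse Distributed Memory (Kanerva-style) sketch."""
        def __init__(self, n_locations=1000, dim=256, radius=111, seed=0):
            rng = np.random.default_rng(seed)
            self.hard = rng.integers(0, 2, size=(n_locations, dim))  # hard locations
            self.counters = np.zeros((n_locations, dim), dtype=np.int32)
            self.radius = radius

        def _active(self, addr):
            # Activate hard locations within Hamming radius of the address.
            return np.sum(self.hard != addr, axis=1) <= self.radius

        def write(self, addr, data):
            self.counters[self._active(addr)] += np.where(data == 1, 1, -1)

        def read(self, addr):
            # Majority vote over the counters of the activated locations.
            return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)

    mem = SDM()
    rng = np.random.default_rng(1)
    pattern = rng.integers(0, 2, 256)
    mem.write(pattern, pattern)                 # auto-associative store
    noisy = pattern.copy(); noisy[:20] ^= 1     # corrupt 20 bits
    print(np.mean(mem.read(noisy) == pattern))  # typically close to 1.0
    ```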

  9. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  10. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In the case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to others can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
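
    As a toy illustration of such a load-dependent exchange policy (the classes, attributes and thresholds below are our assumptions, not the paper's policies):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        nodes: int      # currently configured compute nodes
        queued: int     # queued work, in node-hours

        def utilization(self):
            return self.queued / max(self.nodes, 1)

    def reconfigure(sites, high=4.0, low=0.5, chunk=8):
        """Lease `chunk` nodes from the least- to the most-loaded site
        whenever their utilizations cross the thresholds."""
        busy = max(sites, key=Site.utilization)
        idle = min(sites, key=Site.utilization)
        if busy.utilization() > high and idle.utilization() < low and idle.nodes > chunk:
            idle.nodes -= chunk     # the granting site shrinks...
            busy.nodes += chunk     # ...and the leasing site grows
            return f"{idle.name} leases {chunk} nodes to {busy.name}"
        return "no reconfiguration"

    sites = [Site("A", 64, 512), Site("B", 128, 16)]
    print(reconfigure(sites))       # -> B leases 8 nodes to A
    ```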

  11. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that, because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) it is object-oriented; (2) it uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) it contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) for remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through
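
    A hedged sketch of what such a callback-based client-server exchange can look like (all names are invented for illustration; the actual protocol differs):

    ```python
    import queue

    class DebugServer:
        """Stands in for a vendor back-end implementing the common protocol."""
        def __init__(self):
            self.events = queue.Queue()
        def set_breakpoint(self, location):
            pass                       # would instrument the target here
        def resume(self):
            # Pretend the target runs and immediately hits the breakpoint.
            self.events.put(("breakpoint_hit", "solver.f:42"))

    class DebugClient:
        """Front-end that registers callbacks for asynchronous events."""
        def __init__(self, server):
            self.server = server
            self.callbacks = {}
        def on(self, event, fn):
            self.callbacks[event] = fn
        def pump(self):
            event, payload = self.server.events.get()
            self.callbacks[event](payload)   # deliver event procedurally

    srv = DebugServer()
    cli = DebugClient(srv)
    cli.on("breakpoint_hit", lambda loc: print("stopped at", loc))
    srv.set_breakpoint("solver.f:42")
    srv.resume()
    cli.pump()                               # prints: stopped at solver.f:42
    ```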

  12. A scalable parallel graph coloring algorithm for distributed memory computers.

    SciTech Connect

    Bozdag, Doruk; Manne, Fredrik; Gebremedhin, Assefaw H.; Catalyurek, Umit; Boman, Erik Gunnar

    2005-02-01

    In large-scale parallel applications a graph coloring is often carried out to schedule computational tasks. In this paper, we describe a new distributed memory algorithm for doing the coloring itself in parallel. The algorithm operates in an iterative fashion; in each round vertices are speculatively colored based on limited information, and then a set of incorrectly colored vertices, to be recolored in the next round, is identified. Parallel speedup is achieved in part by reducing the frequency of communication among processors. Experimental results on a PC cluster using up to 16 processors show that the algorithm is scalable.
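
    The round structure is easy to reproduce serially; below, a stale snapshot stands in for the limited information each processor sees (a minimal sketch, not the distributed implementation):

    ```python
    from itertools import count

    def speculative_coloring(adj):
        """Iterative speculative coloring; `adj` maps each vertex to its
        neighbor set in an undirected graph."""
        color = {v: None for v in adj}
        pending = set(adj)
        while pending:
            snapshot = dict(color)   # stale view: mimics limited information
            for v in pending:
                seen = {snapshot[u] for u in adj[v] if snapshot[u] is not None}
                color[v] = next(c for c in count() if c not in seen)
            # Conflict: both endpoints of an edge chose the same color this
            # round; by convention the higher-numbered vertex recolors.
            pending = {v for v in pending
                       for u in adj[v] if color[u] == color[v] and v > u}
        return color

    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {2}}
    print(speculative_coloring(adj))   # a proper coloring of the graph
    ```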

  13. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.

  14. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used as the lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and
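
    The parent/child layering lends itself to a short sketch with POSIX primitives. The toy below (host, port, and behavior are assumptions; real KNET also handled login negotiation and a common protocol) forwards keyboard input to the remote host while a child routes remote output to the screen and, optionally, a file:

    ```python
    import os
    import socket
    import sys

    def knet_like(host, port, logfile=None):
        """Toy parent/child split reminiscent of KNET's two layers."""
        sock = socket.create_connection((host, port))
        if os.fork() == 0:                    # child: remote -> local outputs
            log = open(logfile, "ab") if logfile else None
            while data := sock.recv(4096):
                sys.stdout.buffer.write(data)   # data pipe switch: screen...
                sys.stdout.flush()
                if log:
                    log.write(data)             # ...and/or a local file
            os._exit(0)
        for line in sys.stdin.buffer:         # parent: keyboard -> remote
            sock.sendall(line)
        sock.shutdown(socket.SHUT_WR)
        os.wait()
    ```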

  15. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  16. Fault Diagnosis in a Fully Distributed Local Computer Network.

    NASA Astrophysics Data System (ADS)

    Kwag, Hye Keun

    Local computer networks are being installed in diverse application areas. Many of the networks employ a distributed control scheme, which has advantages in performance and reliability over a centralized one. However, distribution of control increases the difficulty in locating faulty hardware elements. Consequently, advantages may not be fully realized unless measures are taken to account for the difficulties of fault diagnosis; yet, not much work has been done in this area. A hardcore is defined as a node or a part of a node which is fault-free and which can diagnose other elements in a system. Faults are diagnosed in most existing distributed local computer networks by assuming that every node, or a part of every node, is a fixed hardcore: a fixed node or a part of a fixed node is always a hardcore. Maintaining such high reliability may not be possible or cost-effective for some systems. A distributed network contains dynamically redundant elements, and it is reasonable to assume that fewer nodes are simultaneously faulty than are fault-free at any point in the life cycle of the network. A diagnostic model is proposed herein which determines binary evaluation results according to the status of the testing and tested nodes, and which leads the network to dynamically locate a fault-free node (a hardcore). This diagnostic model is, in most cases, simpler to implement and more cost-effective than the fixed hardcore. The selected hardcore can diagnose the other elements and can locate permanent faults. In a hop-by-hop test, the destination node and every intermediate node in a path test the transmitted data. This dissertation presents another method to locate an element with frequent transient faults; it checks data only at the destination, thereby eliminating the need for a hop-by-hop test.

  17. A digital computer propulsion control facility: Description of capabilities and summary of experimental program results

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Arpasi, D. J.; Lehtinen, B.

    1976-01-01

    Flight weight digital computers are being used today to carry out many of the propulsion system control functions previously delegated exclusively to hydromechanical controllers. An operational digital computer facility for propulsion control mode studies has been used successfully in several experimental programs. This paper describes the system and some of the results concerned with engine control, inlet control, and inlet engine integrated control. Analytical designs for the digital propulsion control modes include both classical and modern/optimal techniques.

  18. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
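
    The dependency-tracked command DAG at the core of such a runtime can be illustrated in a few lines (names are ours, not the libWater API):

    ```python
    class Command:
        """A device command that waits on the completion of other commands."""
        def __init__(self, name, wait_for=()):
            self.name = name
            self.deps = list(wait_for)

    def topo_order(commands):
        """Kahn's algorithm over the DAG implied by event dependencies."""
        indeg = {c: len(c.deps) for c in commands}
        ready = [c for c in commands if indeg[c] == 0]
        order = []
        while ready:
            c = ready.pop()
            order.append(c)
            for d in commands:
                if c in d.deps:
                    indeg[d] -= 1
                    if indeg[d] == 0:
                        ready.append(d)
        return order

    h2d = Command("copy host->dev")
    krn = Command("kernel", wait_for=[h2d])
    d2h = Command("copy dev->host", wait_for=[krn])
    print([c.name for c in topo_order([h2d, krn, d2h])])
    # ['copy host->dev', 'kernel', 'copy dev->host']
    ```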

  19. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  20. Computer Use and CAD in Assisting Schools in the Creation of Facilities.

    ERIC Educational Resources Information Center

    Beach, Robert H.; Essex, Nathan

    1987-01-01

    Computer-aided design (CAD) programs are powerful drafting tools, but are also able to assist with many other facility planning functions. Describes the hardware, software, and the learning process that led to understanding the CAD software at the University of Alabama. (MLF)

  1. 300 Area Treated Effluent Disposal Facility computer software release cover sheet and revision record

    SciTech Connect

    McCarthy, R.J.

    1994-11-28

    This supporting document contains the computer software release cover sheet and revision records for the 300 Area Treated Effluent Disposal Facility (TEDF). The previous revision was controlled by CH2M Hill, which developed the software. A 7-page listing of the contents of directory C:\TEDF is contained in this report.

  2. GAiN: Distributed Array Computation with Python

    SciTech Connect

    Daily, Jeffrey A.

    2009-05-01

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
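
    The underlying idea, an N-dimensional array partitioned across nodes behind a numpy-like interface, can be illustrated with mpi4py (this is the concept only, not the GAiN API):

    ```python
    # Run with e.g.: mpiexec -n 4 python distarray.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    N = 1_000_000                                  # global array length
    counts = [N // comm.size + (r < N % comm.size) for r in range(comm.size)]
    offset = sum(counts[:comm.rank])

    # Each rank owns one contiguous block of the "global" array.
    local = np.arange(offset, offset + counts[comm.rank], dtype=np.float64)

    # A global reduction over the distributed array looks like one call.
    total = comm.allreduce(local.sum(), op=MPI.SUM)
    if comm.rank == 0:
        print(total)                               # sum over all blocks
    ```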

  3. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e., the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation. PMID:20426075
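
    For orientation, the square-root construction commonly used in such Riemannian ODF frameworks can be written compactly (a sketch of the standard formulas; details in the paper may differ):

    ```latex
    % ODFs are represented by their square roots, which lie on the unit
    % sphere of L^2(S^2):
    \psi(\mathbf{u}) = \sqrt{p(\mathbf{u})}, \qquad
      \int_{\mathbb{S}^2} \psi^2(\mathbf{u})\,d\mathbf{u} = 1 .
    % There the geodesic distance has a closed form,
    d(\psi_1, \psi_2) = \arccos \int_{\mathbb{S}^2}
      \psi_1(\mathbf{u})\,\psi_2(\mathbf{u})\,d\mathbf{u} ,
    % and a Geometric-Anisotropy-style index is the distance to the
    % isotropic ODF $\psi_{\mathrm{iso}} = 1/\sqrt{4\pi}$:
    \mathrm{GA}(\psi) = d(\psi, \psi_{\mathrm{iso}}) .
    ```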

  4. Facility management of computer-aided design, drafting/manufacturing systems (CADD/M)

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Interactive Computer-Aided Design Drafting/Manufacturing systems have been installed in thousands of companies, applying CADD/M capabilities to many applications. This has been done with varying degrees of success, even among companies with identical applications. Investigation of individual companies reveals a gap between the capabilities of CADD/M systems and the actual usage by industry of those capabilities. This company usage often determines the degree of success or failure of an interactive graphics facility and is largely controlled by management. The responsibilities of the interactive graphics facility management team are explained in detail. Proper management of a CADD/M facility is more critical to the success or failure of the facility than any other factor.

  5. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing that elicits the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system based on the exhaustive absence of the super-system may produce something more than filling the vacancy. PMID:14563567

  6. 41 CFR 101-26.503 - Multiple award schedule purchases made by GSA supply distribution facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Multiple award schedule purchases made by GSA supply distribution facilities. 101-26.503 Section 101-26.503 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT...

  7. Evaluation of Secure Computation in a Distributed Healthcare Setting.

    PubMed

    Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki

    2016-01-01

    Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use. PMID:27577361
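
    A standard building block for this kind of secure computation is secret sharing; the additive scheme below is illustrative only (the paper's protocol is more elaborate):

    ```python
    import secrets

    P = 2**61 - 1          # public prime modulus (an assumption)

    def share(x, n=3):
        """Split x into n shares that sum to x modulo P; any n-1 shares
        reveal nothing about x."""
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    # Sums can be computed share-wise, without revealing the inputs.
    a, b = 1234, 5678
    sa, sb = share(a), share(b)
    sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
    assert reconstruct(sum_shares) == (a + b) % P
    ```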

  8. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing on that machine.

  9. CORBA-Based Distributed Software Framework for the NIF Integrated Computer Control System

    SciTech Connect

    Stout, E A; Carey, R W; Estes, C M; Fisher, J M; Lagin, L J; Mathisen, D G; Reynolds, C A; Sanchez, R J

    2007-11-20

    The National Ignition Facility (NIF), currently under construction at the Lawrence Livermore National Laboratory, is a stadium-sized facility containing a 192-beam, 1.8 Megajoule, 500-Terawatt, ultra-violet laser system together with a 10-meter diameter target chamber with room for nearly 100 experimental diagnostics. The NIF is operated by the Integrated Computer Control System (ICCS) which is a scalable, framework-based control system distributed over 800 computers throughout the NIF. The framework provides templates and services at multiple levels of abstraction for the construction of software applications that communicate via CORBA (Common Object Request Broker Architecture). Object-oriented software design patterns are implemented as templates and extended by application software. Developers extend the framework base classes to model the numerous physical control points and implement specializations of common application behaviors. An estimated 140 thousand software objects, each individually addressable through CORBA, will be active at full scale. Many of these objects have persistent configuration information stored in a database. The configuration data is used to initialize the objects at system start-up. Centralized server programs that implement events, alerts, reservations, data archival, name service, data access, and process management provide common system wide services. At the highest level, a model-driven, distributed shot automation system provides a flexible and scalable framework for automatic sequencing of work-flow for control and monitoring of NIF shots. The shot model, in conjunction with data defining the parameters and goals of an experiment, describes the steps to be performed by each subsystem in order to prepare for and fire a NIF shot. Status and usage of this distributed framework are described.

  10. A Fruitful Collaboration between ESO and the Max Planck Computing and Data Facility

    NASA Astrophysics Data System (ADS)

    Fourniol, N.; Zampieri, S.; Panea, M.

    2016-06-01

    The ESO Science Archive Facility (SAF) contains all La Silla Paranal Observatory raw data as well as, more recently introduced, processed data created at ESO with state-of-the-art pipelines or returned by the astronomical community. The SAF has been established for over 20 years and its current holding exceeds 700 terabytes. An overview of the content of the SAF and the preservation of that content is provided. The latest development to ensure the preservation of the SAF data, the provision of an independent backup copy of the whole SAF at the Max Planck Computing and Data Facility in Garching, is described.

  11. Toward unification of taxonomy databases in a distributed computer environment

    SciTech Connect

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful for comparing many research results and for investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  12. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed consisting of existing software that coupled structural analysis with optimization which would execute on a network of computer workstations. To increase turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients to allow several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the amount of time to complete one optimization cycle from two hours to one-half hour with a potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour turnaround per optimization cycle. This would take four hours for the sequential system.
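
    The parallelism exploited here comes from the fact that finite-difference gradient components are independent. A minimal sketch (a thread pool stands in for the workstation network; `analysis` is a placeholder objective, not the actual structural code):

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def analysis(x):                    # placeholder for the structural analysis
        return float(np.sum(x ** 2))

    def fd_gradient(f, x, h=1e-6, workers=4):
        """Forward-difference gradient; each component is an independent
        analysis run, so components can execute concurrently."""
        f0 = f(x)
        def component(i):
            xp = x.copy()
            xp[i] += h
            return (f(xp) - f0) / h
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return np.array(list(pool.map(component, range(x.size))))

    print(fd_gradient(analysis, np.array([1.0, 2.0, 3.0])))   # ~ [2, 4, 6]
    ```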

  13. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy, nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to low-burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
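
    A toy version of the receiver-initiated rule (thresholds, classes and method names are illustrative assumptions):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        queue: list = field(default_factory=list)
        service_rate_ratio: float = 1.0

        def indicator(self):
            # Workload indicator: queue length x service-rate ratio.
            return len(self.queue) * self.service_rate_ratio

    def pull_job(node, peers, low=1, high=4):
        """Called when `node` finishes a job while lightly loaded, or when
        its wakeup timer fires while idle."""
        if node.indicator() >= low:
            return None                          # not lightly loaded
        donors = [p for p in peers if p.indicator() > high]
        if not donors:
            return None
        donor = max(donors, key=Node.indicator)
        job = donor.queue.pop(0)                 # receiver-initiated pull
        node.queue.append(job)
        return job

    a, b = Node("idle"), Node("busy", queue=["j1", "j2", "j3", "j4", "j5"])
    print(pull_job(a, [b]))                      # 'j1' moves to the idle node
    ```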

  14. POET (parallel object-oriented environment and toolkit) and frameworks for scientific distributed computing

    SciTech Connect

    Armstrong, R.; Cheung, A.

    1997-01-01

    Frameworks for parallel computing have recently become popular as a means for preserving parallel algorithms as reusable components. Frameworks for parallel computing in general, and POET in particular, focus on finding ways to orchestrate and facilitate cooperation between components that implement the parallel algorithms. Since performance is a key requirement for POET applications, CORBA or CORBA-like systems are eschewed for a SPMD message-passing architecture common to the world of distributed-parallel computing. Though the system is written in C++ for portability, the behavior of POET is more like a classical framework, such as Smalltalk. POET seeks to be a general platform for scientific parallel algorithm components which can be modified, linked, mixed and matched to a user's specification. The purpose of this work is to identify a means for parallel code reuse and to make parallel computing more accessible to scientists whose expertise is outside the field of parallel computing. The POET framework provides two things: (1) an object model for parallel components that allows cooperation without being restrictive; (2) services that allow components to access and manage user data and message-passing facilities, etc. This work has evolved through application of a series of real distributed-parallel scientific problems. The paper focuses on what is required for parallel components to cooperate and at the same time remain "black-boxes" that users can drop into the frame without having to know the exquisite details of message-passing, data layout, etc. The paper walks through a specific example of a chemically reacting flow application. The example is implemented in POET and the authors identify component cooperation, usability and reusability in an anecdotal fashion.

  15. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  16. COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS

    NASA Technical Reports Server (NTRS)

    Farrukh, U. O.

    1994-01-01

    Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.
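
    The problem the program solves is, in sketch form, transient heat conduction in a finite cylinder with both radial and axial terms (the symbols are the usual ones and are our notation, not the program's):

    ```latex
    % T temperature, \alpha thermal diffusivity, q volumetric pump-heating
    % rate, \rho density, c_p specific heat:
    \frac{\partial T}{\partial t}
      = \alpha \left(
          \frac{1}{r}\frac{\partial}{\partial r}
            \!\left( r\,\frac{\partial T}{\partial r} \right)
          + \frac{\partial^2 T}{\partial z^2}
        \right)
      + \frac{q(r, z, t)}{\rho\, c_p},
    % with surface-dependent convective cooling, e.g. at the barrel r = R:
    -k\,\frac{\partial T}{\partial r}\Big|_{r=R} = h\,\big(T - T_{c}\big).
    ```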

  17. Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)

    SciTech Connect

    Neubauer, J.; Lundstrom, B.; Simpson, M.; Pratt, A.

    2014-06-01

    The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.

  18. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  19. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. We propose that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
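
    Reading the definition literally, the metric can be written as (our transcription of the abstract's definition):

    ```latex
    \text{task ratio} \;=\;
      \frac{\text{parallel task demand}}
           {\text{mean service demand of non-parallel workstation processes}}
    ```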

  20. Computational investigations of low-emission burner facilities for char gas burning in a power boiler

    NASA Astrophysics Data System (ADS)

    Roslyakov, P. V.; Morozov, I. V.; Zaychenko, M. N.; Sidorkin, V. T.

    2016-04-01

    Various structural variants of low-emission burner facilities intended for char gas burning in an operating TP-101 boiler of the Estonia power plant are considered. The planned increase in the volume of shale reprocessing and, correspondingly, in char gas volumes makes their co-combustion necessary. This created the need to develop a burner facility of a given capacity that burns char gas effectively while meeting reliability and environmental requirements. To this end, the burner design was based on staged fuel combustion with gas recirculation. As a result of a preliminary analysis of possible design variants, three types of proven burner facilities were chosen: a vortex burner with the supply of recirculation gases into the secondary air, a vortex burner with a baffle supply of recirculation gases between the flows of primary and secondary air, and a burner facility with a vortex pilot burner. Optimum structural characteristics and operating parameters were determined using numerical experiments. These experiments, carried out with the ANSYS CFX computational fluid dynamics software, simulated the mixing, ignition, and burning of char gas. The numerical experiments determined, for every type of burner facility, the structural and operating parameters that give effective char gas burning and meet the required environmental standard on nitrogen oxide emissions. The burner facility for char gas burning with a pilot diffusion burner in the central part was developed and built according to the computation results. Preliminary full-scale verification tests on the TP-101 boiler showed that the actual content of nitrogen oxides in the char gas burner flames did not exceed the specified concentration of 150 ppm (200 mg/m3).

  1. Maintenance of reactor safety and control computers at a large government facility

    SciTech Connect

    Brady, H G

    1985-01-01

    In 1950 the US Government contracted the Du Pont Company to design, build, and operate the Savannah River Plant (SRP). At the time, it was the largest construction project ever undertaken by man. It is still the largest of the Department of Energy facilities. In the nearly 35 years that have elapsed, Du Pont has met its commitments to the US Government and set world safety records in the construction and operation of nuclear facilities. Contributing factors in achieving production goals and setting the safety records are a staff of highly qualified personnel, a well maintained plant, and sound maintenance programs. There have been many "first ever" achievements at SRP. These "firsts" include: (1) computer control of a nuclear reactor, and (2) use of computer systems as safety circuits. This presentation discusses the maintenance program provided for these computer systems and all digital systems at SRP. An in-house computer maintenance program that was started in 1966 with five persons has grown to a staff of 40, with investments in computer hardware increasing from $4 million in 1970 to more than $60 million in this decade. 4 figs.

  2. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis, and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  3. National Ignition Facility computational fluid dynamics modeling and light fixture case studies

    SciTech Connect

    Martin, R.; Bernardin, J.; Parietti, L.; Dennison, B.

    1998-02-01

    This report serves as a guide to the use of computational fluid dynamics (CFD) as a design tool for the National Ignition Facility (NIF) program Title I and Title II design phases at Lawrence Livermore National Laboratory. In particular, this report provides general guidelines on the technical approach to performing and interpreting any and all CFD calculations. In addition, a complete CFD analysis is presented to illustrate these guidelines on a NIF-related thermal problem.

  4. 120. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    120. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "foundation & first floor plan" - structural, AS-BLT AW 35-46-04, sheet 65, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  5. 119. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    119. Back side technical facilities S.R. radar transmitter & computer building no. 102, section I "tower plan, sections & details" - structural, AS-BLT AW 35-46-04, sheet 62, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  6. 118. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    118. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 13, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  7. 122. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    122. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "elevations & details" - structural, AS-BLT AW 35-46-04, sheet 73, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  8. 117. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. Back side technical facilities S.R. radar transmitter & computer building no. 102, "building sections - sheet I" - architectural, AS-BLT AW 35-46-04, sheet 12, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  9. 121. Back side technical facilities S.R. radar transmitter & computer ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    121. Back side technical facilities S.R. radar transmitter & computer building no. 102, section II "sections & elevations" - structural, AS-BLT AW 35-46-04, sheet 72, dated 23 January, 1961. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  10. PHENIX On-Line Distributed Computing System Architecture

    SciTech Connect

    Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas

    1997-05-22

    PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ("granules") that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the Level-1 trigger decision. Zero suppression and calibration are done after the Level-1 accept in custom-built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. First, it receives the data from the event builder, routes it through a network of workstations to consumer processes, and archives it at a data rate of 20 MB/sec. Second, it is responsible for the overall configuration, control and operation of the detector and data acquisition chain, which comprises the software integration of several thousand custom-built hardware modules. The software must furthermore support the independent operation of the above-mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We give an overview of the PHENIX online system with the main focus on the system architecture, software components and integration tasks of the On-line Computing group ONCS and report on the status of the current prototypes.
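
    To make the "asynchronous finite state machine" design concrete, here is a minimal Python sketch (state names, events, and the asyncio framing are our illustration, not PHENIX code) of two granule controllers consuming events independently:

      import asyncio

      class GranuleFSM:
          # (state, event) -> next state; a tiny stand-in for a run-control FSM
          TRANSITIONS = {
              ("idle", "configure"): "configured",
              ("configured", "start_run"): "running",
              ("running", "stop_run"): "configured",
              ("configured", "reset"): "idle",
          }

          def __init__(self, name):
              self.name, self.state = name, "idle"
              self.events = asyncio.Queue()   # events arrive asynchronously

          async def run(self):
              while True:
                  event = await self.events.get()
                  nxt = self.TRANSITIONS.get((self.state, event))
                  if nxt:
                      print(f"{self.name}: {self.state} + {event} -> {nxt}")
                      self.state = nxt
                  else:
                      print(f"{self.name}: ignored {event} in {self.state}")

      async def main():
          granules = [GranuleFSM("granule-A"), GranuleFSM("granule-B")]
          tasks = [asyncio.create_task(g.run()) for g in granules]
          for g in granules:                  # drive the two FSMs independently
              await g.events.put("configure")
              await g.events.put("start_run")
          await asyncio.sleep(0.1)            # let the event queues drain
          for t in tasks:
              t.cancel()

      asyncio.run(main())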

  11. Maintaining Traceability in an Evolving Distributed Computing Environment

    NASA Astrophysics Data System (ADS)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. The response to an incident must be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes), and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
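
    A minimal sketch of the per-event record such traceability calls for, retaining who, what, where, and when (field names are illustrative, not drawn from any WLCG/EGI/OSG schema):

      import json
      import time

      SECURITY_EVENTS = {"connect", "authenticate", "authorize", "disconnect"}

      def log_event(service_instance, event, identity, source, detail=""):
          if event not in SECURITY_EVENTS:
              raise ValueError(f"unknown security event: {event}")
          record = {
              "when": time.time(),          # timestamp
              "where": service_instance,    # service instance
              "what": event,                # event type
              "who": identity,              # digital identity (e.g. cert DN)
              "source": source,             # originating host/IP
              "detail": detail,             # e.g. identity changes on authorize
          }
          with open("traceability.log", "a") as f:
              f.write(json.dumps(record) + "\n")

      log_event("storage-01", "authenticate", "/DC=org/CN=Some User", "192.0.2.7")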

  12. Health workers’ knowledge of and attitudes towards computer applications in rural African health facilities

    PubMed Central

    Sukums, Felix; Mensah, Nathan; Mpembeni, Rose; Kaltschmidt, Jens; Haefeli, Walter E.; Blank, Antje

    2014-01-01

    Background: The QUALMAT (Quality of Maternal and Prenatal Care: Bridging the Know-do Gap) project has introduced an electronic clinical decision support system (CDSS) for pre-natal and maternal care services in rural primary health facilities in Burkina Faso, Ghana, and Tanzania. Objective: To report an assessment of health providers' computer knowledge, experience, and attitudes prior to the implementation of the QUALMAT electronic CDSS. Design: A cross-sectional study was conducted with providers in 24 QUALMAT project sites. Information was collected using structured questionnaires. Chi-squared tests and one-way ANOVA were used to describe the associations between computer knowledge, attitudes, and other factors. Semi-structured interviews and focus groups were conducted to gain further insights. Results: A total of 108 providers responded; 63% were from Tanzania and 37% from Ghana. The mean age was 37.6 years, and 79% were female. Only 40% had ever used computers, and 29% had prior computer training. About 80% were computer illiterate or beginners. Educational level, age, and years of work experience were significantly associated with computer knowledge (p<0.01). Most (95.3%) had positive attitudes towards computers, with an average score (±SD) of 37.2 (±4.9). Females had significantly lower scores than males. Interviews and group discussions showed that although most were lacking computer knowledge and experience, they were optimistic about overcoming the challenges associated with the introduction of computers in their workplace. Conclusions: Given the low levels of computer knowledge among rural health workers in Africa, it is important to provide adequate training and support to ensure the successful uptake of electronic CDSSs in these settings. The positive attitudes to computers found in this study underscore that rural care providers, too, are ready to use such technology. PMID:25361721

  13. Using mobile distributed pyrolysis facilities to deliver a forest residue resource for bio-fuel production

    NASA Astrophysics Data System (ADS)

    Brown, Duncan

    Distributed mobile conversion facilities using either fast pyrolysis or torrefaction processes can be used to convert forest residues to more energy dense substances (bio-oil, bio-slurry or torrefied wood) that can be transported as feedstock for bio-fuel facilities. All three feedstocks are suited to gasification, which produces syngas that can be used to synthesise petrol or diesel via Fischer-Tropsch reactions, or to produce hydrogen via water-gas shift reactions. Alternatively, the bio-oil product of fast pyrolysis may be upgraded to produce petrol and diesel, or can undergo steam reformation to produce hydrogen. Implementing a network of mobile facilities reduces the energy content of forest residues delivered to a bio-fuel facility, because mobile facilities consume a fraction of the biomass energy content to meet thermal and electrical demands. The total energy delivered by bio-oil, bio-slurry and torrefied wood is 45%, 65% and 87% of the initial forest residue energy content, respectively. However, implementing mobile facilities is economically feasible when large transport distances are required. For an annual harvest of 1.717 million m3 (equivalent to 2000 ODTPD), transport costs are reduced to less than 40% of the total levelised delivered feedstock cost when mobile facilities are implemented; transport costs account for up to 80% of feedstock costs for conventional woodchip delivery. Torrefaction provides the lowest cost pathway for delivering a forest residue resource when using mobile facilities. Cost savings occur against woodchip delivery for annual forest residue harvests above 2.25 million m3 or when transport distances greater than 250 km are required. Important parameters that influence levelised delivered costs of feedstock are transport distances (forest residue spatial density), haul cost factors, thermal and electrical demands of mobile facilities, and initial moisture content of forest residues. Relocating mobile facilities can be optimised for lowest cost
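
    A back-of-envelope sketch of the trade-off described above, using only the delivered-energy fractions quoted in the abstract; all cost coefficients and relative densities below are invented placeholders, not values from the study:

      # Fraction of the initial residue energy that reaches the bio-fuel plant.
      energy_fraction = {"bio-oil": 0.45, "bio-slurry": 0.65,
                         "torrefied wood": 0.87, "woodchips": 1.00}
      # Hypothetical energy densities relative to woodchips: denser products
      # need fewer truckloads per delivered GJ, so transport is cheaper.
      relative_density = {"bio-oil": 3.0, "bio-slurry": 2.5,
                          "torrefied wood": 1.9, "woodchips": 1.0}

      def delivered_cost(product, distance_km, haul_cost=0.10, base_cost=30.0):
          """Levelised cost per unit of delivered energy (arbitrary units)."""
          transport = haul_cost * distance_km / relative_density[product]
          return (base_cost + transport) / energy_fraction[product]

      for d in (100, 250, 500):
          best = min(relative_density, key=lambda p: delivered_cost(p, d))
          print(d, "km ->", best)

    With these placeholder numbers the crossover behaves as the abstract describes: woodchip delivery wins at short hauls, and torrefied wood wins once distances exceed roughly 250 km.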

  14. An environmental testing facility for Space Station Freedom power management and distribution hardware

    NASA Technical Reports Server (NTRS)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include random vibration in three axes; thermal vacuum, thermal cycling, and thermal burn-in; as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.

  15. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
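
    A sketch of what such a tagged data item might look like; all field names are our assumptions for illustration, since the abstract defines no concrete schema:

      from dataclasses import dataclass, field

      @dataclass
      class OrbitalDataItem:
          item_id: str                 # identifier unique across craft
          owner: str                   # consumer craft that owns the data
          stored_on: str               # provider craft holding the bytes
          retention_until: float       # epoch seconds; discard allowed after
          storer_may_access: bool = False
          storer_may_resell: bool = False
          storer_may_transmit: bool = True
          checksum: str = ""           # integrity check
          payload: bytes = field(default=b"", repr=False)

      item = OrbitalDataItem("img-0042", owner="consumer-1",
                             stored_on="provider-7", retention_until=1.9e9)
      print(item)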

  16. Automatic distribution of vision-tasks on computing clusters

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Tran, Binh An; Knoll, Alois

    2011-01-01

    In this paper a consistent, efficient, and convenient system for parallel computer vision, and also real-time actuator control, is proposed. The system implements the multi-agent paradigm and a blackboard information storage. This, in combination with a generic interface for hardware abstraction and integration of external software components, is set up on the basis of the message passing interface (MPI). The system allows for data- and task-parallel processing, and supports both synchronous communication, where data exchange is triggered by events, and asynchronous communication, where data is polled. Also, by duplication of processing units (agents), redundant processing is possible to achieve greater robustness. As the system automatically distributes the task units to available resources, and a monitoring concept allows for the combination of tasks and their composition into complex processes, efficient parallel vision and robotics applications can be developed quickly. Multiple vision-based applications have already been implemented, including academic and research-related fields and prototypes for industrial automation. The system has recently been launched open-source for the scientific community.
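
    The task-distribution core of such a system can be sketched as a master/worker loop, here using mpi4py as a stand-in for the authors' MPI layer, with the blackboard reduced to rank 0 handing out task units on demand:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_TASK, TAG_DONE = 1, 2

      if rank == 0:
          tasks = [f"frame-{i}" for i in range(16)]     # pending task units
          status, active = MPI.Status(), 0
          for w in range(1, size):                      # seed every worker
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=TAG_TASK)
                  active += 1
              else:
                  comm.send(None, dest=w, tag=TAG_TASK) # nothing to do
          while active > 0:
              result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE,
                                 status=status)
              print("blackboard received:", result)
              if tasks:                                 # hand out the next unit
                  comm.send(tasks.pop(), dest=status.Get_source(), tag=TAG_TASK)
              else:                                     # signal shutdown
                  comm.send(None, dest=status.Get_source(), tag=TAG_TASK)
                  active -= 1
      else:
          while (task := comm.recv(source=0, tag=TAG_TASK)) is not None:
              comm.send(f"{task} processed by rank {rank}", dest=0, tag=TAG_DONE)

    Saved as, say, vision_tasks.py, the sketch runs with mpiexec -n 4 python vision_tasks.py; workers requesting their next unit corresponds to the polled, asynchronous side of the communication model described above.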

  17. Computational spectroscopy using the Quantum ESPRESSO distribution (Invited)

    NASA Astrophysics Data System (ADS)

    Baroni, S.; Giannozzi, P.

    2009-12-01

    Quantum ESPRESSO (QE) [1,2] is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials. QE is freely available to researchers around the world under the terms of the GNU General Public License. In this talk I will introduce the QE distribution, with emphasis on some of its features that may appeal to the Earth Sciences and Mineralogy communities. I will focus on the determination of vibrational frequencies to be used for spectroscopic purposes, for the determination of soft modes leading to mechanical instabilities, and as ingredients for the simulation of thermal properties in the (quasi-)harmonic approximation. I will conclude with some recent developments which allow for the simulation of electronic (absorption and photo-emission) spectroscopies, using many-body and time-dependent density-functional perturbation theories. [1] P. Giannozzi et al., J. Phys.: Condens. Matter 21, 395502 (2009); http://dx.doi.org/10.1088/0953-8984/21/39/395502 [2] http://www.quantum-espresso.org

  18. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect

    Johnson, A.J.

    1991-12-01

    Fiber Distributed Data Interface, more commonly known as FDDI, is the name of the standard that describes a new local area network (LAN) technology for the 90's. This technology is based on fiber optic communications and, at a data transmission rate of 100 million bits per second (Mbps), provides a full order of magnitude improvement over previous LAN standards such as Ethernet and Token Ring. FDDI as a standard has been accepted by all major computer manufacturers and is a national standard as defined by the American National Standards Institute (ANSI). FDDI will become part of the US Government Open Systems Interconnection Profile (GOSIP) under Version 3 GOSIP and will become an international standard promoted by the International Standards Organization (ISO). It is important to note that there are no competing standards for high performance LANs, so FDDI acceptance is nearly universal. This technology report describes FDDI as a technology, looks at its applications, examines the current economics of using it, and describes activities and plans by the Information Resource Management (IRM) department to implement this technology at the Savannah River Site.

  19. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^{mn}; B, D ∈ R^{n×n}; A ∈ R^{m×m}; and V ∈ R^{mn×mn}; both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric; all the results in this paper can be easily extended to the case of general A and B. The linear operator on R^{mn} defined above can be viewed as a generalization of the Sylvester operator S(x) = (I_m ⊗ A + B ⊗ I_n)x, and the authors therefore refer to it as a Sylvester-like operator; the schemes discussed in this paper also apply to the Sylvester operator. The authors present a SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
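
    A matrix-free sketch may clarify why such operators parallelize well: M(x) can be applied without ever forming the mn × mn Kronecker matrices, using the identities (D ⊗ A) vec(X) = vec(A X D) for diagonal D and (B ⊗ I_m) vec(X) = vec(X B) for symmetric B, with column-major vec of X ∈ R^{m×n}. The NumPy fragment below uses our own self-consistent conventions; it illustrates the operator, not the paper's SIMD distribution scheme:

      import numpy as np

      def sylvester_like_apply(A, B, d, v, x, m, n):
          """Apply (D kron A + B^T kron I_m + diag(v)) to x without forming it."""
          X = x.reshape(n, m).T                  # inverse of column-major vec
          Y = A @ X @ np.diag(d) + X @ B         # vec(A X D) and vec(X B) terms
          return Y.T.ravel() + v * x             # back to a vector, plus V x

      m, n = 4, 3
      rng = np.random.default_rng(0)
      A = rng.standard_normal((m, m)); A += A.T  # symmetric, as assumed above
      B = rng.standard_normal((n, n)); B += B.T
      d = rng.standard_normal(n)                 # diagonal of D
      v = rng.standard_normal(m * n)             # diagonal of V
      x = rng.standard_normal(m * n)

      # Cross-check against the explicit Kronecker form.
      M = np.kron(np.diag(d), A) + np.kron(B.T, np.eye(m)) + np.diag(v)
      assert np.allclose(M @ x, sylvester_like_apply(A, B, d, v, x, m, n))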

  1. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution]

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
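
    As an aside for readers unfamiliar with the technique, linear delta modulation (the general idea behind the high-speed scheme mentioned above, not the authors' implementation) transmits one bit per sample indicating whether the signal rose or fell relative to a tracking estimate. A minimal sketch with an invented step size and test signal:

      import math

      def delta_modulate(samples, step=0.1):
          estimate, bits = 0.0, []
          for s in samples:
              bit = 1 if s >= estimate else 0     # one bit per sample
              estimate += step if bit else -step  # update tracking estimate
              bits.append(bit)
          return bits

      def delta_demodulate(bits, step=0.1):
          estimate, out = 0.0, []
          for bit in bits:
              estimate += step if bit else -step  # mirror the encoder
              out.append(estimate)
          return out

      signal = [math.sin(2 * math.pi * t / 50) for t in range(200)]
      recovered = delta_demodulate(delta_modulate(signal))
      print("max error:", max(abs(a - b) for a, b in zip(signal, recovered)))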

  2. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry

    NASA Astrophysics Data System (ADS)

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E.; Dickerson, James H.

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated. Electronic supplementary information (ESI) available: synthetic procedures, FTIR analysis, ED pattern, AFM image, and EPD current curve. See DOI: 10.1039/c1nr10025d

  3. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper presents a systematic study of the key technologies needed to construct a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and verified in this environment.

  4. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  5. Computer mapping and visualization of facilities for planning of D and D operations

    SciTech Connect

    Wuller, C.E.; Gelb, G.H.; Cramond, R.; Cracraft, J.S.

    1995-12-31

    The lack of as-built drawings for many old nuclear facilities impedes planning for decontamination and decommissioning. Traditional manual walkdowns subject workers to lengthy exposure to radiological and other hazards. The authors have applied close-range photogrammetry, 3D solid modeling, computer graphics, database management, and virtual reality technologies to create geometrically accurate 3D computer models of the interiors of facilities. The required input to the process is a set of photographs that can be acquired in a brief time. They fit 3D primitive shapes to objects of interest in the photos and, at the same time, record attributes such as material type and link patches of texture from the source photos to facets of modeled objects. When they render the model as either static images or at video rates for a walk-through simulation, the phototextures are warped onto the objects, giving a photo-realistic impression. The authors have exported the data to commercial CAD, cost estimating, robotic simulation, and plant design applications. Results from several projects at old nuclear facilities are discussed.

  6. Evaluation of DEC`s GIGAswitch for distributed parallel computing

    SciTech Connect

    Chen, H.; Hutchins, J.; Brandt, J.

    1993-10-01

    One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.
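
    As a generic illustration of output-port contention (not the authors' model): if N inputs each direct a cell to a uniformly random output in each time slot and every output accepts one cell, the delivered fraction is 1 - (1 - 1/N)^N, which tends to 1 - 1/e, about 63%, the same ballpark as the line efficiency quoted above. A small Monte Carlo check:

      import random

      def contention_throughput(n_ports, slots=20_000):
          delivered = 0
          for _ in range(slots):
              chosen = [random.randrange(n_ports) for _ in range(n_ports)]
              delivered += len(set(chosen))   # one winner per contested output
          return delivered / (slots * n_ports)

      for n in (2, 4, 9, 16):
          print(n, "ports:", round(contention_throughput(n), 3))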

  7. Asteroids@home-A BOINC distributed computing project for asteroid shape reconstruction

    NASA Astrophysics Data System (ADS)

    Ďurech, J.; Hanuš, J.; Vančo, R.

    2015-11-01

    We present the project Asteroids@home that uses distributed computing to solve the time-consuming inverse problem of shape reconstruction of asteroids. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute, collect, and validate small computational units that are solved independently at individual computers of volunteers connected to the project. Shapes, rotational periods, and orientations of the spin axes of asteroids are reconstructed from their disk-integrated photometry by the lightcurve inversion method.
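
    The work-unit decomposition can be illustrated with a toy generator that slices the trial period range into independent units, here uniformly in frequency, a common choice for period searches; the slicing scheme is illustrative, not the project's actual generator:

      def make_workunits(p_min_hours=2.0, p_max_hours=100.0, units=1000):
          """Split [p_min, p_max] into frequency-uniform period slices."""
          f_max, f_min = 1.0 / p_min_hours, 1.0 / p_max_hours
          df = (f_max - f_min) / units
          for i in range(units):
              f_lo, f_hi = f_min + i * df, f_min + (i + 1) * df
              # Each unit scans one slice and is validated independently.
              yield {"unit_id": i, "period_hi": 1.0 / f_lo, "period_lo": 1.0 / f_hi}

      for wu in make_workunits(units=4):
          print(wu)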

  8. Experimental facility for two- and three-dimensional ultrafast electron beam x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Stürzel, T.; Bieberle, M.; Laurien, E.; Hampel, U.; Barthel, F.; Menz, H.-J.; Mayer, H.-G.

    2011-02-01

    An experimental facility is described that has been designed to perform ultrafast two-dimensional (2D) and three-dimensional (3D) electron beam computed tomography. As a novelty, a specially designed transparent target enables tomography with no axial offset for 2D imaging and high axial resolution for 3D imaging employing cone-beam tomography principles. The imaging speed is 10,000 frames per second for planar scanning and more than 1000 frames per second for 3D imaging. The facility serves a broad spectrum of potential applications; primarily the study of multiphase flows, but in principle also nondestructive testing or small animal imaging. In order to demonstrate the aptitude for these applications, static phantom experiments at a frame rate of 2000 frames per second were performed. The resulting spatial resolution was found to be 1.2 mm or better, at reduced temporal resolution.

  9. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  10. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  11. Computer software design description for the Treated Effluent Disposal Facility (TEDF), Project L-045H, Operator Training Station (OTS)

    SciTech Connect

    Carter, R.L. Jr.

    1994-11-07

    The Treated Effluent Disposal Facility (TEDF) Operator Training Station (OTS) is a computer-based training tool designed to aid plant operations and engineering staff in familiarizing themselves with the TEDF Central Control System (CCS).

  12. Impact of Distributed Energy Resources on the Reliability of a Critical Telecommunications Facility

    SciTech Connect

    Robinson, D.; Atcitty, C.; Zuffranieri, J.; Arent, D.

    2006-03-01

    Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure of the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages, as documented by analyses of Federal Communications Commission (FCC) outage reports by the National Reliability Steering Committee (under the auspices of the Alliance for Telecommunications Industry Solutions). Two major issues are having an increasing impact on the sensitivity of power distribution to telecommunications facilities: deregulation of the power industry, and changing weather patterns. A logical approach to improving the robustness of telecommunications facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more on-site electric power source for backup power if batteries and diesel generators fail. But does the diversity in power sources actually increase the reliability of the power offered to the office equipment, or does the complexity of installing and managing the extended power system introduce more potential faults and higher failure rates? This report analyzes a system involving a telecommunications facility consisting of two switch-bays and a satellite reception system.
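
    A toy calculation of the question posed above, with invented numbers: adding an independent backup source shrinks the probability that every source fails, but any extra failure mode introduced by the added complexity can dominate the result:

      def all_fail(p_sources):
          prob = 1.0
          for p in p_sources:
              prob *= p            # independence assumed
          return prob

      grid_battery_diesel = [0.05, 0.10, 0.08]        # per-demand failure probs
      with_fuel_cell = grid_battery_diesel + [0.12]   # add a DER backup

      p_transfer_fault = 0.002     # extra failure mode from added complexity
      base = all_fail(grid_battery_diesel)
      added = all_fail(with_fuel_cell) + p_transfer_fault

      print(f"without DER: {base:.2e}, with DER (incl. transfer faults): {added:.2e}")

    With these placeholder values the "improved" configuration is actually worse, which is exactly the kind of outcome the report's analysis is designed to detect.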

  13. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  14. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the reliability analysis of distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission, and protection of data. The paper presents a scheme of distributed computer system functioning, represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. The reliability analysis considers indicators such as the probability of the system's transition into stopped or accident states, as well as the intensities of these transitions. The proposed model yields relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of random variables or the number of elements in the system.
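
    The state-graph idea can be sketched as a simple continuous-time simulation. Note the paper's treatment avoids distributional assumptions, whereas this sketch assumes exponential holding times, and all states and intensities below are invented:

      import random

      RATES = {  # state -> list of (next_state, intensity per hour)
          "ok":       [("degraded", 1e-3), ("stopped", 1e-5)],
          "degraded": [("ok", 5e-2), ("stopped", 1e-3), ("accident", 1e-4)],
      }
      ABSORBING = {"stopped", "accident"}

      def simulate(horizon_h=10_000.0):
          t, state = 0.0, "ok"
          while t < horizon_h and state not in ABSORBING:
              total = sum(r for _, r in RATES[state])
              t += random.expovariate(total)       # time to next transition
              u, acc = random.random() * total, 0.0
              for nxt, r in RATES[state]:          # pick an arc by intensity
                  acc += r
                  if u <= acc:
                      state = nxt
                      break
          return state

      runs = 5_000
      hits = sum(simulate() in ABSORBING for _ in range(runs))
      print("P(stop or accident within horizon) =", hits / runs)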

  15. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX.

    SciTech Connect

    Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

    2009-06-09

    Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is approximately 375 kW, including a fission power of approximately 260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed capability of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the

  16. Impact of Distributed Energy Resources on the Reliability of Critical Telecommunications Facilities: Preprint

    SciTech Connect

    Robinson, D. G.; Arent, D. J.; Johnson, L.

    2006-06-01

    This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of the power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources for backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with an assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
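
    For readers unfamiliar with the Bayesian step, a minimal single-level sketch of estimating one configuration's failure probability with uncertainty (the paper's model is hierarchical; these data are invented and SciPy is assumed):

      from scipy import stats

      a0, b0 = 1.0, 19.0            # Beta prior: mean failure prob 0.05
      failures, demands = 2, 120    # hypothetical test data for a configuration

      # Conjugate update: Beta(a0 + failures, b0 + successes).
      posterior = stats.beta(a0 + failures, b0 + demands - failures)
      lo, hi = posterior.ppf([0.05, 0.95])
      print(f"mean p_fail = {posterior.mean():.4f}, "
            f"90% interval = [{lo:.4f}, {hi:.4f}]")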

  17. Single-computer HWIL simulation facility for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Ernst D.

    1998-07-01

    UBM has been working on autonomous vision systems for aircraft for more than a decade and a half. The systems developed use standard on-board sensors and two additional monochrome cameras for state estimation of the aircraft. A common task is to detect and track a runway for an autonomous landing approach. The cameras have different focal lengths and are mounted on a special pan-and-tilt camera platform. As the platform is equipped with two resolvers and two gyros, it can be stabilized inertially, and the system has the ability to actively focus on the objects of highest interest. For verification and testing, UBM has a special HWIL simulation facility for real-time vision systems. The central part of this simulation facility is a three-axis motion simulator (DBS). It is used to realize the computed orientation in the rotational degrees of freedom of the aircraft. The two-axis camera platform with its two CCD cameras is mounted on the inner frame of the DBS and points at a cylindrical projection screen on which a synthetic view is displayed. As the performance of visual perception systems has increased significantly in recent years, a new, more powerful synthetic vision system was required. A single Onyx2 machine replaced all the former simulation computers. This computer is powerful enough to simulate the aircraft, generate a high-resolution synthetic view, control the DBS, and communicate with the image processing computers. Further improvements are the significantly reduced delay times for closed-loop simulations and the elimination of communication overhead.

  18. Computer simulation of an alternate-energy-based, high-density brooding facility

    SciTech Connect

    Simmons, J.D.

    1986-01-01

    A computer model was developed to simulate a poultry brooding facility characterized by high-density cage or floor brooding, environmental housing, ventilation heat recovery, solar energy collection, and biogas generation. Repeated simulations revealed the following: (1) Solar collection and ventilation heat recovery could reduce fossil fuel use by 12 and 91%, respectively. Combining solar collection and heat recovery may reduce fossil fuel use by only an additional 1.5%. (2) Methane generation can provide more energy on a yearly basis than is required for supplemental heat for brooding. Seasonal energy demands do not match supplies from methane generation and shortages may occur in winter as well as excesses in summer. A digester operated in the thermophilic temperature range produces more net energy than one operated in the mesophilic range. (3) Operating expenses for the simulated cage facility exceeded conventional brooding. (4) Relative humidity patterns of certain areas create the need for complex controls to properly maintain the internal environment. (5) Feed and fuel account for nearly 100% of the operating expenses of brooding. Controlling heat and ventilation with a microprocessor may be the only way to optimize the environment of a broiler brooding facility.

  19. Computational methods for the control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Cliff, E. M.; Powers, R. K.

    1986-01-01

    Finite dimensional approximation schemes that work well for distributed parameter systems are often not suitable for the analysis and implementation of feedback control systems. The relationship between approximation schemes for distributed parameter systems and their application to optimal control problems is discussed. A numerical example is given.

  20. The Development of a Computer Assisted Distribution and Assignment (CADA) System for Navy Enlisted Personnel.

    ERIC Educational Resources Information Center

    Whitehead, Randall F.; And Others

    This report describes the development of a computerized system to assist Navy personnel managers in carrying out the functions associated with the distribution and assignment of enlisted personnel. This Computer Assisted Distribution and Assignment (CADA) System is aimed at the most efficient interaction between the computer and human manager to…

  1. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (ESTSC)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e., middleware) to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  2. Overview of the human brain as a distributed computing network

    SciTech Connect

    Gevins, A.S.

    1983-01-01

    The hierarchically organized human brain is viewed as a prime example of a massively parallel, adaptive information processing and process control system. A brief overview of the human brain is provided for computer architects, in hopes that the principles of massive parallelism, dense connectivity and self-organization of assemblies of processing elements will prove relevant to the design of fifth generation VLSI computing networks. 6 references.

  3. Distributed data access in the LAMPF (Los Alamos Meson Physics Facility) control system

    SciTech Connect

    Schaller, S.C.; Bjorklund, E.A.

    1987-01-01

    We have extended the Los Alamos Meson Physics Facility (LAMPF) control system software to allow uniform access to data and controls throughout the control system network. Two aspects of this work are discussed here. Of primary interest is the use of standard interfaces and standard messages to allow uniform and easily expandable inter-node communication. A locally designed remote procedure call protocol will be described. Of further interest is the use of distributed databases to allow maximal hardware independence in the controls software. Application programs use local partial copies of the global device description database to resolve symbolic device names.
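
    A sketch of resolving a symbolic device name against a local partial copy of the global device description database, falling back to the network only on a cache miss; all names and fields here are hypothetical, not LAMPF identifiers:

      GLOBAL_DB = {  # stands in for the control-system-wide database
          "BEAM:CURRENT": {"node": "fe-03", "channel": 17, "units": "mA"},
          "TANK1:VACUUM": {"node": "fe-07", "channel": 4,  "units": "torr"},
      }

      class DeviceResolver:
          def __init__(self):
              self.local = {}                          # partial local copy

          def resolve(self, name):
              if name not in self.local:               # miss: fetch remotely
                  self.local[name] = GLOBAL_DB[name]   # placeholder for an RPC
              return self.local[name]

      r = DeviceResolver()
      desc = r.resolve("BEAM:CURRENT")
      print(f"read channel {desc['channel']} on {desc['node']} ({desc['units']})")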

  4. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Technical Reports Server (NTRS)

    Schuster, David M.

    1993-01-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  5. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.
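
    The pointwise core of such an inverse computation can be sketched with classical beam theory, far simpler than the paper's modal formulation: given a torque distribution and a target twist, the required torsional stiffness follows from GJ(y) = T(y) / (dθ/dy). All distributions below are invented for illustration:

      import numpy as np

      y = np.linspace(0.0, 10.0, 101)                 # span stations [m]
      torque = 5e4 * (1.0 - y / y[-1]) ** 2           # assumed torque dist. [N m]
      theta_target = np.deg2rad(3.0) * (y / y[-1])    # desired twist [rad]

      dtheta_dy = np.gradient(theta_target, y)        # twist rate [rad/m]
      GJ = np.where(dtheta_dy > 0,                    # required stiffness
                    torque / dtheta_dy, np.inf)

      print("required GJ at root: %.3e N m^2" % GJ[0])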

  6. Distributed Network, Wireless and Cloud Computing Enabled 3-D Ultrasound; a New Medical Technology Paradigm

    PubMed Central

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example: the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  7. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example: the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  8. Proceedings of the fifth IEEE international symposium on high performance distributed computing

    SciTech Connect

    1996-12-31

    This report contains papers from the Fifth IEEE International Symposium on High Performance Distributed Computing. Some of the areas covered are: collaboration tools (multimedia track); applications; distributed and parallel programming; metacomputing track; multimedia applications; tools and practice; networks for distributed applications; multimedia networks; languages and algorithms; networks of workstations; metacomputing track - invited papers; quality of service; distributed shared memory; networks and protocols; I/O systems and storage; wide-area distributed systems; communications - design and architecture; and parallel systems.

  9. Raman distributed temperature measurement at CERN high energy accelerator mixed field radiation test facility (CHARM)

    NASA Astrophysics Data System (ADS)

    Toccafondo, Iacopo; Nannipieri, Tiziano; Signorini, Alessandro; Guillermain, Elisa; Kuhnhenn, Jochen; Brugger, Markus; Di Pasquale, Fabrizio

    2015-09-01

    In this paper we present a validation of distributed Raman temperature sensing (RDTS) at the CERN high energy accelerator mixed field radiation test facility (CHARM), newly developed in order to qualify electronics for the challenging radiation environment of accelerators and connected high energy physics experiments. By investigating the effect of wavelength dependent radiation induced absorption (RIA) on the Raman Stokes and anti-Stokes light components in radiation tolerant Ge-doped multi-mode (MM) graded-index optical fibers, we demonstrate that Raman DTS used in loop configuration is robust to harsh environments in which the fiber is exposed to a mixed radiation field. The temperature profiles measured on commercial Ge-doped optical fibers are fully reliable and can therefore be used to correct the RIA temperature dependence in distributed radiation sensing systems based on P-doped optical fibers.
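
    The underlying measurement principle follows the textbook Raman ratio relation: the anti-Stokes/Stokes power ratio varies as R(T) = (λS/λAS)^4 · exp(-hcΔν/kT), so temperature is recovered by inverting a measured ratio. The constants below are illustrative ballpark values for a 1550 nm pump, not CHARM calibration data:

      import math

      h, c, k = 6.626e-34, 3.0e8, 1.381e-23    # SI constants
      dnu = 4.4e4                              # Raman shift ~440 cm^-1, in m^-1
      lam_s, lam_as = 1663e-9, 1448e-9         # Stokes / anti-Stokes wavelengths

      def ratio(T):
          return (lam_s / lam_as) ** 4 * math.exp(-h * c * dnu / (k * T))

      def temperature(R):
          return h * c * dnu / (k * math.log((lam_s / lam_as) ** 4 / R))

      T = 300.0
      print("round trip:", temperature(ratio(T)))   # recovers 300 K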

  10. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
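
    For modern readers, the same distributions are available off the shelf in SciPy; a short sketch of present-day equivalents of a few of the routines listed:

      from scipy import stats

      # Quantiles (inverse CDF) of some distributions the report provides:
      print(stats.gamma(a=2.0).ppf(0.99))          # gamma
      print(stats.chi2(df=5).ppf(0.95))            # chi-square
      print(stats.pearson3(skew=0.5).ppf(0.90))    # Pearson Type III
      print(stats.weibull_min(c=1.5).ppf(0.50))    # Weibull

      # Random numbers from a normal, as the report's generators provide:
      print(stats.norm().rvs(size=5, random_state=42))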

  11. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  12. GASFLOW: A computational model to analyze accidents in nuclear containment and facility buildings

    SciTech Connect

    Travis, J.R.; Nichols, B.D.; Wilson, T.L.; Lam, K.L.; Spore, J.W.; Niederauer, G.F.

    1993-01-01

    GASFLOW is a finite-volume computer code that solves the time-dependent, compressible Navier-Stokes equations for multiple gas species. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting liquids or gases to simulate diffusion or propagating flames in complex geometries of nuclear containment or confinement and facilities' buildings. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. The ventilation system may consist of extensive ductwork, filters, dampers or valves, and fans. Condensation and heat transfer to walls, floors, ceilings, and internal structures are calculated to model the appropriate energy sinks. Solid and liquid aerosol behavior is simulated to give the time and space inventory of radionuclides. The solution procedure of the governing equations is a modified Los Alamos ICE'd-ALE methodology. Complex facilities can be represented by separate computational domains (multiblocks) that communicate through overlapping boundary conditions. The ventilation system is superimposed throughout the multiblock mesh. Gas mixtures and aerosols are transported through the free three-dimensional volumes and the restricted one-dimensional ventilation components as the accident and fluid flow fields evolve. Combustion may occur if sufficient fuel and reactant or oxidizer are present and have an ignition source. Pressure and thermal loads on the building, structural components, and safety-related equipment can be determined for specific accident scenarios. GASFLOW calculations have been compared with large oil-pool fire tests in the 1986 HDR containment test T52.14, which is a 3000-kW fire experiment. The computed results are in good agreement with the observed data.

  13. Spatially Resolved Temperature and Water Vapor Concentration Distributions in Supersonic Combustion Facilities by TDLAT

    NASA Technical Reports Server (NTRS)

    Busa, K. M.; McDaniel, J. C.; Diskin, G. S.; DePiro, M. J.; Capriotti, D. P.; Gaffney, R. L.

    2012-01-01

    Detailed knowledge of the internal structure of high-enthalpy flows can provide valuable insight to the performance of scramjet combustors. Tunable Diode Laser Absorption Spectroscopy (TDLAS) is often employed to measure temperature and species concentration. However, TDLAS is a path-integrated line-of-sight (LOS) measurement, and thus does not produce spatially resolved distributions. Tunable Diode Laser Absorption Tomography (TDLAT) is a non-intrusive measurement technique for determining two-dimensional spatially resolved distributions of temperature and species concentration in high enthalpy flows. TDLAT combines TDLAS with tomographic image reconstruction. More than 2500 separate line-of-sight TDLAS measurements are analyzed in order to produce highly resolved temperature and species concentration distributions. Measurements have been collected at the University of Virginia's Supersonic Combustion Facility (UVaSCF) as well as at the NASA Langley Direct-Connect Supersonic Combustion Test Facility (DCSCTF). Due to the UVaSCF's unique electrical heating and ability for vitiate addition, measurements collected at the UVaSCF are presented as a calibration of the technique. Measurements collected at the DCSCTF required significant modifications to system hardware and software designs due to its larger measurement area and shorter test duration. Tomographic temperature and water vapor concentration distributions are presented from experimentation on the UVaSCF operating at a high-temperature non-reacting condition with a water vitiation level of 12%. Initial LOS measurements from the NASA Langley DCSCTF operating at an equivalence ratio of 0.5 are also presented. Results show the capability of TDLAT to adapt to several experimental setups and test parameters.
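
    The tomographic step can be illustrated with a generic algebraic reconstruction technique (ART/Kaczmarz) loop; this is a stand-in for, not a copy of, the TDLAT reconstruction. A is the path-length matrix (one row per LOS measurement) and b the measured absorbances:

      import numpy as np

      def art(A, b, iters=200, relax=0.5):
          x = np.zeros(A.shape[1])
          row_norms = (A * A).sum(axis=1)
          for _ in range(iters):
              for i in range(A.shape[0]):           # sweep the LOS equations
                  if row_norms[i] > 0:
                      x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
              np.clip(x, 0, None, out=x)            # concentrations >= 0
          return x

      # Tiny synthetic check: a 4-pixel field probed by 6 rays.
      rng = np.random.default_rng(1)
      A = rng.uniform(0, 1, size=(6, 4))            # stand-in path lengths
      x_true = np.array([1.0, 0.5, 0.2, 0.8])
      x_rec = art(A, A @ x_true)
      print(np.round(x_rec, 3), "vs", x_true)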

  14. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25-degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized, and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  15. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near-orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communication. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
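
    Algebraic grid generation of this general family can be illustrated with transfinite interpolation, where interior points are blended algebraically from boundary curves. The Python sketch below is a standard TFI example offered as an analogy; the paper's homotopic formulation differs in detail.

        # Minimal 2-D transfinite interpolation (TFI) sketch: an algebraic grid
        # blended from four boundary curves. Illustrative only; the paper's
        # homotopic blending scheme is a different algebraic formulation.
        import numpy as np

        def tfi(bottom, top, left, right):
            """Blend four discrete boundary curves, shapes (n,2) and (m,2),
            into an interior grid. Corner points of the curves must match."""
            n, m = len(bottom), len(left)
            s = np.linspace(0, 1, n)[:, None, None]   # parameter along bottom/top
            t = np.linspace(0, 1, m)[None, :, None]   # parameter along left/right
            B, T = bottom[:, None, :], top[:, None, :]
            L, R = left[None, :, :], right[None, :, :]
            grid = (1 - t) * B + t * T + (1 - s) * L + s * R
            # subtract the doubly counted corner contributions
            grid -= ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
                     + (1 - s) * t * top[0] + s * t * top[-1])
            return grid                                # shape (n, m, 2)

        n = m = 9
        xs = np.linspace(0, 1, n)
        bottom = np.stack([xs, 0.1 * np.sin(np.pi * xs)], axis=1)  # curved wall
        top = np.stack([xs, np.ones(n)], axis=1)
        left = np.stack([np.zeros(m), np.linspace(0, 1, m)], axis=1)
        right = np.stack([np.ones(m), np.linspace(0, 1, m)], axis=1)
        print(tfi(bottom, top, left, right).shape)     # (9, 9, 2) body-fitted grid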

  16. Concentration profiles and spatial distribution of perfluoroalkyl substances in an industrial center with condensed fluorochemical facilities.

    PubMed

    Shan, Guoqiang; Wei, Mingcui; Zhu, Lingyan; Liu, Zhengtao; Zhang, Yahui

    2014-08-15

    Jiangsu Hi-tech Fluorochemical Industry Park, China, is one of the largest fluorochemical industry centers in Asia and could be a point source of polyfluoroalkyl substances (PFASs) to the surrounding environment. Besides water, sediment, and soil samples, tree leaves and bark were also collected to monitor airborne PFASs around the facilities. Perfluorooctanoic acid and short-chain perfluorocarboxylates, including perfluorohexanoic acid and perfluoropentanoic acid, were predominant in all the samples. The target ∑PFASs were found mainly in the dissolved phase, with a proportion of 96.5±2.9%. High concentrations of ∑PFASs (up to 12,700 ng/L in surface water) were found at sites near and within the wastewater treatment plant and the facilities. The ∑PFASs in the sediment/sludge were in the range of 3.33-324 ng/g dw. For the first time, tree samples were used for bio-monitoring airborne PFASs in the environment. The ∑PFASs in the tree leaf and bark samples were in the range of 10.0-276 and 6.76-120 ng/g dw, respectively. The spatial distribution of ∑PFASs in the tree leaves suggested that airborne PFASs could be transported from the center to the surrounding environment by the prevailing wind. PMID:24867700

  17. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular the multivariate normal and Wishart distributions, is considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
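
    The transformation idea can be checked numerically: if X = mu + L·Z with Z a vector of independent standard normals, then E[X] = mu and Cov(X) = L·L^T. A minimal Monte Carlo confirmation follows (an illustration, not the article's derivation):

        # Monte Carlo check that an affine map of independent normals has
        # mean mu and covariance L @ L.T (illustrative; not the article's method).
        import numpy as np

        rng = np.random.default_rng(0)
        mu = np.array([1.0, -2.0, 0.5])
        L = np.array([[2.0, 0.0, 0.0],
                      [0.6, 1.0, 0.0],
                      [-.3, 0.4, 0.8]])          # Cholesky-like factor

        Z = rng.standard_normal((200_000, 3))    # independent N(0,1) components
        X = mu + Z @ L.T                         # each row is x = mu + L z

        print(np.allclose(X.mean(axis=0), mu, atol=0.02))      # ~ mu
        print(np.allclose(np.cov(X.T), L @ L.T, atol=0.05))    # ~ L L^T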

  18. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten-petaFLOPS supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators such as Sherpa typically have a split workload: a small-scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.
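
    A scheduler-plugin layer of the kind attributed to Balsam can be sketched as a registry mapping scheduler names to submission adapters. Everything below (class names, commands, signatures) is a hypothetical Python illustration, not Balsam's real API.

        # Hypothetical sketch of a plugin-style scheduler interface in the
        # spirit of Balsam: a registry of per-scheduler submission adapters.
        # Class names and commands are illustrative, not Balsam's real API.
        from abc import ABC, abstractmethod

        class SchedulerPlugin(ABC):
            registry = {}

            def __init_subclass__(cls, name, **kw):
                super().__init_subclass__(**kw)
                SchedulerPlugin.registry[name] = cls      # auto-register plugins

            @abstractmethod
            def submit_command(self, script, nodes, minutes):
                ...

        class CobaltPlugin(SchedulerPlugin, name="cobalt"):
            def submit_command(self, script, nodes, minutes):
                return ["qsub", "-n", str(nodes), "-t", str(minutes), script]

        class CondorPlugin(SchedulerPlugin, name="htcondor"):
            def submit_command(self, script, nodes, minutes):
                return ["condor_submit", script]          # resources set in the file

        def submit(scheduler, script, nodes=1, minutes=30):
            plugin = SchedulerPlugin.registry[scheduler]()
            print("would run:", " ".join(plugin.submit_command(script, nodes, minutes)))

        submit("cobalt", "run_sherpa.sh", nodes=1024, minutes=60)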

  19. Learning General Phonological Rules from Distributional Information: A Computational Model

    ERIC Educational Resources Information Center

    Calamaro, Shira; Jarosz, Gaja

    2015-01-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…

  20. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing presents new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, which takes into account multi-core problems in areas as widely differing as ambient-intelligence sensor networks and cloud computing. It argues that the essence lies in a suitable allocation of free-moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a collection of software snippets judiciously injected into the hardware so that a system function again appears as a single whole. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as knowledge integration, awareness collection, situation display and reporting, communication of clues, and an inquiry interface. Sensors provide functions such as anomaly detection (communicating only singularities, not continuous observations); they are generally powered or self-powered, amorphous (not on a grid) with generation and attrition, field-reprogrammable, and plug-and-play-able. Together the collector and the sensor are part of the skeleton-injector mechanism, added to every node, which gives the network the ability to organize itself into one of many topologies. Finally, we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  1. Distributed sequence alignment applications for the public computing architecture.

    PubMed

    Pellicer, S; Chen, G; Chan, K C C; Pan, Y

    2008-03-01

    The public computing architecture shows promise as a platform for solving fundamental problems in bioinformatics, such as global gene sequence alignment and data mining with tools such as the basic local alignment search tool (BLAST). Our implementation of these two problems on the Berkeley Open Infrastructure for Network Computing (BOINC) platform demonstrates a runtime reduction factor of 1.15 for sequence alignment and 16.76 for BLAST. While the runtime reduction factor of the global gene sequence alignment application is modest, this value is based on a theoretical sequential runtime extrapolated from the calculation of a smaller problem. Because this runtime is extrapolated from running the calculation in memory, the theoretical sequential run would require 37.3 GB of memory on a single system. With this in mind, the BOINC implementation offers not only the reduced runtime but also the aggregation of the available memory of all participant nodes. If an actual sequential run of the problem were compared, a more drastic reduction in runtime would be seen, owing to the additional secondary-storage I/O overhead of a practical system. Despite the limitations of the public computing architecture, most notably in communication overhead, it represents a practical platform for grid- and cluster-scale bioinformatics computations today and shows great potential for future implementations. PMID:18334454

  2. Assessment of the Distribution of Toxic Release Inventory Facilities in Metropolitan Charleston: An Environmental Justice Case Study

    PubMed Central

    Fraser-Rahim, Herb; Williams, Edith; Zhang, Hongmei; Rice, LaShanta; Svendsen, Erik; Abara, Winston

    2012-01-01

    Objectives. We assessed spatial disparities in the distribution of Toxic Release Inventory (TRI) facilities in Charleston, SC. Methods. We used spatial methods and regression to assess burden disparities in the study area at the block and census-tract levels by race/ethnicity and socioeconomic status (SES). Results. Results revealed an inverse relationship between distance to TRI facilities and race/ethnicity and SES at the block and census-tract levels. Results of regression analyses showed a positive association between presence of TRI facilities and high percentage non-White and a negative association between number of TRI facilities and high SES. Conclusions. There are burden disparities in the distribution of TRI facilities in Charleston at the block and census-tract level by race/ethnicity and SES. Additional research is needed to understand cumulative risk in the region. PMID:22897529

  3. Experimental and computational studies of fatty acid distribution networks.

    PubMed

    Liu, Yong; Buendía-Rodríguez, Germán; Peñuelas-Rívas, Claudia Giovanna; Tan, Zhiliang; Rívas-Guevara, María; Tenorio-Borroto, Esvieta; Munteanu, Cristian R; Pazos, Alejandro; González-Díaz, Humberto

    2015-11-01

    Unbalanced uptake of Omega 6/Omega 3 (ω-6/ω-3) ratios could increase chronic disease occurrences, such as inflammation, atherosclerosis, or tumor proliferation, and methylation methods for measuring the ruminal microbiome fatty acid (FA) composition/distribution play a vital role in discovering the contribution of food components to ruminant products (e.g., meat and milk) when pursuing a healthy diet. Hansch's models based on Linear Free Energy Relationships (LFERs) using physicochemical parameters, such as partition coefficients, molar refractivity, and polarizability, as input variables (V(k)) are advocated. In this work, a new combined experimental and theoretical strategy was proposed to study the effect of ω-6/ω-3 ratios, FA chemical structure, and other factors over FA distribution networks in the ruminal microbiome. In step 1, experiments were carried out to measure long chain fatty acid (LCFA) profiles in the rumen microbiome (bacterial and protozoan), and volatile fatty acids (VFAs) in fermentation media. In step 2, the proportions and physicochemical parameter values of LCFAs and VFAs were calculated under different boundary conditions (cj) like c1 = acid and/or base methylation treatments, c2 = with/without fermentation, c3 = FA distribution phase (media, bacterial, or protozoan microbiome), etc. In step 3, Perturbation Theory (PT) and LFER ideas were combined to develop a PT-LFER model of a FA distribution network using physicochemical parameters (V(k)), the corresponding Box-Jenkins (ΔV(kj)) and PT operators (ΔΔV(kj)) in statistical analysis. The best PT-LFER model found predicted the effects of perturbations over the FA distribution network with sensitivity, specificity, and accuracy > 80% for 407,655 cases in training + external validation series. In step 4, alternative PT-LFER and PT-NLFER models were tested for training Linear and Non-Linear Artificial Neural Networks (ANNs). PT-NLFER models based on ANNs presented better performance but are

  4. Automation of the CFD Process on Distributed Computing Systems

    NASA Technical Reports Server (NTRS)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment (IDE) software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods in parametric design studies, the script system was developed using the UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource's queueing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
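
    The fallback queue for hosts without queueing software can be sketched in a few lines. The snippet below is a hypothetical Python illustration of such a first-in-first-out runner, not the original Perl/shell ADTT scripts.

        # Hypothetical first-in-first-out job runner of the kind the script
        # system used on hosts without queueing software (illustration only).
        import subprocess
        from collections import deque

        class FifoQueue:
            def __init__(self):
                self.jobs = deque()

            def submit(self, cmd, workdir="."):
                """Queue a solver invocation; jobs run strictly in this order."""
                self.jobs.append((cmd, workdir))

            def run_all(self):
                while self.jobs:
                    cmd, workdir = self.jobs.popleft()
                    print("running:", " ".join(cmd))
                    subprocess.run(cmd, cwd=workdir, check=True)

        q = FifoQueue()
        q.submit(["echo", "INS2D case 1"])    # stand-ins for real solver runs
        q.submit(["echo", "PMARC case 2"])
        q.run_all()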

  5. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture, with a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free and Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command-line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v6.4, 6.5, and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times

  6. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well under way toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept for structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  7. Reviews of computing technology: Fiber distributed data interface. Revision

    SciTech Connect

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement this technology at the Savannah River Site.

  8. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement this technology at the Savannah River Site.

  9. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  10. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  11. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
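
    The nearest-neighbor communication pattern described here survives unchanged in modern message-passing libraries. Below is an illustrative halo exchange on a 1-D processor chain using mpi4py, a present-day stand-in for the paper's PVM calls.

        # Nearest-neighbor halo exchange on a 1-D processor chain: the
        # communication pattern described above, in mpi4py rather than PVM.
        # Run with: mpiexec -n 4 python halo.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        u = np.full(10, float(rank))              # cells owned by this rank
        halo_l, halo_r = np.full(1, np.nan), np.full(1, np.nan)

        # one deadlock-free combined send/receive per direction
        comm.Sendrecv(sendbuf=u[:1], dest=left, recvbuf=halo_r, source=right)
        comm.Sendrecv(sendbuf=u[-1:], dest=right, recvbuf=halo_l, source=left)
        print(f"rank {rank}: left halo {halo_l[0]}, right halo {halo_r[0]}")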

  12. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition, multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code which spans systems of this sort must be scalable, yet constructing such code, whether for applications, debugging, or maintenance, is an unsolved problem. Lilith is a research software platform that attempts to address these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  13. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
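
    The flavor of these chain-partitioning problems can be seen in a standard simplification: split a chain of module weights into contiguous blocks, one per processor, minimizing the heaviest block. The Python sketch below solves this textbook variant by binary search plus a greedy feasibility test; it is an illustration, not the paper's Sum-Bottleneck path algorithm.

        # Contiguous chain partitioning minimizing the bottleneck (heaviest
        # block), via binary search over the bottleneck value plus a greedy
        # feasibility test. A textbook variant, not the paper's algorithm.

        def blocks_needed(weights, cap):
            """Greedy count of contiguous blocks if no block may exceed cap."""
            count, load = 1, 0
            for w in weights:
                if load + w > cap:
                    count, load = count + 1, w
                else:
                    load += w
            return count

        def min_bottleneck(weights, processors):
            lo, hi = max(weights), sum(weights)
            while lo < hi:                  # smallest feasible bottleneck cap
                mid = (lo + hi) // 2
                if blocks_needed(weights, mid) <= processors:
                    hi = mid
                else:
                    lo = mid + 1
            return lo

        modules = [4, 7, 2, 9, 3, 5, 6, 1]  # per-module execution costs
        print(min_bottleneck(modules, 3))   # -> 13, e.g. [4,7,2] [9,3] [5,6,1]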

  14. Partitioning problems in parallel, pipelined, and distributed computing

    SciTech Connect

    Bokhari, S.H.

    1988-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple-computer system is addressed. A sum-bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple-satellite system: partitioning multiple chain-structured parallel programs, multiple arbitrarily structured serial programs, and single-tree structured parallel programs. In addition, the problem of partitioning chain-structured parallel programs across chain-connected systems is solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple-computer architectures for a wide range of problems of practical interest.

  15. [Computer simulated images of radiopharmaceutical distributions in anthropomorphic phantoms]

    SciTech Connect

    Not Available

    1991-05-17

    We have constructed an anatomically correct human geometry which can be used to store radioisotope concentrations in 51 internal organs. Each organ is associated with an index number which references its attenuating characteristics (composition and density). The initial development of Computer Simulated Images of Radiopharmaceutical Distributions in Anthropomorphic Phantoms (CSIRDAP) over the first 3 years has been very successful. All components of the simulation have been coded, made operational, and debugged.

  16. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet. PMID:26193496
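
    For reference, the classical Blahut-Arimoto baseline the authors compare against can be written down directly. The Python sketch below is a textbook implementation for a discrete memoryless channel, not the paper's new method.

        # Textbook Blahut-Arimoto iteration for the capacity of a discrete
        # memoryless channel P(y|x); the classical baseline, not the new method.
        import numpy as np

        def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
            """P[x, y] = channel transition probabilities; capacity in bits."""
            p = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform input to start
            for _ in range(max_iter):
                q = p @ P                              # output distribution q(y)
                with np.errstate(divide="ignore", invalid="ignore"):
                    log_ratio = np.where(P > 0, np.log2(P / q), 0.0)
                D = (P * log_ratio).sum(axis=1)        # D(P(.|x) || q) per input
                I_lower, I_upper = p @ D, D.max()      # capacity is bracketed
                if I_upper - I_lower < tol:
                    return I_lower
                p *= np.exp2(D - D.max())              # multiplicative update
                p /= p.sum()
            return I_lower

        # Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531
        bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
        print(blahut_arimoto(bsc))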

  17. Characterizing W-2 SLSF experiment temperature oscillations using computer graphics. [Sodium Loop Safety Facility

    SciTech Connect

    Smith, D.E.

    1983-06-23

    The W-2 SLSF (Sodium Loop Safety Facility) experiment was an instrumented in-reactor test performed to characterize the failure response of full-length, preconditioned LMFBR prototypic fuel pins to slow transient overpower (TOP) conditions. Although the test results were expected to confirm analytical predictions of upper-level failure and fuel expulsion, an axial midplane failure was experienced. Extensive post-test analyses were conducted to understand all of the unexpected behavior in the experiment. The initial post-test effort focused on the temperature oscillations recorded by the 54 thermocouples used in the experiment. In order to synthesize the extensive data records and identify patterns of behavior in them, a computer-generated film was used to present the temperature data recorded during the experiment.

  18. [Elderly people living on the streets or in social vulnerability: facilities and difficulties in the use of computational tools].

    PubMed

    Frias, Marcos Antonio da Eira; Peres, Heloisa Helena Ciqueto; Pereira, Valclei Aparecida Gandolpho; Negreiros, Maria Célia de; Paranhos, Wana Yeda; Leite, Maria Madalena Januário

    2014-01-01

    This study aimed to identify the advantages and difficulties encountered by older people living on the streets or in social vulnerability when using computers or the internet. It is an exploratory qualitative study in which five elderly people attended by a non-governmental organization located in the city of São Paulo participated. The discourses were analyzed by the content analysis technique and showed, as advantages, among others, clarifying doubts with the monitors, the stimulus toward new discoveries coupled with proactivity and curiosity, and developing new skills. The difficulties mentioned related to physical or cognitive issues, the absence of an instructor, and a lack of knowledge of how to interact with the machine. Studies focusing on the elderly population living on the streets or in social vulnerability may contribute evidence to guide the formulation of public policies for this population. PMID:25517671

  19. Analysis of neutron flux distribution for the validation of computational methods for the optimization of research reactor utilization.

    PubMed

    Snoj, L; Trkov, A; Jaćimović, R; Rogan, P; Zerovnik, G; Ravnik, M

    2011-01-01

    In order to verify and validate the computational methods for neutron flux calculation in TRIGA research reactor calculations, a series of experiments was performed. The neutron activation method was used to verify the calculated neutron flux distribution in the TRIGA reactor. Aluminium (99.9 wt%)-gold (0.1 wt%) foils (disks 5 mm in diameter and 0.2 mm thick) were irradiated in 33 locations: 6 in the core and 27 in the carrousel facility in the reflector. The experimental results were compared to calculations performed with the Monte Carlo code MCNP using a detailed geometrical model of the reactor. The calculated and experimental normalized reaction rates in the core are in very good agreement for both isotopes, indicating that the material and geometrical properties of the reactor core are modelled well. In conclusion, one can state that our computational model describes the neutron flux and reaction rate distribution in the reactor core very well. In the reflector, however, the accuracy of the epithermal and thermal neutron flux distribution and attenuation is lower, mainly due to a lack of information about the material properties of the graphite reflector surrounding the core, but the differences between measurements and calculations are within 10%. Since our computational model properly describes the reactor core, it can be used for calculations of reactor core parameters and for optimization of research reactor utilization. PMID:20855215

  20. A design study for the upgraded ALICE O2 computing facility

    NASA Astrophysics Data System (ADS)

    Richter, Matthias

    2015-12-01

    An upgrade of the ALICE detector is currently being prepared for the Run 3 period of the Large Hadron Collider (LHC) at CERN, starting in 2020. The physics topics under study by ALICE during this period will require the inspection of all collisions at a rate of 50 kHz for minimum-bias Pb-Pb and 200 kHz for pp and p-Pb collisions in order to extract physics signals embedded in a large background. The upgraded ALICE detector will produce more than 1 TByte/s of data. Both the collision and data rates impose new challenges on the detector readout and compute system. Some detectors will not use a triggered readout, which will require continuous processing of the detector data. The challenging requirements will be met by a combined online and offline facility developed and managed by the ALICE O2 project. The combined facility will accommodate the necessary substantial increase in data-taking rate. In this paper we present first results of a prototype, with estimates of scalability and feasibility for a full-scale system.

  1. Navier-Stokes Simulation of Airconditioning Facility of a Large Modern Computer Room

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code, which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPUs). The geometry modeling from blueprints and the grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in the shape and size of the room, or in the locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One

  2. Computational methods for the control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Cliff, E. M.; Powers, R. K.

    1985-01-01

    It is shown that care must be taken to ensure that finite dimensional approximations of distributed parameter systems preserve important system properties (i.e., controllability, observability, stabilizability, detectability, etc.). It is noted that, if the particular scheme used to construct the finite dimensional model does not take into account these system properties, the model may not be suitable for control design and analysis. These ideas are illustrated by a simple example, i.e., a cable-spring-mass system.
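
    The kind of property check the abstract calls for is easy to state concretely: form the finite-dimensional model and test the Kalman controllability rank. The Python sketch below does this for a small spring-mass chain forced at one end; it is an illustrative stand-in for the paper's cable-spring-mass example.

        # Kalman rank test on a finite-dimensional spring-mass approximation:
        # the kind of property check that must accompany discretization.
        # Illustrative sketch; the paper's cable-spring-mass example differs.
        import numpy as np

        def controllability_rank(A, B):
            n = A.shape[0]
            blocks = [B]
            for _ in range(n - 1):
                blocks.append(A @ blocks[-1])       # [B, AB, A^2 B, ...]
            return np.linalg.matrix_rank(np.hstack(blocks))

        n = 4                                        # four masses in a chain
        K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness matrix
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-K, np.zeros((n, n))]])       # x' = Ax + Bu, x = (pos, vel)
        B = np.zeros((2 * n, 1)); B[-1, 0] = 1.0     # force on the last mass only
        print(controllability_rank(A, B), "of", 2 * n)  # full rank -> controllable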

  3. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  4. Distributed process manager for an engineering network computer

    SciTech Connect

    Gait, J.

    1987-08-01

    MP is a manager for systems of cooperating processes in a local area network of engineering workstations. MP supports transparent continuation by maintaining multiple copies of each process on different workstations. Computational bandwidth is optimized by executing processes in parallel on different workstations. Responsiveness is high because workstations compete among themselves to respond to requests. The technique is to select a master from among a set of replicates of a process by a competitive election between the copies. Migration of the master when a fault occurs, or when response slows down, is effected by inducing the election of a new master. Competitive response stabilizes system behavior under load, so MP exhibits real-time behavior.

  5. The design of scalable software libraries for distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J. |

    1994-12-31

    This paper describes the design of ScaLAPACK, a scalable software library for performing dense and banded linear algebra computations on distributed memory concurrent computers. The specification of the data distribution has important consequences for interprocessor communication and load balance, and hence is a major factor in determining performance and scalability of the library routines. The block cyclic data distribution is adopted as a simple, yet general-purpose, way of decomposing block-partitioned matrices. Distributed memory versions of the Level 3 BLAS provide an easy and convenient way of implementing the ScaLAPACK routines.
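
    The block-cyclic distribution reduces to simple index arithmetic. The Python sketch below shows the standard owner and local-index formulas for a 1-D block-cyclic layout (textbook formulas, not ScaLAPACK source code).

        # Standard 1-D block-cyclic index arithmetic of the kind ScaLAPACK
        # builds on (textbook formulas, not ScaLAPACK source code).

        def owner(g, nb, nprocs):
            """Process that owns global index g for block size nb."""
            return (g // nb) % nprocs

        def global_to_local(g, nb, nprocs):
            """Local index of global index g on its owning process."""
            return (g // (nb * nprocs)) * nb + g % nb

        nb, nprocs = 2, 3
        for g in range(12):
            print(f"global {g:2d} -> proc {owner(g, nb, nprocs)}, "
                  f"local {global_to_local(g, nb, nprocs)}")
        # With nb=2 over 3 processes the blocks cycle: 0,1 -> P0; 2,3 -> P1;
        # 4,5 -> P2; 6,7 -> P0 again (local 2,3); and so on.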

  6. Applications Analysis: Principles and Examples from Various Distributed Computer Applications at Sandia National Laboratories New Mexico

    SciTech Connect

    Bateman, Dennis; Evans, David; Jensen, Dal; Nelson, Spencer

    1999-08-01

    As information systems have become distributed over many computers within the enterprise, managing those applications has become increasingly important. This is an emerging area of work, recognized as such by many large organizations as well as many start-up companies. In this report, we present a summary of the move to distributed applications, some of the problems that came along for the ride, and some specific examples of the tools and techniques we have used to analyze distributed applications and gain some insight into the mechanics and politics of distributed computing.

  7. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber-optic communication techniques to real-time avionic control systems, in particular the total automatic flight control system used for VSTOL aircraft, is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PEs). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  8. Impact of distributed energy resources on the reliability of a critical telecommunications facility.

    SciTech Connect

    Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.; Arent, Douglas

    2006-03-01

    This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of the power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure of the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improving the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with an assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.

  9. Radon gas distribution in natural gas processing facilities and workplace air environment.

    PubMed

    Al-Masri, M S; Shwiekani, R

    2008-04-01

    Evaluation was made of the distribution of radon gas and radiation exposure rates in the four main natural gas treatment facilities in Syria. The results showed that radiation exposure rates at contact with all equipment were within natural levels (0.09-0.1 μSv/h), except for the reflux pumps, where a dose rate of 3 μSv/h was recorded. Radon concentrations in Syrian natural gas varied between 15.4 Bq/m³ and 1141 Bq/m³; natural gas associated with oil production was found to contain higher concentrations than non-associated natural gas. In addition, radon concentrations were higher in the central processing facilities than at the wellheads; these high levels are due to pressurizing and concentrating processes that enhance radon gas and its decay products. Moreover, the lowest 222Rn concentration was in the natural gas fraction used for producing sulfur; a value of 80 Bq/m³ was observed. On the other hand, maximum radon gas and decay product concentrations in workplace air environments were found to be relatively high in the gas analysis laboratories; a value of 458 Bq/m³ was observed. However, all reported levels in the workplaces of the four main stations were below the action level set by the IAEA for chronic exposure situations involving radon, which is 1000 Bq/m³. PMID:17905489

  10. Enabling Extreme Scale Earth Science Applications at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Mozdzynski, G.; Hamrud, M.; Deconinck, W.; Smith, L.; Hack, J.

    2014-12-01

    The Oak Ridge Leadership Computing Facility (OLCF), established at the Oak Ridge National Laboratory (ORNL) under the auspices of the U.S. Department of Energy (DOE), welcomes investigators from universities, government agencies, national laboratories and industry who are prepared to perform breakthrough research across a broad domain of scientific disciplines, including earth and space sciences. Titan, the OLCF flagship system, is currently listed as #2 in the Top500 list of supercomputers in the world, and is the largest available for open science. The computational resources are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, sponsored by the U.S. DOE Office of Science. In 2014, over 2.25 billion core hours on Titan were awarded via INCITE projects, including 14% of the allocation toward earth sciences. The INCITE competition is also open to research scientists based outside the USA. In fact, international research projects account for 12% of the INCITE awards in 2014. The INCITE scientific review panel also includes 20% participation from international experts. Recent accomplishments in earth sciences at OLCF include the world's first continuous simulation of 21,000 years of earth's climate history (2009), and an unprecedented simulation of a magnitude 8 earthquake over 125 sq. miles. One of the ongoing international projects involves scaling the ECMWF Integrated Forecasting System (IFS) model to over 200K cores of Titan. ECMWF is a partner in the EU-funded Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project. The significance of the research carried out within this project is the demonstration of techniques required to scale current-generation petascale-capable simulation codes toward the performance levels required for running on future exascale systems. One of the techniques pursued by ECMWF is to use Fortran 2008 coarrays to overlap computations and communications and

  11. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of the electroosmotic flow, U_o, and the ratio of sample diameter to channel diameter, R.

  12. Innovation of laboratory exercises in course Distributed systems and computer networks

    NASA Astrophysics Data System (ADS)

    Souček, Pavel; Slavata, Oldřich; Holub, Jan

    2013-09-01

    This paper is focused on the innovation of laboratory exercises in the course Distributed Systems and Computer Networks. These exercises were introduced in November of 2012 and replaced older exercises in order to reflect real-life applications.

  13. Computing distribution of scale independent motifs in biological sequences

    PubMed Central

    Almeida, Jonas S; Vinga, Susana

    2006-01-01

    The use of Chaos Game Representation (CGR) or its generalization, Universal Sequence Maps (USM), to describe the distribution of biological sequences has been found objectionable because of the fractal structure of that coordinate system. Consequently, the investigation of distribution of symbolic motifs at multiple scales is hampered by an inexact association between distance and sequence dissimilarity. A solution to this problem could unleash the use of iterative maps as phase-state representation of sequences where its statistical properties can be conveniently investigated. In this study a family of kernel density functions is described that accommodates the fractal nature of iterative function representations of symbolic sequences and, consequently, enables the exact investigation of sequence motifs of arbitrary lengths in that scale-independent representation. Furthermore, the proposed kernel density includes both Markovian succession and currently used alignment-free sequence dissimilarity metrics as special solutions. Therefore, the fractal kernel described is in fact a generalization that provides a common framework for a diverse suite of sequence analysis techniques. PMID:17049089
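
    The iterated map in question is easy to state: in a CGR, each successive symbol pulls the current point halfway toward that symbol's assigned corner of the unit square. A minimal Python sketch of the standard construction:

        # Chaos Game Representation: each nucleotide moves the current point
        # halfway toward its corner of the unit square (standard construction).
        import numpy as np

        CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0),
                   "G": (1.0, 1.0), "T": (1.0, 0.0)}

        def cgr(sequence):
            """Return the CGR trajectory of a DNA sequence as an (n, 2) array."""
            point = np.array([0.5, 0.5])          # start at the square's center
            trail = []
            for base in sequence:
                point = 0.5 * (point + np.array(CORNERS[base]))
                trail.append(point.copy())
            return np.array(trail)

        print(cgr("ACGTACGGTT").round(3))
        # Points sharing a k-letter suffix land in the same 2^-k x 2^-k
        # sub-square, which is why motif statistics can be read off the map
        # at multiple scales, the fractal structure the abstract refers to.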

  14. Chemical fate and transport of atrazine in soil gravel materials at agrichemical distribution facilities

    USGS Publications Warehouse

    Roy, W.R.; Krapac, I.G.; Chou, S.-F.J.

    1999-01-01

    The gravel commonly used to cover parking lots and roadways at retail agrichemical facilities may contain relatively large concentrations of pesticides that resulted from past management problems. These pesticides may threaten groundwater quality. Previous studies, however, suggested that the pesticides had not moved from the gravel in several sample profiles. Excavations at a closed facility revealed tremendous variability in pesticide distribution within the site. Pesticides were present below the gravel in two profiles, but the mechanism(s) for their movement were not clear. The objectives of this study were to investigate how the physical and chemical properties of the gravel influence the environmental fate of atrazine. All of the gravel samples collected and characterized contained atrazine and sufficient organic C to adsorb significant amounts of atrazine, thus retarding its movement through the gravel. Laboratory column leaching experiments, however, suggested that much of the atrazine should leach from the gravel within a year or two. A field-scale test plot was constructed to study how atrazine moves through the gravel under controlled conditions. Atrazine was "spilled" in the test plot. Atrazine moved from the gravel both vertically and horizontally. It appears that formulated product spilled on gravel will leach. A single discrete spill can give rise to phantom spills whose occurrence and distribution is not related to any specific pesticide-management practice. The apparent lack of atrazine leaching from gravel appeared to be a transient phenomenon and/or the result of sampling limitations in previous studies. The contaminated gravel clearly poses a risk to groundwater quality.

  15. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources themselves are too diversified to provide a consistent classification, the structure upon which resources are built and shared is examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers), and algorithm- and/or data-control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant to the client rather than the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  16. Computation of Multimodal Size-Velocity-Temperature Spray Distribution Functions

    NASA Astrophysics Data System (ADS)

    Archambault, Mark R.

    2002-09-01

    An alternative approach to modeling spray flows, one which does not involve simulation or stochastic integration, is to directly compute the evolution of the probability density function (PDF) describing the drops. The purpose of this paper is to continue exploring an alternative method of solving the spray flow problem. The approach is to derive and solve a set of Eulerian moment transport equations for the quantities of interest in the spray, coupled with the appropriate gas-phase (Eulerian) equations. A second purpose is to continue to explore how a maximum-entropy criterion may be used to provide closure for such a moment-based model. The hope is to further develop an Eulerian-Eulerian model that will permit one to solve for detailed droplet statistics directly, without the use of stochastic integration or post-averaging of simulations.

  17. Survivable algorithms and redundancy management in NASA's distributed computing systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1992-01-01

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.

  18. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  19. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

    To fully conduct research that will support the far-term concepts, technologies and methods required to improve the safety of Air Transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services such as intelligent data-integration middleware will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers, and high-speed network connections to aircraft, and to Federal Aviation Administration (FAA), airline and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  20. Memory intensive functional architecture for distributed computer control systems

    SciTech Connect

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures, illustrated by one system implementation: the control and data-acquisition system for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  1. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. Moreover, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  2. Pit Distribution Design for Computer-Generated Waveguide Holography

    NASA Astrophysics Data System (ADS)

    Yagi, Shogo; Imai, Tadayuki; Ueno, Masahiro; Ohtani, Yoshimitsu; Endo, Masahiro; Kurokawa, Yoshiaki; Yoshikawa, Hiroshi; Watanabe, Toshifumi; Fukuda, Makoto

    2008-02-01

    Multilayered waveguide holography (MWH) is one of a number of page-oriented data multiplexing holographies that will be applied to optical data storage and three-dimensional (3D) moving images. While conventional volumetric holography using photopolymer or photorefractive materials requires page-by-page light exposure for recording, MWH media can be made by employing stamping and laminating technologies that are suitable for mass production. This makes devising an economical mastering technique for replicating holograms a key issue. In this paper, we discuss an approach to pit distribution design that enables us to replace expensive electron beam mastering with economical laser beam mastering. We propose an algorithm that avoids the overlapping of even comparatively large adjacent pits when we employ laser beam mastering. We also compensate for the angular dependence of the diffraction power, which strongly depends on pit shape, by introducing an enhancement profile so that a diffracted image has uniform intensity.
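
    To illustrate the overlap-avoidance constraint (our toy sketch, not the authors' algorithm): a candidate pit is accepted only if its center-to-center distance from every placed pit exceeds the sum of the two radii plus a guard margin sized for the laser-beam spot.

      import math
      import random

      def fits(candidate, placed, margin=0.05):
          """Accept a candidate pit only if it clears every placed pit."""
          x, y, r = candidate
          for px, py, pr in placed:
              if math.hypot(x - px, y - py) < r + pr + margin:
                  return False
          return True

      random.seed(1)
      placed = []
      attempts = 0
      while len(placed) < 500 and attempts < 100000:
          # Candidate pit: position in a 100x100 um field, radius in um.
          cand = (random.uniform(0, 100), random.uniform(0, 100),
                  random.uniform(0.3, 0.8))
          if fits(cand, placed):
              placed.append(cand)
          attempts += 1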

  3. A new distributed computing model of mobile spatial information service grid based on mobile agent

    NASA Astrophysics Data System (ADS)

    Tian, Gen; Liu, Miao-long

    2009-10-01

    A new distributed computing model of mobile spatial information service is studied based on a grid computing environment. Key technologies are presented in the model, including mobile agent (MA) distributed computing, grid computing, spatial data models, location based service (LBS), the global positioning system (GPS), code division multiple access (CDMA), transfer control protocol/internet protocol (TCP/IP), and user datagram protocol (UDP). In order to deal with the narrow bandwidth and instability of the wireless internet, the distributed organization of tremendous volumes of spatial data, and the limited processing speed and low memory of mobile devices, a new mobile agent based mobile spatial information service grid (MSISG) architecture is further proposed that has good load balance, high processing efficiency, and low network communication, and is thus suitable for mobile distributed computing environments. It can support applications of distributed spatial information computing and mobile services. The theoretical and technological architecture of MSISG is built from the ground up, including a spatial information mobile agent model, a distributed grid geographic information system (GIS) server model, a mobile agent server model, and a mobile GIS client model. An application system for MSISG was accordingly developed by the authors using Visual C++ and Embedded Visual C++. A field test was carried out with this system in Shanghai, and the results show that the proposed model and methods are feasible and adaptable for mobile spatial information service.

  4. A new distributed computing model of mobile spatial information service grid based on mobile agent

    NASA Astrophysics Data System (ADS)

    Tian, Gen; Liu, Miao-long

    2008-10-01

    A new distributed computing model of mobile spatial information service is studied based on a grid computing environment. Key technologies are presented in the model, including mobile agent (MA) distributed computing, grid computing, spatial data models, location based service (LBS), the global positioning system (GPS), code division multiple access (CDMA), transfer control protocol/internet protocol (TCP/IP), and user datagram protocol (UDP). In order to deal with the narrow bandwidth and instability of the wireless internet, the distributed organization of tremendous volumes of spatial data, and the limited processing speed and low memory of mobile devices, a new mobile agent based mobile spatial information service grid (MSISG) architecture is further proposed that has good load balance, high processing efficiency, and low network communication, and is thus suitable for mobile distributed computing environments. It can support applications of distributed spatial information computing and mobile services. The theoretical and technological architecture of MSISG is built from the ground up, including a spatial information mobile agent model, a distributed grid geographic information system (GIS) server model, a mobile agent server model, and a mobile GIS client model. An application system for MSISG was accordingly developed by the authors using Visual C++ and Embedded Visual C++. A field test was carried out with this system in Shanghai, and the results show that the proposed model and methods are feasible and adaptable for mobile spatial information service.

  5. A European Federated Cloud: Innovative distributed computing solutions by EGI

    NASA Astrophysics Data System (ADS)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research

  6. A Computer Program for Estimating True-Score Distributions and Graduating Observed-Score Distributions

    ERIC Educational Resources Information Center

    Wingersky, Marilyn S.; and others

    1969-01-01

    One in a series of nine articles in a section entitled "Electronic Computer Program and Accounting Machine Procedures." Research supported in part by contract Nonr-2752(00) from the Office of Naval Research.

  7. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on

  8. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  9. A Framework for a Computer System to Support Distributed Cooperative Learning

    ERIC Educational Resources Information Center

    Chiu, Chiung-Hui

    2004-01-01

    To develop a computer system to support cooperative learning among distributed students; developers should consider the foundations of cooperative learning. This article examines the basic elements that make cooperation work and proposes a framework for such computer supported cooperative learning (CSCL) systems. This framework is constituted of…

  10. Distributed design tools: Mapping targeted design tools onto a Web-based distributed architecture for high-performance computing

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Poore, C.A.

    1999-11-30

    Design Tools use a Web-based Java interface to guide a product designer through the design-to-analysis cycle for a specific, well-constrained design problem. When these Design Tools are mapped onto a Web-based distributed architecture for high-performance computing, the result is a family of Distributed Design Tools (DDTs). The software components that enable this mapping consist of a Task Sequencer, a generic Script Execution Service, and the storage of both data and metadata in an active, object-oriented database called the Product Database Operator (PDO). The benefits of DDTs include improved security, reliability, scalability (in both problem size and computing hardware), robustness, and reusability. In addition, access to the PDO unlocks its wide range of services for distributed components, such as lookup and launch capability, persistent shared memory for communication between cooperating services, state management, event notification, and archival of design-to-analysis session data.
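
    A toy rendering of the component split described above (ours; the real PDO and services are far richer): a Task Sequencer walks the design-to-analysis steps, hands each one to a generic script-execution service, and archives results in a dictionary standing in for the PDO.

      import subprocess

      class ScriptExecutionService:
          """Generic runner: executes a shell command, captures its output."""
          def run(self, cmd):
              result = subprocess.run(cmd, shell=True, capture_output=True,
                                      text=True)
              return result.stdout

      class TaskSequencer:
          def __init__(self, store):
              self.service = ScriptExecutionService()
              self.store = store  # stand-in for the PDO object database

          def execute(self, steps):
              for name, cmd in steps:
                  self.store[name] = self.service.run(cmd)  # archive session data

      store = {}
      TaskSequencer(store).execute([
          ("mesh", "echo generating mesh"),
          ("solve", "echo running analysis"),
          ("report", "echo post-processing"),
      ])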

  11. Model description of storage and infiltration functions of infiltration facilities for urban runoff analysis by a distributed model.

    PubMed

    Furumai, H; Jinadasa, H K P K; Murakami, M; Nakajima, F; Aryal, R K

    2005-01-01

    Although there have been simulation studies focusing on the reduction of stormwater peak flow by introduced infiltration facilities, model simulation of dynamic runoff behavior is still limited for frequently occurring rainfall events with weak intensity. Therefore, dynamic simulations were carried out in two urban drainage areas with infiltration facilities, using a distributed model and two methods for describing the functions of the infiltration facilities. A method adjusting the effective rainfall model gave poor simulation of runoff behavior in light rainfalls. Another method, considering dynamic change of storage capacity as well as infiltration rate, gave satisfactory estimation of the runoff in both drainages. In addition, assuming facility clogging improved the agreement between measured and simulated hydrographs in small and medium-sized rainfalls. Therefore, the proposed method might be useful for quantifying the secondary effects of the infiltration facilities on groundwater recharge and urban non-point pollutant trapping as well as runoff reduction. PMID:16248180

  12. A techno-economic analysis of using mobile distributed pyrolysis facilities to deliver a forest residue resource.

    PubMed

    Brown, Duncan; Rowe, Andrew; Wild, Peter

    2013-12-01

    Distributed mobile conversion facilities using either fast pyrolysis or torrefaction processes can be used to convert forest residues to more energy dense substances (bio-oil, bio-slurry or torrefied wood) that can be transported as feedstock for bio-fuel facilities. Results show that the levelised delivered cost of a forest residue resource using mobile facility networks can be lower than using conventional woodchip delivery methods under appropriate conditions. Torrefied wood is the lowest cost pathway of delivering a forest residue resource when using mobile facilities. Cost savings occur against woodchip delivery for annual forest residue harvests above 2.5 million m³ or when transport distances greater than 300 km are required. Important parameters that influence levelised delivered costs are transport distances (forest residue spatial density), haul cost factors, and initial moisture content of forest residues. Relocating mobile facilities can be optimised for lowest cost delivery as transport distances of raw biomass are reduced. PMID:24185419

  13. Automated CFD Parameter Studies on Distributed Parallel Computers

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The objective of the current work is to build a prototype software system which will automate the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for user monitoring and intervention of every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed, and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules. These include a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The details of the design of AeroDB are presented, followed by the results of a parameter study which was performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with the lessons learned in this effort and ideas for future work in this area.
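
    A minimal sketch of the launcher/run-manager split (our illustration; AeroDB's actual modules, database schema, and IPG interfaces are assumptions here): jobs are registered in a database at launch, and the run manager polls them to completion without user intervention.

      import sqlite3
      import subprocess
      import time

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, cmd TEXT, state TEXT)")

      def launch(cmd):
          """Job-launcher module: start a job and record it in the database."""
          cur = db.execute("INSERT INTO jobs (cmd, state) VALUES (?, 'running')",
                           (cmd,))
          return cur.lastrowid, subprocess.Popen(cmd, shell=True)

      def manage(active):
          """Run-manager module: poll every job until the run matrix drains."""
          while active:
              for job_id, proc in list(active):
                  if proc.poll() is not None:  # process has exited
                      db.execute("UPDATE jobs SET state = 'done' WHERE id = ?",
                                 (job_id,))
                      active.remove((job_id, proc))
              time.sleep(0.1)

      manage([launch(f"echo case {i}") for i in range(4)])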

  14. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations whose roots are directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method for developing these series and summing them in closed form is presented. It is demonstrated how computer algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
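
    The core identity behind the method can be sketched as follows (our reconstruction of the standard argument, not the paper's derivation): since the natural frequencies are ordered, the inverse-power sums are dominated by the fundamental, so if

      S_k \;=\; \sum_{n=1}^{\infty} \frac{1}{\omega_n^{2k}}

    can be summed in closed form in the system parameters, then

      \omega_1 \;\approx\; S_k^{-1/(2k)},

    with the approximation sharpening rapidly as k grows, because (\omega_1/\omega_n)^{2k} \to 0 for every n > 1; truncating at small k yields analytical estimates such as the two cantilever approximations developed in the paper.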

  15. DISTRIBUTION OF LEGIONELLA PNEUMOPHILA SEROGROUPS ISOLATED FROM WATER SYSTEMS OF PUBLIC FACILITIES IN BUSAN, SOUTH KOREA.

    PubMed

    Hwang, In-Yeong; Park, Eun-Hee; Park, Yon-Koung; Park, Sun-Hee; Sung, Gyung-Hye; Park, Hye-Young; Lee, Young-Choon

    2016-05-01

    Legionella pneumophila is the major cause of legionellosis worldwide. The distribution of L. pneumophila was investigated in water systems of public facilities in Busan, South Korea, during 2007 and 2013-2014. L. pneumophila was isolated from 8.3% of 3,055 samples, of which the highest isolation rate (49%) was from ships and the lowest (4%) from fountains. Serogroups of L. pneumophila isolated in 2007 were distributed among serogroups (sgs) 1-7 with the exception of sg 4, while isolates from 2013 and 2014 spanned 11 sgs (1, 2, 3, 4, 5, 6, 7, 8, 12, 13, 15). L. pneumophila sg 1 predominated among isolates from fountains (75%), hotels (60%), buildings (44%), hospitals (38%), and public baths (37%), whereas sg 3 and sg 7 were the most prevalent from ships (46%) and factories (40%), respectively. The predominant serogroup of L. pneumophila isolates from hot water and cooling tower water was sg 1 (35% and 46%, respectively), while from cold water it was sg 3 (29%). These results should be useful for epidemiological surveys to identify sources of outbreaks of legionellosis in Busan, South Korea. PMID:27405130

  16. Sentinel-1 Data System at the Alaska Satellite Facility Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Wolf, V. G.

    2014-12-01

    The Alaska Satellite Facility Distributed Active Archive Center (ASF DAAC) has a long history of supporting international collaborations between NASA and foreign flight agencies to promote access to Synthetic Aperture Radar (SAR) data for US science research. Based on the agreement between the US and the EC, data from the Sentinel missions will be distributed by NASA through archives that mirror those established by ESA. The ASF DAAC is the designated archive and distributor for Sentinel-1 data. The data will be copied from the ESA archive to a rolling archive at the NASA Goddard center, and then pushed to a landing area at the ASF DAAC. The system at ASF DAAC will take the files as they arrive and put them through an ingest process. Ingest will populate the database with the information required to enable search and download of the data through Vertex, the ASF DAAC user interface. Metadata will be pushed to the NASA Common Metadata Repository, enabling data discovery through clients that utilize the repository. Visual metadata will be pushed to the NASA GIBS system for visualization through clients linked to that system. Data files will be archived in the DataDirect Networks (DDN) device that is the primary storage device for the ASF DAAC. A backup copy of the data will be placed in a second DDN device that serves as the disaster recovery solution for the ASF DAAC.

  17. Impact of Nitrification on the Formation of N-Nitrosamines and Halogenated Disinfection Byproducts within Distribution System Storage Facilities.

    PubMed

    Zeng, Teng; Mitch, William A

    2016-03-15

    Distribution system storage facilities are a critical, yet often overlooked, component of the urban water infrastructure. This study showed elevated concentrations of N-nitrosodimethylamine (NDMA), total N-nitrosamines (TONO), regulated trihalomethanes (THMs) and haloacetic acids (HAAs), 1,1-dichloropropanone (1,1-DCP), trichloroacetaldehyde (TCAL), haloacetonitriles (HANs), and haloacetamides (HAMs) in waters with ongoing nitrification as compared to non-nitrifying waters in storage facilities within five different chloraminated drinking water distribution systems. The concentrations of NDMA, TONO, HANs, and HAMs in the nitrifying waters further increased upon application of simulated distribution system chloramination. The addition of a nitrifying biofilm sample collected from a nitrifying facility to its non-nitrifying influent water led to increases in N-nitrosamine and halogenated DBP formation, suggesting the release of precursors from nitrifying biofilms. Periodic treatment of two nitrifying facilities with breakpoint chlorination (BPC) temporarily suppressed nitrification and reduced precursor levels for N-nitrosamines, HANs, and HAMs, as reflected by lower concentrations of these DBPs measured after re-establishment of a chloramine residual within the facilities than prior to the BPC treatment. However, BPC promoted the formation of halogenated DBPs while a free chlorine residual was maintained. Strategies that minimize application of free chlorine while preventing nitrification are needed to control DBP precursor release in storage facilities. PMID:26859842

  18. Rocket Engine Turbine Blade Surface Pressure Distributions Experiment and Computations

    NASA Technical Reports Server (NTRS)

    Hudson, Susan T.; Zoladz, Thomas F.; Dorney, Daniel J.; Turner, James (Technical Monitor)

    2002-01-01

    Understanding the unsteady aspects of turbine rotor flow fields is critical to successful future turbine designs. A technology program was conducted at NASA's Marshall Space Flight Center to increase the understanding of unsteady environments for rocket engine turbines. The experimental program involved instrumenting turbine rotor blades with miniature surface mounted high frequency response pressure transducers. The turbine model was then tested to measure the unsteady pressures on the rotor blades. The data obtained from the experimental program is unique in two respects. First, much more unsteady data was obtained (several minutes per set point) than has been possible in the past. Also, an extensive steady performance database existed for the turbine model. This allowed an evaluation of the effect of the on-blade instrumentation on the turbine's performance. A three-dimensional unsteady Navier-Stokes analysis was also used to blindly predict the unsteady flow field in the turbine at the design operating conditions and at +15 degrees relative incidence to the first-stage rotor. The predicted time-averaged and unsteady pressure distributions show good agreement with the experimental data. This unique data set, the lessons learned for acquiring this type of data, and the improvements made to the data analysis and prediction tools are contributing significantly to current Space Launch Initiative turbine airflow test and blade surface pressure prediction efforts.

  19. Lower bounds on parallel, distributed, and automata computations

    SciTech Connect

    Gereb-Graus, M.

    1989-01-01

    In this thesis the author presents a collection of lower bound results from several areas of computer science. Conventional wisdom states that lower bounds are much more difficult to prove than upper bounds. To get an upper bound one has to demonstrate just one scheme with the appropriate complexity. On the other hand, to prove lower bounds one has to deal with all possible schemes. The difficulty of lower bounds can be further demonstrated by the fact that whenever there is a very large gap between the lower and the upper bound for some problem, the conjectured truth is usually the known upper bound. His first two results are impossibility results for finite state automata. A hierarchy of complexity classes on tree languages (analogous to the polynomial hierarchy) accepted by alternating finite state machines is introduced. It turns out that the alternating class is equal to the well-known tree language class accepted by tree automata. By separating the deterministic and the nondeterministic classes of his hierarchy he gives a negative answer to the folklore question of whether the expressive power of tree automata is the same as that of a finite state automaton that can walk on the edges of the tree (bug automaton). He proves that no three-head one-way DFA can perform string matching, that is, no three-head one-way DFA accepts the language L = {x#y | x is a substring of y, where x, y ∈ {0,1}*}. He also proves that in a one-round fair coin flipping (or voting) scheme with n participants, there is at least one participant who has a chance to decide the outcome with probability at least 3/n − o(1/n).

  20. A Hybrid Computer Simulation to Generate the DNA Distribution of a Cell Population.

    ERIC Educational Resources Information Center

    Griebling, John L.; Adams, William S.

    1981-01-01

    Described is a method of simulating the formation of a DNA distribution, in which statistical results and experimentally measured parameters from DNA distribution and percent-labeled mitosis studies are combined. An EAI-680 and DECSystem-10 hybrid computer configuration is used. (Author/CS)

  1. Computational investigation of the discharge coefficient of bellmouth flow meters in engine test facilities

    NASA Astrophysics Data System (ADS)

    Sebourn, Charles Lynn

    2002-11-01

    In this thesis computation of the discharge coefficient of bellmouth flow meters installed in engine test facilities is presented. The discharge coefficient is a critical parameter for accurately calculating flow rate in any flow meter which operates by means of creating a pressure differential. Engine airflow is a critical performance parameter; therefore, it is necessary for engine test facilities to accurately measure airflow. In this report the author investigates the use of computational fluid dynamics with finite difference methods to calculate the flow in bellmouth flow meters, and hence the discharge coefficient at any measurement station desired. Experimental boundary layer and core flow data were used to verify the capability of the WIND code to calculate the discharge coefficient accurately. Good results were obtained for Reynolds numbers equal to or greater than about three million, which is the primary range of interest. After verifying the WIND code performance, results were calculated for a range of Reynolds numbers and Mach numbers. Also, the variation in discharge coefficient as a function of measurement location was examined. It is demonstrated that by picking the proper location for pressure measurement, sensitivity to measurement location can be minimized. Also of interest was the effect of bellmouth geometry. Calculations were performed to investigate the effect of duct-to-bellmouth diameter ratio and the eccentricity of the bellmouth contraction. In general, the effects of the beta ratio were seen to be quite small. For the eccentricity, the variation in discharge coefficient was as high as several percent for axial locations less than half a diameter downstream from the throat. The second portion of the thesis examined the effect of a turbofan engine stationed just downstream of the bellmouth flow meter. The study approximated this effect by examining a single fan stage installed in the duct. This calculation was performed by making use of a
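
    For reference, the standard definition involved (not specific to this thesis): the discharge coefficient is the ratio of actual to ideal mass flow, with the ideal value obtained from one-dimensional isentropic relations at the measured pressure ratio,

      C_d \;=\; \frac{\dot m_{\mathrm{actual}}}{\dot m_{\mathrm{ideal}}},
      \qquad
      \dot m_{\mathrm{ideal}}
      \;=\; \frac{p_t A}{\sqrt{T_t}}\,\sqrt{\frac{\gamma}{R}}\;
      M \left(1 + \frac{\gamma - 1}{2} M^2\right)^{-\frac{\gamma + 1}{2(\gamma - 1)}},

    where p_t and T_t are total pressure and temperature, A is the reference area, and the Mach number M follows from the measured static-to-total pressure ratio.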

  2. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    Requirements are carefully described in descriptions of systems to be acquired, but often there is no requirement to provide measurements and performance monitoring to ensure that requirements are met over the long term after acceptance. A set of measurements for various UNIX-based systems will be available at the 1992 Goddard Conference on Mass Storage Systems and Technologies. The authors invite others to contribute to the set of measurements. The framework for presenting the measurements of supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them is given. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well. The need to integrate measurements from all these components, from different vendors and from third-party software systems, was recognized, and there are efforts to standardize a framework to do this. The measurement activity falls into the domain of management standards. Standards work is ongoing for Open Systems Interconnection (OSI) systems management; AT&T, Digital, and Hewlett-Packard are developing management systems based on this architecture even though it is not finished. Another effort is in the UNIX International Performance Management Working Group. In addition, there are the Open Systems Foundation's Distributed Management Environment and the Object Management Group. A paper comparing the OSI systems management model and the Object Management Group model has been written. The IBM world has had a capability for measurement for various IBM systems since the 1970's, and different vendors were able to develop tools for analyzing and viewing these measurements. Since IBM was the only vendor, the user groups were able to lobby IBM for the kinds of measurements needed. In the UNIX world of multiple vendors, a common set of measurements will not be as easy to get.

  3. Analytical formulae for computing dominance from species-abundance distributions.

    PubMed

    Fung, Tak; Villain, Laura; Chisholm, Ryan A

    2015-12-01

    The evenness of an ecological community affects ecosystem structure, functioning and stability, and has implications for biodiversity conservation. In uneven communities, most species are rare while a few dominant species drive ecosystem-level properties. In even communities, dominance is lower, with possibly many species playing key ecological roles. The dominance aspect of evenness can be measured as a decreasing function of the proportion of species required to make up a fixed fraction (e.g., half) of individuals in a community. Here we sought general rules about dominance in ecological communities by linking dominance mathematically to the parameters of common theoretical species-abundance distributions (SADs). We found that if a community's SAD was log-series or lognormal, then dominance was almost inevitably high, with fewer than 40% of species required to account for 90% of all individuals. Dominance for communities with an exponential SAD was lower but still typically high, with fewer than 40% of species required to account for 70% of all individuals. In contrast, communities with a gamma SAD only exhibited high dominance when the average species abundance was below a threshold of approximately 100. Furthermore, we showed that exact values of dominance were highly scale-dependent, exhibiting non-linear trends with changing average species abundance. We also applied our formulae to SADs derived from a mechanistic community model to demonstrate how dominance can increase with environmental variance. Overall, our study provides a rigorous basis for theoretical explorations of the dynamics of dominance in ecological communities, and how this affects ecosystem functioning and stability. PMID:26409166
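
    As a concrete illustration of the dominance measure (our sketch, not the paper's analytical formulae): sample a community from a lognormal SAD and compute the proportion of species needed to account for 90% of individuals.

      import numpy as np

      rng = np.random.default_rng(0)
      # Species abundances drawn from a lognormal SAD (parameters illustrative).
      abundances = rng.lognormal(mean=2.0, sigma=1.5, size=1000)

      ranked = np.sort(abundances)[::-1]            # most abundant species first
      cum_share = np.cumsum(ranked) / ranked.sum()  # cumulative share of individuals
      k = np.searchsorted(cum_share, 0.90) + 1      # species needed to reach 90%
      print(f"{k / ranked.size:.1%} of species account for 90% of individuals")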

  4. SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters

    SciTech Connect

    Leal, L.C.

    1995-01-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.

  5. SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters

    SciTech Connect

    Leal, L.C.; Larson, N.M.

    1995-09-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.

  6. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume II, Preliminary Specifications.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    Preliminary specifications of major equipment and programing systems characteristics for a remote terminal central computing facility serving 25-75 secondary schools are presented. Estimation techniques developed in a previous feasibility study were used to delineate workload demands for four model regions with different numbers of institutions…

  7. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  8. New challenges for HEP computing: RHIC (Relativistic Heavy Ion Collider) and CEBAF (Continuous Electron Beam Accelerator Facility)

    SciTech Connect

    LeVine, M.J. (Frankfurt Univ.)

    1990-01-01

    We will look at two facilities: RHIC and CEBAF. CEBAF is in the construction phase; RHIC is about to begin construction. For each of them, we examine the kinds of physics measurements that motivated their construction, and the implications of these experiments for computing. Emphasis will be on on-line requirements, driven by the data rates produced by these experiments.

  9. Methods of computing vocabulary size for the two-parameter rank distribution

    NASA Technical Reports Server (NTRS)

    Edmundson, H. P.; Fostel, G.; Tung, I.; Underwood, W.

    1972-01-01

    A summation method is described for computing the vocabulary size for given parameter values in the 1- and 2-parameter rank distributions. Two methods of determining the asymptotes for the family of 2-parameter rank-distribution curves are also described. Tables are computed and graphs are drawn relating pairs of parameter values to the vocabulary size. The partial product formula for the Riemann zeta function is investigated as an approximation to the partial sum formula for the Riemann zeta function. An error bound is established that indicates that the partial product should not be used to approximate the partial sum in calculating the vocabulary size for the 2-parameter rank distribution.
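
    The approximation under study can be stated compactly (standard identities; the error bound itself is the paper's contribution): the partial sum over integers is compared against the Euler partial product over the primes p_i,

      S_V(s) \;=\; \sum_{n=1}^{V} n^{-s},
      \qquad
      P_k(s) \;=\; \prod_{i=1}^{k} \left(1 - p_i^{-s}\right)^{-1},

    both of which converge to \zeta(s) for s > 1; the question is how large |S_V(s) - P_k(s)| can be at finite truncation when either quantity is used to compute the vocabulary size.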

  10. Postbuckling and large-deflection nonlinear analyses on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for postbuckling and nonlinear static analyses of large complex structures on distributed-memory parallel computers. The strategy is designed for message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by performing thermomechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of High Speed Civil Transport models on three distributed-memory computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed-memory machines.

  11. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  12. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  13. Computer-aided design drafting/manufacturing (CADD/M) facility preparation

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Computer-Aided Design, Drafting and Manufacturing (CADD/M) equipment requires careful facilities preparation before installation takes place. This paper presents what a company should consider to ensure a proper installation. This includes consideration of working conditions. To get the most out of the system, the operators must be provided with a relaxed, comfortable environment, free from noise and other distractions. Such things as temperature requirements, lighting, power, security and fire protection are discussed. Also, future expansion needs are considered so that major construction will not be required for future years. Advanced planning in these areas is needed to ensure successful implementation of a CADD/M system. This will lead to considerable cost savings, and in the long run, improve the scheduling for an entire project, from initial design to final production. This careful preparation will minimize unplanned events and problem areas. These are ambitious goals but easily realized if a logical and rational plan is adopted in the same manner as that used in a typical development program.

  14. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  15. Evaluation of Near Field Atmospheric Dispersion Around Nuclear Facilities Using a Lorentzian Distribution Methodology

    SciTech Connect

    Hawkley, Gavin

    2014-01-01

    Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
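
    For reference, the Lorentzian profile underlying the fitted lateral dispersion coefficients has the standard form (symbols generic to this sketch: x_0 the lateral position of the concentration peak, \gamma the half-width at half-maximum):

      \chi(x) \;=\; \frac{\chi_{\max}\,\gamma^{2}}{(x - x_0)^{2} + \gamma^{2}},

    whose heavier-than-Gaussian tails make it a conservative envelope for concentrations across the recirculation cavity.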

  16. Distribution of trace-element emissions from the liquid-injection incinerator Combustion Research Facility

    SciTech Connect

    Lee, J.W.; Ross, R.W.; Vocque, R.H.; Lewis, J.W.; Waterland, L.R.

    1987-08-01

    A series of tests was conducted at EPA's Combustion Research Facility (CRF) to investigate the fate of volatile trace elements in liquid-injection hazardous-waste incineration. In these tests, arsenic in the form of arsenic trioxide and antimony in the form of antimony trichloride were added to a methanol base containing varying amounts of chlorobenzene and carbon tetrachloride, and fired in the liquid-injection incinerator at the CRF. Test variables included incinerator temperature and excess air level, and feed chlorine content. Test results show a relatively even distribution of both elements between scrubber-exit flue gas and scrubber blowdown. Both elements are found in the vapor phase at high temperatures, though most condenses to particulate at scrubber exit temperatures. Designated POHC destruction and removal efficiency (DRE) ranged from 99.99 to 99.999% at the afterburner exit to 99.999 to 99.9999% in the scrubber-exit flue gas. Typical levels of common products of incomplete combustion were measured.

  17. Computational method for simulation of thermal load distribution in a lithographic lens.

    PubMed

    Yu, Xinfeng; Ni, Mingyang; Rui, Dawei; Qu, Yi; Zhang, Wei

    2016-05-20

    As a crucial step for thermal aberration prediction, thermal simulation is an effective way to acquire the temperature distribution of lenses. In the case of rigorous thermal simulation with the finite volume method, the amount of absorbed energy and its distribution within lens elements should be provided to guarantee simulation accuracy. In this paper, a computational method for simulation of thermal load distribution concerning lens material absorption was proposed based on light intensity of lens elements' surfaces. An algorithm for the verification of the method was also introduced, and the results showed that the method presented in this paper is an effective solution for thermal load distribution in a lithographic lens. PMID:27411148

  18. Prevalence, distribution, and molecular characterization of Salmonella recovered from swine finishing herds and a slaughter facility in Santa Catarina, Brazil

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Swine are a reservoir for Salmonella spp., and pork and pork products are vehicles of Salmonella infections. The objective of this investigation was to determine the distribution and types of Salmonella in 12 swine finishing herds and a slaughter facility in Santa Catarina, Brazil. A total of 1,258 ...

  19. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program allows, in particular, the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements, enabling adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed which performs the grid generation of blades of basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  20. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets.

    PubMed

    Plaku, Erion; Kavraki, Lydia E

    2007-03-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318

  1. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
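
    A minimal sketch of the distribute-and-merge idea behind such computations (ours, not the authors' framework), using mpi4py: each processor owns a block of points, the blocks are broadcast in turn, and every processor keeps the k smallest distances seen so far for its own points. Run with, e.g., mpiexec -n 4 python knn_sketch.py.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      k = 5

      rng = np.random.default_rng(rank)
      local = rng.random((500, 8))  # this processor's block of points

      best = np.full((local.shape[0], k), np.inf)  # k nearest distances so far
      for root in range(size):
          block = comm.bcast(local if root == rank else None, root=root)
          d = np.linalg.norm(local[:, None, :] - block[None, :, :], axis=2)
          if root == rank:
              np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
          best = np.sort(np.hstack([best, d]), axis=1)[:, :k]

      # 'best' now holds, for each local point, its k nearest distances over
      # the full distributed data set; carrying indices alongside distances
      # would yield the knn graph edges themselves.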

  2. An Efficient Algorithm for Stiffness Identification of Truss Structures Through Distributed Local Computation

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Burgueño, R.; Elvin, N. G.

    2010-02-01

    This paper presents an efficient stiffness identification technique for truss structures based on distributed local computation. Sensor nodes on each element are assumed to collect strain data and communicate only with sensors on neighboring elements. This can significantly reduce the energy demand for data transmission and the complexity of transmission protocols, thus enabling a simplified wireless implementation. Element stiffness parameters are identified by simple low order matrix inversion at a local level, which reduces the computational energy, allows for distributed computation and makes parallel data processing possible. The proposed method also permits addressing the problem of missing data or faulty sensors. Numerical examples, with and without missing data, are presented and the element stiffness parameters are accurately identified. The computation efficiency of the proposed method is n² times higher than previously proposed global damage identification methods.
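
    One simple version of the local identification idea (our sketch, under static loading with measured member strains, not the paper's exact formulation): the axial force in element e is N_e = (EA)_e \varepsilon_e, so force equilibrium at a node j couples only the elements meeting there,

      \sum_{e \in \mathcal{N}(j)} (EA)_e\, \varepsilon_e\, \mathbf{n}_{e,j} \;=\; \mathbf{f}_j,

    where \mathcal{N}(j) is the set of members incident on node j, \mathbf{n}_{e,j} is the unit vector of element e pointing away from the node, and \mathbf{f}_j is the applied nodal load. Collecting such equations over a node's neighborhood gives a small linear system in the unknown stiffnesses (EA)_e, solvable by low-order matrix inversion from purely local data.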

  3. Architecture for event-driven real-time distributed computer systems

    SciTech Connect

    McDonald, J.E.

    1983-01-01

    The author describes a proposed preliminary system design that includes hardware and software for real-time distributed computer systems. This new system is appropriate as a digital avionics architecture or as a real-time multi-computer simulation system using a mixture of computers, mainframes to micros. The hardware contains a network that employs high-speed serial data transmission concepts in emulating a multicomputer shared memory system. The distributed multicomputer system then capitalizes on the attributes of the hardware by structuring the real-time software as the data-driven input-output system. The real-time software executes only on demand and not synchronously as in conventional real-time systems. Background information concerning multi-computer systems using serial and parallel data transmission networks is given. This information supports the design rationale of the proposed hardware system which is basically a technology blend of conventional serial and parallel transmission schemes. 2 references.

  4. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design, which utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  5. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR]

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1982-09-01

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  6. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    SciTech Connect

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-06-15

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near-neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: use of e-science methods to search configurational space; automated control of space searching; identification of key structural features conveying stability; improved correlation of computed structures with experimental data.

  7. Experiences on integration of network management and a distributed computing platform

    NASA Astrophysics Data System (ADS)

    Rahkila, Sakari; Stenberg, Susanne

    1997-09-01

    The integration of the two recognized network management protocol standards, the common management information protocol (CMIP) and the simple network management protocol (SNMP), with common object request broker architecture (CORBA) technology allows management applications to take advantage of distributed object computing as well as of the standardized network management protocols. This paper describes the distributed computing platform (DCP) prototype developed at the Nokia Research Center. The DCP prototype is a framework, including tools, compilers and gateways, built to support both Internet and open systems interconnection management through a CORBA infrastructure.

  8. Implementation of the Distributed Parallel Program for Geoid Heights Computation Using MPI and OpenMP

    NASA Astrophysics Data System (ADS)

    Lee, S.; Kim, J.; Jung, Y.; Choi, J.; Choi, C.

    2012-07-01

    Much research has been carried out on optimization algorithms for developing high-performance programs in parallel computing environments, following the evolution of computer hardware such as multi-core processors. Studies applying parallel computing in the geodesy and surveying fields, however, are still few. The present study aims to reduce the running time of geoid height computation and of the least-squares collocation carried out to improve its accuracy, using distributed parallel technology. A distributed parallel program was developed for a multi-core CPU-based PC cluster using the MPI and OpenMP libraries. Geoid heights were calculated by spherical harmonic analysis using the earth geopotential model of the National Geospatial-Intelligence Agency (2008). The geoid heights around the Korean Peninsula were calculated and tested in a diskless PC-cluster environment. The results confirm that, for computing geoid heights from an earth geopotential model, the distributed parallel program reduces the computational time effectively compared with the sequential program.
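    The mpi4py sketch below illustrates the distribution pattern described above under simplifying assumptions (the authors used MPI plus OpenMP in a compiled language): latitude rows of the output grid are split across ranks, each rank evaluates its rows, and rank 0 gathers the partial grids. geoid_height() is an invented stand-in for the spherical harmonic synthesis.

    ```python
    # Run with: mpiexec -n 4 python geoid_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    lats = np.linspace(33.0, 39.0, 240)           # grid around the Korean Peninsula
    lons = np.linspace(124.0, 130.0, 240)
    my_lats = np.array_split(lats, size)[rank]    # block decomposition by latitude

    def geoid_height(lat, lon):                   # placeholder, not EGM2008 synthesis
        return np.cos(np.radians(lat)) * np.sin(np.radians(lon))

    my_rows = np.array([[geoid_height(la, lo) for lo in lons] for la in my_lats])
    rows = comm.gather(my_rows, root=0)           # collect partial grids on rank 0
    if rank == 0:
        print(np.vstack(rows).shape)              # full (240, 240) geoid grid
    ```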

  9. Distributed UHV system for the folded tandem ion accelerator facility at BARC

    NASA Astrophysics Data System (ADS)

    Gupta, S. K.; Agarwal, A.; Singh, S. K.; Basu, A.; P, Sapna; Sarode, S. P.; Singh, V. P.; Subrahmanyam, N. B. V.; Bhatt, J. P.; Pol, S. S.; Raut, P. J.; Ware, S. V.; Singh, P.; Choudhury, R. K.; Kailas, S.

    2008-05-01

    The 6 MV Folded Tandem Ion Accelerator (FOTIA) Facility at the Nuclear Physics Division, BARC is operational, and accelerated beams of both light and heavy ions are being used extensively for basic and applied research. An average vacuum of the order of 10⁻⁸ to 10⁻⁹ Torr is maintained for maximum beam transmission and minimum beam energy spread. The FOTIA vacuum system comprises about 55 m of 100 mm diameter beam lines, including various diagnostic devices, two accelerating tubes and four narrow vacuum chambers. The cross sections of the vacuum chambers are 14 mm × 24 mm, 38 mm × 60 mm and 19 mm × 44 mm for the 180° bending magnet, the 70° & 90° bending magnets and the switching chambers, respectively. All the beam line components are UHV compatible, fabricated from stainless steel 304L grade material and fitted with metal gaskets. The total volume of ~5.8 × 10⁵ cm³ and surface area of 4.6 × 10⁴ cm² are interspersed with a total of 18 pumping stations. The accelerating tubes are subjected to a very high voltage gradient, 20.4 kV/cm, which requires a hydrocarbon-free and clean vacuum for smooth operation of the accelerator. Vacuum interlocks are provided to various devices for safe operation of the accelerator. Specially designed sputter ion pumps, rated for an elevated environmental pressure of 8 atmospheres, are used to pump the accelerating tubes and the vacuum chamber of the 180° bending magnet. Fast-acting valves isolate the main accelerator against accidental air inrush from the rest of the beam lines. All vacuum readings are displayed locally and are also available remotely in the Control Room through a computer interface. Vacuum system details are described in this paper.

  10. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it
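    mGrid itself is Matlab plus PHP and Apache; as a language-neutral sketch of its central idea (shipping user code together with packed run-time variables to a remote worker), the following Python toy serializes a function and its arguments and executes them in a separate process standing in for the remote machine.

    ```python
    # Toy "remote" execution: the function and its run-time variables are
    # pickled, sent to a worker process, executed there, and the result returned.
    import pickle
    from multiprocessing import Pipe, Process

    def worker(conn):
        func, args = pickle.loads(conn.recv_bytes())   # unpack code + variables
        conn.send_bytes(pickle.dumps(func(*args)))     # execute and return the result

    def remote_call(func, *args):
        parent, child = Pipe()
        proc = Process(target=worker, args=(child,))
        proc.start()
        parent.send_bytes(pickle.dumps((func, args)))  # distribute code + data
        result = pickle.loads(parent.recv_bytes())
        proc.join()
        return result

    if __name__ == "__main__":
        print(remote_call(pow, 2, 10))                 # 1024, computed in the worker
    ```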

  11. Comparison of Lauritzen-Spiegelhalter and successive restrictions algorithms for computing probability distributions in Bayesian networks

    NASA Astrophysics Data System (ADS)

    Smail, Linda

    2016-06-01

    The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, the Lauritzen-Spiegelhalter algorithm and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied to the Chest Clinic Bayesian network for comparison. Results indicate that the successive restrictions algorithm shows greater computational efficiency than the Lauritzen-Spiegelhalter algorithm.
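    To make the basic task concrete, here is a minimal, self-contained example of exact posterior computation by enumeration in a toy three-node chain (Smoking -> Bronchitis -> Dyspnoea); this illustrates the inference problem itself, not either compared algorithm, and the probabilities are invented.

    ```python
    # P(Bronchitis | Dyspnoea = True) by summing out the hidden variable Smoking.
    p_s = {True: 0.3, False: 0.7}                          # P(Smoking)
    p_b = {True: {True: 0.6, False: 0.4},                  # P(Bronchitis | Smoking)
           False: {True: 0.2, False: 0.8}}
    p_d = {True: {True: 0.9, False: 0.1},                  # P(Dyspnoea | Bronchitis)
           False: {True: 0.2, False: 0.8}}

    joint = {b: sum(p_s[s] * p_b[s][b] * p_d[b][True] for s in (True, False))
             for b in (True, False)}
    z = sum(joint.values())                                # normalizing constant
    print({b: p / z for b, p in joint.items()})            # posterior over Bronchitis
    ```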

  12. Feeding an astrophysical database via distributed computing resources: The case of BaSTI

    NASA Astrophysics Data System (ADS)

    Taffoni, G.; Sciacca, E.; Pietrinferni, A.; Becciani, U.; Costa, A.; Cassisi, S.; Pasian, F.; Pelusi, D.; Vuerli, C.

    2015-06-01

    Stellar evolution model databases, spanning a wide range of masses and initial chemical compositions, are nowadays a major tool to study Galactic and extragalactic stellar populations. The Bag of Stellar Tracks and Isochrones (BaSTI) database is a VO-compliant theoretical astrophysical catalogue that collects fundamental datasets involving star formation and evolution. The creation of this database implies a large number of stellar evolutionary computations that are extremely demanding in terms of computing power. Here we discuss the efforts devoted to creating and updating the database using Distributed Computing Infrastructures and a Science Gateway, and its future developments within the framework of the Italian Virtual Observatory project.

  13. Computational and experimental physics performance characterization of the neutron capture therapy research facility at Washington State Univ

    SciTech Connect

    Nigg, D. W.; Sloan, P. E.; Venhuizen, J. R.; Wemple, C. A.; Tripard, G. E.; Fox, K.; Corwin, E.

    2006-07-01

    This paper summarizes the results of the final beam characterization measurements for a dual-mode epithermal-thermal beam facility for neutron capture therapy research that was recently constructed at the Washington State Univ. TRIGA™ research reactor. The results show that the performance of the beam facility is consistent with the design computations and with international standards for the intended application. A useful epithermal neutron flux of 1.3 × 10⁹ n/cm²·s is produced at the irradiation point with the beam in epithermal mode and shaped by a 10-cm circular aperture plate. When the beam is thermalized with approximately 34 cm of heavy water, the useful thermal flux at the irradiation point is approximately 3.5 × 10⁸ n/cm²·s. The new WSU facility is one of only two such installations currently operating in the US. (authors)

  14. Real-time computer data system for the 40- by 80-foot wind tunnel facility at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Tolari, G. P.

    1975-01-01

    The background material and operational concepts of a computer-based system for an operating wind tunnel are described. An on-line real-time computer system was installed in a wind tunnel facility to gather static and dynamic data. The computer system monitored aerodynamic forces and moments of periodic and quasi-periodic functions, and displayed and plotted computed results in real time. The total system comprises several off-the-shelf, interconnected subsystems that are linked to a large data processing center. The system includes a central processor unit with 32,000 24-bit words of core memory, a number of standard peripherals, and several special processors; namely, a dynamic analysis subsystem, a 256-channel PCM-data subsystem and ground station, a 60-channel high-speed data acquisition subsystem, a communication link, and static force and pressure subsystems. The role of the test engineer as a vital link in the system is also described.

  15. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures must perform numerous two-dimensional Fourier transforms, which necessitate all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are also presented. The solutions offered could be applied to other all-to-all communication and computationally complex scientific problems.
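    The all-to-all requirement mentioned above comes from the transpose step of a distributed 2-D FFT. A minimal mpi4py sketch (generic, not the paper's DSP/FPGA implementation): each rank FFTs its block of rows, the array is transposed globally with Alltoall, and the former columns are then FFT'd locally.

    ```python
    # Run with: mpiexec -n 4 python fft2d_sketch.py  (N must be divisible by rank count)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    N = 512
    n = N // size

    a = np.random.rand(n, N) + 1j * np.random.rand(n, N)  # my n rows of an N x N array
    a = np.fft.fft(a, axis=1)                             # 1-D FFTs along local rows

    # Global transpose: block j of my rows goes to rank j (the all-to-all step)
    send = np.ascontiguousarray(a.reshape(n, size, n).transpose(1, 0, 2))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)
    at = recv.transpose(2, 0, 1).reshape(n, N)            # I now hold n columns as rows

    at = np.fft.fft(at, axis=1)                           # FFTs along former columns
    # 'at' holds this rank's slice of the 2-D FFT, stored transposed
    ```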

  16. Planning for distributed workflows: constraint-based coscheduling of computational jobs and data placement in distributed environments

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2015-05-01

    When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as remotely stored data are accessed. Latencies and limited bandwidth can become the major limiting factor for the overall computation performance and can reduce the CPU/wall-time ratio through excessive I/O wait. Building on our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storages and CPUs) is oversaturated at any moment of time and either (a) that the data are pre-placed at the site where the job runs or (b) that the jobs are scheduled where the data are already present. Such an approach eliminates the idle CPU cycles occurring when a job is waiting for I/O from a remote site and would have wide application in the community. Our planner was evaluated and simulated based on data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and an estimation of the performance improvements are discussed in this paper.
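    A toy version of such constraint-based planning, sketched with Google OR-tools CP-SAT under invented job/site data (the authors' planner is far richer, covering network links, storage and time): jobs are assigned to sites so that CPU capacity is never oversaturated, and placements away from a job's input data are penalized.

    ```python
    from ortools.sat.python import cp_model

    jobs = {"j1": "siteA", "j2": "siteB", "j3": "siteA"}  # job -> site holding its data
    sites = {"siteA": 2, "siteB": 1}                      # site -> CPU slots
    TRANSFER_COST = 5                                     # penalty for remote reads

    model = cp_model.CpModel()
    x = {(j, s): model.NewBoolVar(f"x_{j}_{s}") for j in jobs for s in sites}

    for j in jobs:                                        # each job runs at exactly one site
        model.Add(sum(x[j, s] for s in sites) == 1)
    for s, cap in sites.items():                          # no site is oversaturated
        model.Add(sum(x[j, s] for j in jobs) <= cap)

    model.Minimize(sum(TRANSFER_COST * x[j, s]            # prefer job-to-data locality
                       for j, d in jobs.items() for s in sites if s != d))

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for j in jobs:
            print(j, "->", next(s for s in sites if solver.Value(x[j, s])))
    ```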

  17. Potential applications of artificial intelligence in computer-based management systems for mixed waste incinerator facility operation

    SciTech Connect

    Rivera, A.L.; Singh, S.P.N.; Ferrada, J.J.

    1991-01-01

    The Department of Energy/Oak Ridge Field Office (DOE/OR) operates a mixed waste incinerator facility at the Oak Ridge K-25 Site, designed for the thermal treatment of incinerable liquid, sludge, and solid waste regulated under the Toxic Substances Control Act (TSCA) and the Resource Conservation and Recovery Act (RCRA). Operation of the TSCA Incinerator is highly constrained as a result of regulatory, institutional, technical, and resource availability requirements. This presents an opportunity for applying computer technology as a technical resource for mixed waste incinerator operation, to help promote and sustain a continuous performance improvement process while demonstrating compliance. This paper describes mixed waste incinerator facility performance-oriented tasks that could be assisted by Artificial Intelligence (AI) and the requirements for AI tools that would implement these algorithms in a computer-based system. 4 figs., 1 tab.

  18. School Facilities Funding and Capital-Outlay Distribution in the States

    ERIC Educational Resources Information Center

    Duncombe, William; Wang, Wen

    2009-01-01

    Traditionally, financing the construction of school facilities has been a local responsibility. In the past several decades, states have increased their support for school facilities. Using data collected from various sources, this study first classifies the design of capital aid programs in all 50 states into various categories based on the scope…

  19. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  20. Sensitivity analysis for large-deflection and postbuckling responses on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for calculating sensitivity coefficients for the nonlinear large-deflection and postbuckling responses of laminated composite structures on distributed-memory parallel computers. The strategy is applicable to any message-passing distributed computational environment. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a parallel sparse equation solver based on a nested dissection (or multilevel substructuring) node ordering scheme; and (3) a multilevel parallel procedure for evaluating hierarchical sensitivity coefficients. The hierarchical sensitivity coefficients measure the sensitivity of the composite structure response to variations in three sets of interrelated parameters; namely, laminate, layer and micromechanical (fiber, matrix, and interface/interphase) parameters. The effectiveness of the strategy is assessed by performing hierarchical sensitivity analysis for the large-deflection and postbuckling responses of stiffened composite panels with cutouts on three distributed-memory computers. The panels are subjected to combined mechanical and thermal loads. The numerical studies presented demonstrate the advantages of the reduced basis technique for hierarchical sensitivity analysis on distributed-memory machines.

  1. A Survey of Knowledge Management Skills Acquisition in an Online Team-Based Distributed Computing Course

    ERIC Educational Resources Information Center

    Thomas, Jennifer D. E.

    2007-01-01

    This paper investigates students' perceptions of their acquisition of knowledge management skills, namely thinking and team-building skills, resulting from the integration of various resources and technologies into an entirely team-based, online upper level distributed computing (DC) information systems (IS) course. Results seem to indicate that…

  2. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    ERIC Educational Resources Information Center

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  3. Computer Graphics for Use in the Classroom to Illustrate Basic Concepts and Spatial Distributions.

    ERIC Educational Resources Information Center

    Smith, Alan D.

    The computer packages PLOTALL, SYMAP, SURFACE II, QUSMO, QUSMO2, QUCRS, and QUTAB are commercially available plotting programs that provide aids for visualizing spatially distributed data and concepts. The incremental drum and line printer plots communicate often vast and difficult-to-interpret tabular data, with or without geographic coordinates.…

  4. Challenges and Opportunities of Information Technology in the 90s. Track VIII: Managing Distributed Computing Services.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Six papers from the 1990 CAUSE Conference Track VIII: Managing Distributed Computing are presented. Authors discuss the challenges and opportunities involved in providing user managers with direct access to institutional databases to support their decision making and planning activities. Papers and their authors are as follows: "Rendering an…

  5. Variable-Length Message Transmission for Distributed Loop Computer Networks (Part I).

    ERIC Educational Resources Information Center

    Reames, C. C.; Liu, M. T.

    An introduction to the problems of variable-length message transmission in distributed loop computer networks, with a summary of previous accomplishments in the area, begins this technically-oriented document. An improved technique, overcoming some of the inadequacies in presently used techniques, is proposed together with a conceptual model of…

  6. Videopaper/VICTER: A Production/Distribution System Using Television and Computers

    PubMed Central

    Van Son, L. George

    1983-01-01

    This paper describes two integrated parts of a media distribution system. The first is a method of producing non-print information in the form of videocassettes by health professionals and the second explains how those programs are indexed and retrieved using a computer program.

  7. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x₁ = f₁(U₁, ..., Uₙ), ..., xₙ = fₙ(U₁, ..., Uₙ) such that if U₁, ..., Uₙ are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x₁, ..., xₙ coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
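    The one-dimensional special case of this construction is the familiar inverse-CDF method; a short sketch (with an exponential target chosen for illustration) shows how a uniform draw is pushed through F⁻¹. The n-dimensional construction applies the same idea recursively to conditional distributions.

    ```python
    # Inverse transform sampling: if U ~ Uniform(0,1), then F^{-1}(U) has CDF F.
    import math
    import random

    def sample_exponential(rate: float) -> float:
        u = random.random()                   # U ~ Uniform(0, 1)
        return -math.log(1.0 - u) / rate      # F^{-1}(u) for F(x) = 1 - exp(-rate*x)

    samples = [sample_exponential(2.0) for _ in range(100_000)]
    print(sum(samples) / len(samples))        # close to the true mean 1/rate = 0.5
    ```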

  8. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
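    A minimal sketch of the local/global flow described in the claim, using Python multiprocessing with invented documents (not the patented implementation): each process computes term statistics for its own distinct document set in parallel, and the local counters are then contributed to a global set.

    ```python
    from collections import Counter
    from multiprocessing import Pool

    def local_term_stats(docs):
        counts = Counter()                        # local set of term statistics
        for doc in docs:
            counts.update(doc.lower().split())
        return counts

    if __name__ == "__main__":
        doc_sets = [["grid computing at scale"],  # one distinct set per process
                    ["computing with shared memory"]]
        with Pool(len(doc_sets)) as pool:
            local = pool.map(local_term_stats, doc_sets)
        global_stats = sum(local, Counter())      # contribute to the global set
        print(global_stats.most_common(3))        # e.g. basis for a major term set
    ```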

  9. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work we show the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an open source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as hierarchical fairshare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, such as serial, MPI, multi-threaded, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and on other resources in general are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post
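    For flavor, a slurm.conf fragment of the kind such a configuration involves, enabling the multifactor priority plugin with fairshare, age, job-size and QOS weights; the parameter names are real SLURM options, but the values are arbitrary placeholders rather than the settings used in the paper.

    ```
    PriorityType=priority/multifactor
    PriorityDecayHalfLife=7-0        # fairshare usage decays with a 7-day half-life
    PriorityWeightFairshare=100000   # hierarchical fairshare contribution
    PriorityWeightAge=10000          # job age scheduling
    PriorityWeightJobSize=10000      # job size scheduling
    PriorityWeightQOS=50000          # Quality of Service contribution
    AccountingStorageEnforce=limits,qos
    ```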

  10. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors become crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system, NT. In this paper, the problems associated with this application are discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
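    The core balancing step can be pictured with a greedy sketch (illustrative only, not the authors' tool): blocks are assigned, largest first, to whichever heterogeneous processor currently has the earliest estimated finish time, scaled by its speed. Block costs and speeds below are invented.

    ```python
    import heapq

    blocks = {"b1": 120, "b2": 80, "b3": 100, "b4": 60}   # block -> work units
    speeds = {"p1": 2.0, "p2": 1.0, "p3": 1.5}            # processor -> relative speed

    heap = [(0.0, p) for p in speeds]                     # (estimated finish time, proc)
    heapq.heapify(heap)
    assignment = {p: [] for p in speeds}
    for blk, work in sorted(blocks.items(), key=lambda kv: -kv[1]):
        t, p = heapq.heappop(heap)                        # least-loaded processor
        assignment[p].append(blk)
        heapq.heappush(heap, (t + work / speeds[p], p))   # update its finish time
    print(assignment)
    ```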

  11. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    NASA Astrophysics Data System (ADS)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  12. DISTRIBUTION COEFFICIENTS (KD) GENERATED FROM A CORE SAMPLE COLLECTED FROM THE SALTSTONE DISPOSAL FACILITY

    SciTech Connect

    Almond, P.; Kaplan, D.

    2011-04-25

    Core samples originating from Vault 4, Cell E of the Saltstone Disposal Facility (SDF) were collected in September of 2008 (Hansen and Crawford 2009, Smith 2008) and sent to SRNL to measure chemical and physical properties of the material, including visual uniformity, mineralogy, microstructure, density, porosity, distribution coefficients (Kd), and chemical composition. Some data from these experiments have been reported (Cozzi and Duncan 2010). In this study, leaching experiments were conducted with a single core sample under conditions that are representative of saltstone performance. In separate experiments, reducing and oxidizing environments were targeted to obtain solubility and Kd values from the measurable species identified in the solid and aqueous leachate. This study was designed to provide insight into how readily species immobilized in saltstone will leach from the saltstone under oxidizing conditions, simulating the edge of a saltstone monolith, and under reducing conditions, targeting conditions within the saltstone monolith. Core samples were taken from saltstone poured in December of 2007, giving a cure time of nine months in the cell and a total of thirty months before leaching experiments began in June 2010. The saltstone from Vault 4, Cell E is comprised of blast furnace slag, class F fly ash, portland cement, and Deliquification, Dissolution, and Adjustment (DDA) Batch 2 salt solution. The salt solution was previously analyzed from a sample of Tank 50 salt solution and characterized in the 4QCY07 Waste Acceptance Criteria (WAC) report (Zeigler and Bibler 2009). Subsequent to Tank 50 analysis, additional solution was added to the tank solution from the Effluent Treatment Project as well as from inleakage from Tank 50 pump bearings (Cozzi and Duncan 2010). Core samples were taken from three locations and at three depths at each location using a two-inch diameter concrete coring bit (1-1, 1-2, 1-3; 2-1, 2-2, 2-3; 3-1, 3-2, 3-3) (Hansen and

  13. Evidence for complex, collective dynamics and emergent, distributed computation in plants.

    PubMed

    Peak, David; West, Jevin D; Messinger, Susanna M; Mott, Keith A

    2004-01-27

    It has been suggested that some biological processes are equivalent to computation, but quantitative evidence for that view is weak. Plants must solve the problem of adjusting stomatal apertures to allow sufficient CO₂ uptake for photosynthesis while preventing excessive water loss. Under some conditions, stomatal apertures become synchronized into patches that exhibit richly complicated dynamics, similar to behaviors found in cellular automata that perform computational tasks. Using sequences of chlorophyll fluorescence images from leaves of Xanthium strumarium L. (cocklebur), we quantified spatial and temporal correlations in stomatal dynamics. Our values are statistically indistinguishable from those of the same correlations found in the dynamics of automata that compute. These results are consistent with the proposition that a plant solves its optimal gas exchange problem through an emergent, distributed computation performed by its leaves. PMID:14732685

  14. TORAC User's Manual. A computer code for analyzing tornado-induced flow and material transport in nuclear facilities

    SciTech Connect

    Andrae, R.W.; Tang, P.K.; Martin, R.A.; Gregory, W.S.

    1985-05-01

    This manual describes the TORAC computer code, which can model tornado-induced flows, pressures, and material transport within structures. Future versions of this code will have improved analysis capabilities. In addition, it is part of a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. TORAC is directed toward the analysis of facility ventilation systems, including interconnected rooms and corridors. TORAC is an improved version of the TVENT computer code. In TORAC, blowers can be turned on and off and dampers can be controlled with an arbitrary time function. The material transport capability is very basic and includes convection, depletion, entrainment, and filtration of material. The input specifications for the code and a variety of sample problems are provided. 53 refs., 62 figs.

  15. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering", that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal, by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibulo-ocular response and perception. PMID:22514288
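    A toy numerical sketch of the gain computation the abstract describes (the variance of the particles sets the filter gain); this is a deliberately simplified one-dimensional stand-in, not the authors' vestibular model, and all noise levels are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, steps = 1000, 50
    true_velocity = 1.0
    particles = rng.normal(0.0, 1.0, n_particles)      # noisy copies of the estimator

    meas_var = 0.2 ** 2
    for _ in range(steps):
        particles += rng.normal(0.0, 0.05, n_particles)        # process noise
        measurement = true_velocity + rng.normal(0.0, 0.2)     # noisy afferent signal
        gain = particles.var() / (particles.var() + meas_var)  # variance-based gain
        particles += gain * (measurement - particles)          # correct each particle

    print(particles.mean())                            # settles near the true velocity
    ```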

  16. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research

    PubMed Central

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource interoperability in a transparent manner for the end user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN platform, its current deployment and usage, and future directions. PMID:24904400

  17. Certain irregularities in the use of computer facilities at Sandia Laboratory

    SciTech Connect

    Not Available

    1980-10-22

    This report concerns irregularities in the use of computer systems at Sandia Laboratories (Sandia) in Albuquerque, New Mexico. Our interest in this subject was triggered when we learned late last year that the Federal Bureau of Investigation (FBI) was planning to undertake an investigation into possible misuse of the computer systems at Sandia. That investigation, which was carried out with the assistance of our staff, disclosed that an employee of Sandia was apparently using the Sandia computer system to assist in running a bookmaking operation for local gamblers. As a result of that investigation, we decided to conduct a separate review of Sandia's computer systems to determine the extent of computer misuse at Sandia. We found that over 200 employees of Sandia had stored games, personal items, classified material, and otherwise sensitive material on their computer files.

  18. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low cost. Imaging simulation for a satellite-mounted TDI-CCD comprises four processes: 1) atmosphere-induced degradation, 2) optical-system-induced degradation, 3) TDI-CCD electronic-system degradation and re-sampling, and 4) data integration. Processes 1) to 3) use data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, the conventional serial method takes more than 30 hours for a simulation whose result image size is 1500 × 1462. A literature survey found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1], which uses a client/server (C/S) architecture and harvests idle CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity, ultimately delivering HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide effectively unlimited computation capacity, provided that the network and the task management server are adequate. It is a new HPC solution for TDI-CCD imaging simulation and similar applications.
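    The strategy-pattern use mentioned above can be sketched in a few lines (a generic illustration with invented stage names, not the WCF implementation): each degradation process is a swappable strategy, so the framework can switch algorithms without changing the pipeline code.

    ```python
    from typing import Callable
    import numpy as np

    Strategy = Callable[[np.ndarray], np.ndarray]

    def fft_blur(img: np.ndarray) -> np.ndarray:     # frequency-domain degradation stage
        return np.real(np.fft.ifft2(np.fft.fft2(img) * 0.9))

    def identity(img: np.ndarray) -> np.ndarray:     # placeholder stage
        return img

    pipeline: list[Strategy] = [fft_blur, identity]  # atmosphere, optics, ... stages
    image = np.random.rand(64, 64)
    for stage in pipeline:                           # apply each configured strategy in turn
        image = stage(image)
    print(image.shape)
    ```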

  19. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1997-12-31

    Lilith is a general purpose tool that provides a highly scalable, easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. This speed-up in development not only enables the easy creation of tools as needed but also facilitates the ultimate development of more refined, hard-coded tools as well. Lilith is written in Java, providing platform independence and further facilitating rapid tool development through object reuse and ease of development. The authors present the user-involved objects in the Lilith Distributed Object System and the Lilith User API. They present an example of tool development, illustrating the user calls, and present results demonstrating Lilith's scalability.

  20. Comparison of experimental and computational neutron spectroscopy at a 14 MeV neutron generator facility

    NASA Astrophysics Data System (ADS)

    Waller, Edward; Cousins, Tom; Desrosiers, Marc; Jones, Trevor; Buhr, Rob; Rambousky, Ronald

    2009-05-01

    At any neutron production facility, the energy spectrum at any meaningful distance from the target will be modified. For a facility used to provide reference irradiations of electronics and other devices at various target-to-device distances, it is important to have knowledge of these spectral modifications. In addition, it is desirable to have a near real-time measurement capability. Advances in neutron metrology have made it possible to determine neutron energy spectra in real time to high levels of accuracy. This paper outlines a series of experimental measurements and theoretical calculations designed to quantify the scattering effects for a 14 MeV neutron generator facility, and makes appropriate recommendations for near real-time measurements of these fields.

  1. Computation of the temperature distribution in cooled radial inflow turbine guide vanes

    NASA Technical Reports Server (NTRS)

    Tabakoff, W.; Hosny, W.; Hamed, A.

    1977-01-01

    A two-dimensional finite-difference numerical technique is presented to determine the temperature distribution of an internally cooled blade of radial turbine guide vanes. Simple convection cooling is assumed inside the guide vane. Such an arrangement results in relatively small cooling effectiveness at the leading and trailing edges. Heat transfer augmentation in these critical areas may be achieved by using impingement jets and film cooling. A computer program was written in Fortran IV for the IBM 370/165 computer.
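    A generic finite-difference sketch of the kind of calculation described (Laplace's equation on a rectangle with fixed boundary temperatures, relaxed by Jacobi iteration); the geometry and temperatures are illustrative, not the vane's.

    ```python
    import numpy as np

    T = np.zeros((50, 50))
    T[0, :] = 1200.0   # hot-gas-side boundary (placeholder temperature)
    T[-1, :] = 400.0   # coolant-side boundary

    for _ in range(5000):  # Jacobi relaxation of the interior nodes
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    print(T[25, 25])       # steady-state temperature at an interior point
    ```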

  2. The maintenance, distribution and development of biomedical computer software: an exercise in software engineering.

    PubMed

    Boston, R C; Granek, H; Sutton, N; Weber, K; Greif, P; Zech, L

    1986-06-01

    The growing reliance of biomedical investigators on computer software in almost all facets of their work places considerable emphasis on the need for integrated management of the software. In order to efficiently develop, distribute, and maintain the software, tools are required which not only automate these tasks but also, wherever possible, 'semi-intelligently' alert their users to irregular situations. We describe an assortment of such tools routinely used in the management of the SAAM/CONSAM biokinetic software and illustrate their application. Furthermore, using these techniques we present some comparative performances of numerical integrators and of computer processors. PMID:3637127

  3. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
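    The boundary updates mentioned above are ghost-cell exchanges between subdomains. A minimal mpi4py sketch (generic, not LAURA's actual communication code) for a one-dimensional periodic decomposition:

    ```python
    # Run with: mpiexec -n 4 python halo_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    u = np.full(12, float(rank))  # 10 interior cells plus one ghost cell per side

    # Send my last interior cell right; receive left neighbor's into my left ghost
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # And symmetrically in the other direction
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    print(rank, u[0], u[-1])      # ghosts now hold the neighbors' boundary values
    ```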

  4. Lilith: A Java framework for the development of scalable tools for high performance distributed computing platforms

    SciTech Connect

    Evensky, D.A.; Gentile, A.C.; Armstrong, R.C.

    1998-03-19

    Increasingly, high performance computing constitutes the use of very large heterogeneous clusters of machines. The use and maintenance of such clusters are subject to complexities of communication between the machines in a time efficient and secure manner. Lilith is a general purpose tool that provides a highly scalable, secure, and easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. Lilith is written in Java, taking advantage of Java's unique features of loading and distributing code dynamically, its platform independence, its thread support, and its provision of graphical components to facilitate easy to use resultant tools. The authors describe the use of Lilith in a tool developed for the maintenance of the large distributed cluster at their institution and present details of the Lilith architecture and user API for the general user development of scalable tools.

  5. An incentive for coordination in a decentralised service chain with a Weibull lifetime distributed facility

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Fang; Yang, Gino K.; Yang, Chyn-Yng; Chu, Tu-Bin

    2013-10-01

    This article deals with a decentralised service chain consisting of a service provider and a facility owner. The revenue allocation and service price are, respectively, determined by the service provider and the facility owner in a non-cooperative manner. To model this decentralised operation, a Stackelberg game between the two parties is formulated. In the mathematical framework, the service system is assumed to be driven by Poisson customer arrivals and exponential service times. The most common log-linear service demand and Weibull facility lifetime are also adopted. Under these analytical conditions, the decentralised decisions in this game are investigated and then a unique optimal equilibrium is derived. Finally, a coordination mechanism is proposed to improve the efficiency of this decentralised system.

  6. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    SciTech Connect

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-09-05

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1 which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5) which is the document directly above.

  7. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  8. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    SciTech Connect

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.

  9. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  10. PUMMA: Parallel Universal Matrix Multiplication Algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, Jaeyoung; Walker, D.W.; Dongarra, J.J. |

    1993-08-01

    This paper describes the Parallel Universal Matrix Multiplication Algorithms (PUMMA) on distributed memory concurrent computers. The PUMMA package includes not only the non-transposed matrix multiplication routine C = A·B, but also the transposed multiplication routines C = Aᵀ·B, C = A·Bᵀ, and C = Aᵀ·Bᵀ, for a block scattered data distribution. The routines perform efficiently for a wide range of processor configurations and block sizes. Together, the PUMMA routines provide the same functionality as the Level 3 BLAS routine xGEMM. Details of the parallel implementation of the routines are given, and results are presented for runs on the Intel Touchstone Delta computer.
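    A short numpy illustration of the four operation variants the package provides (functionally what xGEMM computes; the parallel block-scattered distribution itself is not reproduced here):

    ```python
    import numpy as np

    A = np.random.rand(4, 4)
    B = np.random.rand(4, 4)

    C1 = A @ B            # C = A·B
    C2 = A.T @ B          # C = Aᵀ·B
    C3 = A @ B.T          # C = A·Bᵀ
    C4 = A.T @ B.T        # C = Aᵀ·Bᵀ
    print(np.allclose(C4, (B @ A).T))  # identity exploited by transposed variants
    ```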

  11. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  12. The design of a standard message passing interface for distributed memory concurrent computers

    SciTech Connect

    Walker, D.W.

    1993-10-01

    This paper presents an overview of MPI, a proposed standard message passing interface for MIMD distributed memory concurrent computers. The design of MPI has been a collective effort involving researchers in the United States and Europe from many organizations and institutions. MPI includes point-to-point and collective communication routines, as well as support for process groups, communication contexts, and application topologies. While making use of new ideas where appropriate, the MPI standard is based largely on current practice.

  13. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.

  14. Detailed computational procedure for design of cascade blades with prescribed velocity distributions in compressible potential flows

    NASA Technical Reports Server (NTRS)

    Costello, George R; Cummings, Robert L; Sinnette, John T, Jr

    1952-01-01

    A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.

  15. DISTRIBUTION OF TRACE ELEMENT EMISSIONS FROM THE LIQUID INJECTION INCINERATOR COMBUSTION RESEARCH FACILITY

    EPA Science Inventory

    A series of tests was conducted at EPA's Combustion Research Facility (CRF) to investigate the fate of volatile trace elements in liquid injection hazardous waste incineration. In these tests, arsenic in the form of arsenic trioxide and antimony in the form of antimony trichlorid...

  16. Sources and distribution of polychlorinated terphenyls at a major US aeronautics research facility

    SciTech Connect

    Hale, R.C.; Enos, C.; Gallagher, K.

    1998-11-01

    High concentrations of an unusual, complex mixture of chlorinated compounds were discovered in sediments and oysters near a federal aeronautics facility during implementation of a pollutant screening protocol. The mixture was identified as Aroclor 5432, a polychlorinated terphenyl (PCT) formulation, produced in the US until 1972. PCTs, particularly low chlorinated mixtures, have rarely been reported in the environment, despite significant manufacture and usage. PCTs, PCBs, and mercury were also detected in storm drain lines entering the outfalls near the facility. The lines received input from both storm water and research buildings. Historical hydraulic fluid leaks and in-service compressor fluids in some buildings contained PCTs and PCBs. Contaminated materials on-site were removed to minimize pollutant spread. Aroclor 5432 usage, most likely as compressor/hydraulic fluid additives, probably ended about ten years prior to its on-site detection. In terms of biological effects, intraperitoneal injection of fish with Aroclor 5432 induced cytochrome P-4501A (CYP1A) and ethoxyresorufin O-deethylase (EROD) activity to a similar degree as PCB Aroclor 1254 and to a greater extent than PCT Aroclor 5460. The presence of high concentrations of PCTs contributed to the facility being included on the National Priorities List. It subsequently became the first US federal facility to sign a Federal Facility Agreement, identifying cleanup responsibilities, prior to formal listing.

  17. A visualization tool for parallel and distributed computing using the Lilith framework

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Wyckoff, P.

    1998-05-01

    The authors present a visualization tool for the monitoring and debugging of codes run in a parallel and distributed computing environment, called Lilith Lights. This tool can be used both for debugging parallel codes and for resource management of clusters. It was developed under Lilith, a framework for creating scalable software tools for distributed computing. The use of Lilith provides scalable, non-invasive debugging, as opposed to other commonly used software debugging and visualization tools. Furthermore, by implementing the visualization tool in software rather than in hardware (as available on some MPPs), Lilith Lights is easily transferable to other machines, and well adapted for use on distributed clusters of machines. The information provided in a clustered environment can further be used for resource management of the cluster. In this paper, the authors introduce Lilith Lights, discuss its use on the Computational Plant cluster at Sandia National Laboratories, show its design and development under the Lilith framework, and present metrics for resource use and performance.

  18. Population-based learning of load balancing policies for a distributed computer system

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; Wah, Benjamin W.

    1993-01-01

    Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
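
    A hypothetical sketch of the decision step follows: each site periodically broadcasts a predicted relative speedup, and the scheduler discounts stale values with a tunable decay before picking a site. The names and the decay rule are illustrative stand-ins, not the paper's exact policy.

        # Pick the best site from periodically broadcast speedup
        # predictions, discounting stale values (illustrative only).
        import math

        def pick_site(predictions, now, decay=0.1):
            """predictions maps site -> (predicted_speedup, timestamp)."""
            def score(item):
                speedup, stamp = item[1]
                return speedup * math.exp(-decay * (now - stamp))
            return max(predictions.items(), key=score)[0]

        preds = {"siteA": (1.8, 100.0), "siteB": (2.5, 80.0), "siteC": (0.9, 101.0)}
        print(pick_site(preds, now=102.0))   # prints siteA; siteB's higher
                                             # prediction is too stale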

  19. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
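
    As a pointer to what "inexact Newton" means in practice, the toy sketch below solves each Newton system only to a relaxed "forcing" tolerance with a plain conjugate-gradient inner loop. The nonlinear problem is a stand-in; the simulator's actual residuals, decoupling preconditioners, and multigrid solvers are far more elaborate.

        # Inexact Newton: solve J s = -F only to tolerance eta.
        import numpy as np

        def cg(A, b, tol):
            """Plain conjugate gradients to a relative residual tol."""
            x = np.zeros_like(b)
            r = b.copy(); p = r.copy()
            rs, b_norm = r @ r, np.linalg.norm(b)
            while np.sqrt(rs) > tol * b_norm:
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        def F(x):                      # toy nonlinear residual
            return x**3 - np.arange(1.0, x.size + 1)

        def J(x):                      # its (diagonal) Jacobian
            return np.diag(3 * x**2)

        x = np.ones(5)
        for _ in range(20):
            f = F(x)
            if np.linalg.norm(f) < 1e-10:
                break
            eta = min(0.5, np.sqrt(np.linalg.norm(f)))   # forcing term
            x += cg(J(x), -f, eta)
        print(x)                       # approaches the cube roots of 1..5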

  20. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
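
    A minimal, hypothetical taste of the "query as part of analysis" idea follows, using SQLite in place of the production database back end; the schema and values are invented for illustration.

        # A selection that would otherwise be file-parsing code
        # becomes one SQL query (schema and data are hypothetical).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE bold (
            subject TEXT, roi TEXT, trial INTEGER,
            condition TEXT, peak_signal REAL)""")
        con.executemany("INSERT INTO bold VALUES (?,?,?,?,?)",
                        [("s01", "STG", 1, "speech", 1.42),
                         ("s01", "STG", 2, "rest",   0.31),
                         ("s02", "STG", 1, "speech", 1.10)])

        # Mean peak BOLD signal per ROI and condition, across subjects.
        for row in con.execute("""SELECT roi, condition, AVG(peak_signal)
                                  FROM bold GROUP BY roi, condition"""):
            print(row)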

  1. A Role for Synaptic Input Distribution in a Dendritic Computation of Motion Direction in the Retina.

    PubMed

    Vlasits, Anna L; Morrie, Ryan D; Tran-Van-Minh, Alexandra; Bleckert, Adam; Gainer, Christian F; DiGregorio, David A; Feller, Marla B

    2016-03-16

    The starburst amacrine cell in the mouse retina presents an opportunity to examine the precise role of sensory input location on neuronal computations. Using visual receptive field mapping, glutamate uncaging, two-photon Ca(2+) imaging, and genetic labeling of putative synapses, we identify a unique arrangement of excitatory inputs and neurotransmitter release sites on starburst amacrine cell dendrites: the excitatory input distribution is skewed away from the release sites. By comparing computational simulations with Ca(2+) transients recorded near release sites, we show that this anatomical arrangement of inputs and outputs supports a dendritic mechanism for computing motion direction. Direction-selective Ca(2+) transients persist in the presence of a GABA-A receptor antagonist, though the directional tuning is reduced. These results indicate a synergistic interaction between dendritic and circuit mechanisms for generating direction selectivity in the starburst amacrine cell. PMID:26985724

  2. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  3. Targeting Atmospheric Simulation Algorithms for Large Distributed Memory GPU Accelerated Computers

    SciTech Connect

    Norman, Matthew R

    2013-01-01

    Computing platforms are increasingly moving to accelerated architectures, and here we deal particularly with GPUs. In [15], a method was developed for atmospheric simulation to improve efficiency on large distributed memory machines by reducing communication demand and increasing the time step. Here, we improve upon this method to further target GPU accelerated platforms by reducing GPU memory accesses, removing a synchronization point, and better clustering computations. The modification ran over two times faster in some cases even though more computations were required, demonstrating the merit of improving memory handling on the GPU. Furthermore, we discover that the modification also has a near 100% hit rate in fast on-chip L1 cache and discuss the reasons for this. In concluding, we remark on further potential improvements to GPU efficiency.

  4. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
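
    For the contiguous-reuse case, a small dynamic program over contiguous groupings conveys the spirit of such heuristics, as sketched below; the stage costs are invented, and the paper's cost models also include network transfer terms.

        # Split a linear pipeline into contiguous groups (one per node)
        # to minimize the bottleneck time; frame rate = 1/bottleneck.
        from functools import lru_cache

        stage_cost = [3.0, 1.0, 4.0, 1.0, 5.0, 2.0]   # per-frame seconds
        nodes = 3

        @lru_cache(maxsize=None)
        def best(i, k):
            """Minimum bottleneck for stages i..end on k nodes."""
            n = len(stage_cost)
            if k == 1:
                return sum(stage_cost[i:])
            # first node takes stages i..j; the rest recurse
            return min(max(sum(stage_cost[i:j + 1]), best(j + 1, k - 1))
                       for j in range(i, n - k + 1))

        bottleneck = best(0, nodes)
        print(f"bottleneck {bottleneck:.1f}s -> {1 / bottleneck:.2f} frames/s")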

  5. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  6. Effects of wind-energy facilities on breeding grassland bird distributions.

    PubMed

    Shaffer, Jill A; Buhl, Deborah A

    2016-02-01

    The contribution of renewable energy to meet worldwide demand continues to grow. Wind energy is one of the fastest growing renewable sectors, but new wind facilities are often placed in prime wildlife habitat. Long-term studies that incorporate a rigorous statistical design to evaluate the effects of wind facilities on wildlife are rare. We conducted a before-after-control-impact (BACI) assessment to determine if wind facilities placed in native mixed-grass prairies displaced breeding grassland birds. During 2003-2012, we monitored changes in bird density in 3 study areas in North Dakota and South Dakota (U.S.A.). We examined whether displacement or attraction occurred 1 year after construction (immediate effect) and the average displacement or attraction 2-5 years after construction (delayed effect). We tested for these effects overall and within distance bands of 100, 200, 300, and >300 m from turbines. We observed displacement for 7 of 9 species. One species was unaffected by wind facilities and one species exhibited attraction. Displacement and attraction generally occurred within 100 m and often extended up to 300 m. In a few instances, displacement extended beyond 300 m. Displacement and attraction occurred 1 year after construction and persisted at least 5 years. Our research provides a framework for applying a BACI design to displacement studies and highlights the erroneous conclusions that can be made without the benefit of adopting such a design. More broadly, species-specific behaviors can be used to inform management decisions about turbine placement and the potential impact to individual species. Additionally, the avoidance distance metrics we estimated can facilitate future development of models evaluating impacts of wind facilities under differing land-use scenarios. PMID:26213098

  7. Improving scientists' interaction with complex computational-visualization environments based on a distributed grid infrastructure.

    PubMed

    Kalawsky, R S; O'Brien, J; Coveney, P V

    2005-08-15

    The grid has the potential to transform collaborative scientific investigations through the use of closely coupled computational and visualization resources, which may be geographically distributed, in order to harness greater power than is available at a single site. Scientific applications to benefit from the grid include visualization, computational science, environmental modelling and medical imaging. Unfortunately, the diversity, scale and location of the required resources can present a dilemma for the scientific worker because of the complexity of the underlying technology. As the scale of the scientific problem under investigation increases so does the nature of the scientist's interaction with the supporting infrastructure. The increased distribution of people and resources within a grid-based environment can make resource sharing and collaborative interaction a critical factor to their success. Unless the technological barriers affecting user accessibility are reduced, there is a danger that the only scientists to benefit will be those with reasonably high levels of computer literacy. This paper examines a number of important human factors of user interaction with the grid and expresses this in the context of the science undertaken by RealityGrid, a project funded by the UK e-Science programme. Critical user interaction issues will also be highlighted by comparing grid computational steering with supervisory control systems for local and remote access to the scientific environment. Finally, implications for future grid developers will be discussed with a particular emphasis on how to improve the scientists' access to what will be an increasingly important resource. PMID:16099754

  8. Computer Education in Schools: The Distribution Model and the Integration Model in the Federal Republic of Germany.

    ERIC Educational Resources Information Center

    Frey, Karl

    This paper discusses two conflicting opinions on the role of computer education within the West German school curriculum, i.e., the opinion of the majority of the education ministers and administrators, who wish to see computer use distributed over as many school subjects as possible, and a minority of specialists in computer education who prefer…

  9. Advancing a distributed multi-scale computing framework for large-scale high-throughput discovery in materials science

    NASA Astrophysics Data System (ADS)

    Knap, J.; Spear, C. E.; Borodin, O.; Leiter, K. W.

    2015-10-01

    We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.

  10. Description and development of the means of a model experiment for load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.

    2016-06-01

    The results of statistical model experiments on various load-balancing algorithms in distributed computing systems are presented. Software tools were developed that allow a virtual infrastructure of a distributed computing system to be created in accordance with the intended objective of the research, which is focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic horizontal scaling of computing power at peak loads, is proposed.

  11. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1998-03-01

    Lilith is a general purpose framework, written in Java, that provides a highly scalable distribution of user code across a heterogeneous computing platform. By creation of suitable user code, the Lilith framework can be used for tool development. The scalable performance provided by Lilith is crucial to the development of effective tools for large distributed systems. Furthermore, since Lilith handles the details of code distribution and communication, the user code need focus primarily on the tool functionality, thus greatly decreasing the time required for tool development. In this paper, the authors concentrate on the use of the Lilith framework to develop scalable tools. The authors review the functionality of Lilith and introduce a typical tool capitalizing on the features of the framework. They present new Objects directly involved with tool creation. They explain details of development and illustrate with an example. They present timing results demonstrating scalability.

  12. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic…

  13. Laser performance operations model (LPOM): The computational system that automates the setup and performance analysis of the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Shaw, Michael; House, Ronald

    2015-02-01

    The National Ignition Facility (NIF) is a stadium-sized facility containing a 192-beam, 1.8 MJ, 500-TW, 351-nm laser system together with a 10-m diameter target chamber with room for many target diagnostics. NIF is the world's largest laser experimental system, providing a national center to study inertial confinement fusion and the physics of matter at extreme energy densities and pressures. A computational system, the Laser Performance Operations Model (LPOM), has been developed that automates the laser setup process and accurately predicts laser energetics. LPOM uses diagnostic feedback from previous NIF shots to maintain accurate energetics models (gains and losses), as well as links to operational databases to provide 'as currently installed' optical layouts for each of the 192 NIF beamlines. LPOM deploys a fully integrated laser physics model, the Virtual Beamline (VBL), in its predictive calculations in order to meet the accuracy requirements of NIF experiments, and to provide the ability to determine the damage risk to optical elements throughout the laser chain. LPOM determines the settings of the injection laser system required to achieve the desired laser output, provides equipment protection, and determines the diagnostic setup. Additionally, LPOM provides real-time post-shot data analysis and reporting for each NIF shot. The LPOM computation system is designed as a multi-host computational cluster (with 200 compute nodes, providing the capability to run full NIF simulations fully parallel) to meet the demands of both the control systems within a shot cycle and the NIF user community outside of a shot cycle.

  14. Sources and Distribution of Polychlorinated Terphenyls at a Major US Aeronautics Research Facility.

    PubMed

    HALE; ENOS; GALLAGHER

    1998-11-01

    High concentrations of an unusual, complex mixture of chlorinated compounds were discovered in sediments and oysters near a federal aeronautics facility during implementation of a pollutant screening protocol. The mixture was identified as Aroclor 5432, a polychlorinated terphenyl (PCT) formulation, produced in the US until 1972. PCTs, particularly low chlorinated mixtures, have rarely been reported in the environment, despite significant manufacture and usage. Releases were traced to two outfalls. Creek sediments downstream of one contained concentrations as high as 200,000 µg/kg (dry weight basis); those in indigenous oysters reached 35,000 µg/kg, indicating significant bioavailability and bioaccumulation potential. Subsequent work showed that PCTs were widely disseminated in marsh grass, crabs, and fish. PCTs, PCBs, and mercury were also detected in storm drain lines entering these outfalls. The lines received input from both storm water and research buildings. Historical hydraulic fluid leaks and in-service compressor fluids in some buildings contained PCTs and PCBs. Contaminated materials on-site were removed to minimize pollutant spread. Aroclor 5432 usage, most likely as compressor/hydraulic fluid additives, probably ended about ten years prior to its on-site detection. In terms of biological effects, intraperitoneal injection of fish with Aroclor 5432 induced cytochrome P-4501A (CYP1A) and ethoxyresorufin O-deethylase (EROD) activity to a similar degree as PCB Aroclor 1254 and to a greater extent than PCT Aroclor 5460. The presence of high concentrations of PCTs contributed to the facility being included on the National Priorities List. It subsequently became the first US federal facility to sign a Federal Facility Agreement, identifying cleanup responsibilities, prior to formal listing. KEY WORDS: Polychlorinated terphenyls; Aroclor; Contaminated sediments; Hydraulic fluid; Enzyme induction; Polychlorinated biphenyls. PMID:9732522

  15. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    SciTech Connect

    Shin, J; Coss, D; McMurry, J; Farr, J; Faddegon, B

    2014-06-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm{sup 3} voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5×5 cm{sup 2} in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculations started to decrease with 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by a factor of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient's data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus the waiting time to access the shared event queue, a performance evaluation as described is recommended.
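
    The reported plateau near 100 threads is what a simple Amdahl-style model predicts once a small serial initialization fraction is included. The numbers below are illustrative, not the measured Geant4-MT timings.

        # Amdahl-style model: serial init + event loop scaling as 1/N.
        def speedup(threads, serial_frac):
            return 1.0 / (serial_frac + (1.0 - serial_frac) / threads)

        for n in (1, 50, 100, 150, 200):
            print(n, round(speedup(n, serial_frac=0.005), 1))
        # With 0.5% serial work, speedup is about 67 at 100 threads and
        # grows only slowly beyond it, echoing the reported plateau.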

  16. Determining collagen distribution in articular cartilage using contrast-enhanced micro-computed tomography

    PubMed Central

    Nieminen, H.J.; Ylitalo, T.; Karhula, S.; Suuronen, J.-P.; Kauppinen, S.; Serimaa, R.; Hæggström, E.; Pritzker, K.P.H.; Valkealahti, M.; Lehenkari, P.; Finnilä, M.; Saarakkala, S.

    2015-01-01

    Summary Objective Collagen distribution within articular cartilage (AC) is typically evaluated from histological sections, e.g., using collagen staining and light microscopy (LM). Unfortunately, all techniques based on histological sections are time-consuming, destructive, and, without extraordinary effort, limited to two dimensions. This study investigates whether phosphotungstic acid (PTA) and phosphomolybdic acid (PMA), two collagen-specific markers and X-ray absorbers, could (1) produce contrast for AC X-ray imaging or (2) be used to detect collagen distribution within AC. Method We labeled equine AC samples with PTA or PMA and imaged them with micro-computed tomography (micro-CT) at pre-defined time points 0, 18, 36, 54, 72, 90, 180, 270 h during staining. The micro-CT image intensity was compared with collagen distributions obtained with a reference technique, i.e., Fourier-transform infrared imaging (FTIRI). The labeling time and contrast agent producing the highest association (Pearson correlation, Bland–Altman analysis) between the FTIRI collagen distribution and the micro-CT-determined PTA distribution were selected for human AC. Results Both PTA and PMA labeling permitted visualization of AC features using micro-CT in non-calcified cartilage. After labeling the samples for 36 h in PTA, the spatial distribution of X-ray attenuation correlated highly with the collagen distribution determined by FTIRI in both equine (mean ± S.D. of the Pearson correlation coefficients, r = 0.96 ± 0.03, n = 12) and human AC (r = 0.82 ± 0.15, n = 4). Conclusions PTA-induced X-ray attenuation is a potential marker for non-destructive detection of AC collagen distributions in 3D. This approach opens new possibilities in the development of non-destructive 3D histopathological techniques for the characterization of OA. PMID:26003951

  17. Myofiber angle distributions in the ovine left ventricle do not conform to computationally optimized predictions

    PubMed Central

    Ennis, Daniel B.; Nguyen, Tom C.; Riboh, Jonathan C.; Wigström, Lars; Harrington, Katherine B.; Daughters, George T.; Ingels, Neil B.; Miller, D. Craig

    2008-01-01

    Recent computational models of optimized left ventricular (LV) myofiber geometry that minimize the spatial variance in sarcomere length, stress, and ATP consumption have predicted that a midwall myofiber angle of 20° and a transmural myofiber angle gradient of 140° from epicardium to endocardium is a functionally optimal LV myofiber geometry. In order to test the extent to which actual fiber angle distributions conform to this prediction, we measured local myofiber angles at an average of nine transmural depths in each of 32 sites (4 short-axis levels, 8 circumferentially distributed blocks in each level) in five normal ovine LVs. We found: 1) a mean midwall myofiber angle of −7° (SD 9), but with spatial heterogeneity (averaging 0° in the posterolateral and anterolateral wall near the papillary muscles, and −9° in all other regions); and 2) an average transmural gradient of 93° (SD 21), but with spatial heterogeneity (averaging a low of 51° in the basal posterior sector and a high of 130° in the mid-equatorial anterolateral sector). We conclude that midwall myofiber angles and transmural myofiber angle gradients in the ovine heart are regionally non-uniform and differ significantly from the predictions of present-day computationally optimized LV myofiber models. Myofiber geometry in the ovine heart may differ from that of other species, but model assumptions also underlie the discrepancy between experimental and computational results. To test the predictive capability of the current computational model, we propose using an ovine-specific LV geometry and comparing the computed myofiber orientations to those we report herein. PMID:18805536

  18. Assessing Tax Form Distribution Costs: A Proposed Method for Computing the Dollar Value of Tax Form Distribution in a Public Library.

    ERIC Educational Resources Information Center

    Casey, James B.

    1998-01-01

    Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…
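
    A toy version of such a computation is sketched below; every figure is hypothetical and stands in for locally gathered data.

        # Hypothetical tax-form distribution cost (illustrative figures).
        staff = {"librarian": (40, 28.50),    # (hours, hourly rate)
                 "clerk":     (120, 15.00)}
        direct_labor = sum(h * rate for h, rate in staff.values())

        space_sqft, cost_per_sqft, months = 150, 1.20, 4   # tax season
        overhead = space_sqft * cost_per_sqft * months

        print(f"estimated distribution cost: ${direct_labor + overhead:,.2f}")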

  19. Specification and implementation of an integrated packet communication facility for an array computer

    SciTech Connect

    Rathi, B.D.; Deshpande, S.; Sejnowski, M.; Walker, D.; Jenevein, R.; Lipovski, G.J.; Browne, J.C.

    1983-01-01

    Four distinct packet communication requirements for network architectured computer systems are: system control, dataflow data type movement, SIMD data realignment, and movement of high volume data between MIMD configurations when memory sharing is unavailable or too costly. This paper defines and describes a packet switching mechanism which meets each of these requirements. Mechanisms are also defined and described for breaking and restoring SIMD execution structures, which are required to complete the implementation of packet switching for SIMD execution. The mechanisms were defined and are described in the context of the Texas reconfigurable array computer (TRAC), but should be in large measure adaptable to other network architectured systems. 8 references.

  20. Contaminant distributions at typical U.S. uranium milling facilities and their effect on remedial action decisions

    SciTech Connect

    Hamp, S.; Jackson, T.J.; Dotson, P.W.

    1995-03-01

    Past operations at uranium processing sites throughout the US have resulted in local contamination of soils and ground water by radionuclides, toxic metals, or both. Understanding the origin of contamination and how the constituents are distributed is a basic element for planning remedial action decisions. This report describes the radiological and nonradiological species found in ground water at a typical US uranium milling facility. The report will provide the audience with an understanding of the vast spectrum of contaminants that must be controlled in planning solutions to the long-term management of these waste materials.

  1. Impact of uniform electrode current distribution on ETF. [Engineering Test Facility MHD generator

    NASA Technical Reports Server (NTRS)

    Bents, D. J.

    1982-01-01

    A basic reason for the complexity and sheer volume of electrode consolidation hardware in the MHD ETF Powertrain system is the channel electrode current distribution, which is non-uniform. If the channel design is altered to provide uniform electrode current distribution, the amount of hardware required decreases considerably, but at the possible expense of degraded channel performance. This paper explains the design impacts on the ETF electrode consolidation network associated with uniform channel electrode current distribution, and presents the alternate consolidation designs which occur. They are compared to the baseline (non-uniform current) design with respect to performance, and hardware requirements. A rational basis is presented for comparing the requirements for the different designs and the savings that result from uniform current distribution. Performance and cost impacts upon the combined cycle plant are discussed.

  2. Computer simulation of PPF distribution under blue and red LED light source for plant growth.

    PubMed

    Takita, S; Okamoto, K; Yanagi, T

    1996-12-01

    The superimposed pattern of the luminescence spectrum of a blue light emitting diode (LED) and that of a red LED corresponds well to the light absorption spectrum of chlorophyll. If these two kinds of LED are used as a light source, various plant cultivation experiments are possible. Cultivation experiments using such light sources are becoming increasingly active, and in such experiments it is very important to know the distribution of the photosynthetic photon flux (PPF), which exerts an important influence on photosynthesis. Therefore, we have developed a computer simulation system which can visualize the PPF distribution under a light source equipped with blue and red LEDs. In this system, an LED is assumed to be a point light source, and only the photons which are emitted directly from the LED are considered. This simulation system can display a perspective view of the PPF distribution, transverse and longitudinal sections of the distribution, and a contour map of the distribution. Moreover, a contour map of the ratio of the PPF emitted by the blue LEDs to that emitted by the blue and red LEDs together can be displayed. As the representation is achieved by colored lines according to the magnitudes of the PPF, a user can readily understand and evaluate the state of the PPF. PMID:11541576
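
    Under the stated assumptions (point sources, direct photons only), the PPF on a horizontal plane follows an inverse-square law with a cosine incidence factor, as in the minimal sketch below; the geometry and source strengths are invented for illustration.

        # PPF from point-source LEDs: E = I·cosθ/r², with cosθ = h/r.
        import numpy as np

        h = 0.30                               # lamp height above plane, m
        blue = [(-0.05, 0.0), (0.05, 0.0)]     # LED (x, y) positions, m
        red = [(0.0, -0.05), (0.0, 0.05)]
        I_blue, I_red = 2.0, 5.0               # per-LED intensities

        x, y = np.meshgrid(np.linspace(-0.2, 0.2, 81),
                           np.linspace(-0.2, 0.2, 81))

        def ppf(sources, intensity):
            total = np.zeros_like(x)
            for sx, sy in sources:
                r2 = (x - sx)**2 + (y - sy)**2 + h**2
                total += intensity * h / r2**1.5   # I·h/r³ = I·cosθ/r²
            return total

        ppf_b, ppf_r = ppf(blue, I_blue), ppf(red, I_red)
        blue_ratio = ppf_b / (ppf_b + ppf_r)   # blue-to-total PPF map
        print(float((ppf_b + ppf_r)[40, 40]), float(blue_ratio[40, 40]))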

  3. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of…

  4. Enabling 3D-Liver Perfusion Mapping from MR-DCE Imaging Using Distributed Computing

    PubMed Central

    Leporq, Benjamin; Camarasu-Pop, Sorina; Davila-Serrano, Eduardo E.; Pilleul, Frank; Beuf, Olivier

    2013-01-01

    An MR acquisition protocol and a processing method using distributed computing on the European Grid Infrastructure (EGI) to allow 3D liver perfusion parametric mapping after Magnetic Resonance Dynamic Contrast Enhanced (MR-DCE) imaging are presented. Seven patients (one healthy control and six with chronic liver diseases) were prospectively enrolled after liver biopsy. MR dynamic acquisition was performed continuously under free breathing for two minutes after simultaneous intravascular contrast agent (MS-325 blood pool agent) injection. The hepatic capillary system was modeled by a 3-parameter one-compartment pharmacokinetic model. The processing step was parallelized and executed on the EGI. It was modeled and implemented as a grid workflow using the Gwendia language and the MOTEUR workflow engine. Results showed good reproducibility in repeated processing on the grid. The results obtained from the grid correlated well with the ROI-based reference method run locally on a personal computer. The speed-up ranged from 71 to 242 with an average value of 126. In conclusion, distributed computing applied to perfusion mapping brings a significant speed-up to the quantification step, to be used for further clinical studies in a research context. Accuracy would be improved with the higher image SNR accessible on the latest 3T MR systems available today. PMID:27006915
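
    One common three-parameter, one-compartment formulation (written here as a plausible stand-in, since the abstract does not give the exact parameterization) drives the tissue concentration C_t with the arterial input C_a through an uptake rate K_1, a washout rate k_2, and a vascular fraction v_b:

        \frac{dC_t(t)}{dt} = K_1\, C_a(t) - k_2\, C_t(t), \qquad
        C_{\mathrm{meas}}(t) = C_t(t) + v_b\, C_a(t)

    Fitting these three parameters voxel-by-voxel over the dynamic series is what makes the mapping step embarrassingly parallel and hence well suited to grid execution.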

  5. The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

    ERIC Educational Resources Information Center

    DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

    2013-01-01

    Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

  6. CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals

    NASA Astrophysics Data System (ADS)

    Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen

    A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room”, and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES, and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and showing it in a public display environment can be mitigated by the use of CLINICAL SURFACES.

  7. Computational Approaches to Analyze and Predict Small Molecule Transport and Distribution at Cellular and Subcellular Levels

    PubMed Central

    Ah Min, Kyoung; Zhang, Xinyuan; Yu, Jing-yu; Rosania, Gus R.

    2013-01-01

    Quantitative structure-activity relationship (QSAR) studies and mechanistic mathematical modeling approaches have been independently employed for analyzing and predicting the transport and distribution of small molecule chemical agents in living organisms. Both of these computational approaches have been useful to interpret experiments measuring the transport properties of small molecule chemical agents, in vitro and in vivo. Nevertheless, mechanistic cell-based pharmacokinetic models have been especially useful to guide the design of experiments probing the molecular pathways underlying small molecule transport phenomena. Unlike QSAR models, mechanistic models can be integrated from microscopic to macroscopic levels, to analyze the spatiotemporal dynamics of small molecule chemical agents from intracellular organelles to whole organs, well beyond the experiments and training data sets upon which the models are based. Based on differential equations, mechanistic models can also be integrated with other differential equations-based systems biology models of biochemical networks or signaling pathways. Although the origin and evolution of mathematical modeling approaches aimed at predicting drug transport and distribution has occurred independently from systems biology, we propose that the incorporation of mechanistic cell-based computational models of drug transport and distribution into a systems biology modeling framework is a logical next-step for the advancement of systems pharmacology research. PMID:24218242

  8. Linking and Combining Distributed Operations Facilities using NASA's "GMSEC" Systems Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Danford; Grubb, Thomas; Esper, Jaime

    2008-01-01

    NASA's Goddard Mission Services Evolution Center (GMSEC) ground system architecture has been in development since late 2001, has successfully supported eight orbiting satellites and is being applied to many of NASA's future missions. GMSEC can be considered an event-driven service-oriented architecture built around a publish/subscribe message bus middleware. This paper briefly discusses the GMSEC technical approaches which have led to significant cost savings and risk reduction for NASA missions operated at the Goddard Space Flight Center (GSFC). The paper then focuses on the development and operational impacts of extending the architecture across multiple mission operations facilities.

  9. Distribution Coefficients (Kd Values) for Waste Resins Generated from the K and L Disassembly Basin Facilities

    SciTech Connect

    Kaplan, D.I.

    2002-12-02

    The objective of this study was to measure {sup 14}C, {sup 129}I, and {sup 99}Tc Kd values of spent resin generated from the K and L Disassembly Basin Facilities. The scope of the work was to conduct Kd measurements of resins combined in the ratio in which they are disposed, 42:58 cation:anion. Because it was not known how these spent resins would be buried, it was necessary to measure the Kd values in such a manner as to simulate both trench and vault disposal. This was accomplished by using an acid-rain simulant (a standard U.S. Environmental Protection Agency protocol) and a cement leachate simulant.

  10. Are Equivalent Cross Sections the answer to the computational woes of Distributed Hydrologic Modelling?

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Khan, U.; Tuteja, N. K.; Ajami, H.

    2014-12-01

    Distributed modelling or conceptual hydrologic modelling - this is a dilemma that hydrologists have long grappled with. While distributed hydro-ecological models are conceptually elegant and physically defensible, are they practical to apply given the significant computational burden they impose? One possible way of improving their computational efficiency is presented here. A new approach of modelling over an equivalent cross-section (ECS) is investigated. A homogenization test indicates that the representation of soil type is most critical in forming the ECS. If the soil type remains the same within the sub-basin, a single ECS is formulated. If the soil type follows a specific pattern, i.e., different soil types near the centre of the river, the middle of the hillslope, and the ridge line, three ECSs (left bank, right bank and head water) are required. ECSs are formulated for 8 first-order sub-basins and simulated using a 2-dimensional, Richards' equation based distributed hydrological model. Simulated fluxes are multiplied by the weighted area of each ECS to calculate the total fluxes from the sub-basins. To assess the accuracy of the ECS approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in fully distributed settings using the above model. The simulated fluxes are multiplied by the contributing area of each cross-section to get the total fluxes from each sub-basin, referred to as reference fluxes. At the first-order sub-basin scale, results show that the simulated fluxes using an ECS approach are very close to the reference fluxes, and computational time is reduced by a factor of ~4 to ~22 compared to the fully distributed settings. Overall, the accuracy achieved in the dominant fluxes, transpiration and soil evaporation, is higher than in the other fluxes. Over a larger catchment with 822 sub-basins, reasonable accuracy in simulated runoff against observed discharge is achieved. As a result, this…

  11. Large-Scale Merging of Histograms using Distributed In-Memory Computing

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Ganis, Gerardo

    2015-12-01

    Most high-energy physics analysis jobs are embarrassingly parallel except for the final merging of the output objects, which are typically histograms. Currently, the merging of output histograms scales badly. The running time for distributed merging depends not only on the overall number of bins but also on the number of partial histogram output files. That means that while the time to analyze data decreases linearly with the number of worker nodes, the time to merge the histograms in fact increases with the number of worker nodes. On the grid, merging jobs that take a few hours are not unusual. In order to improve the situation, we present a distributed and decentralized merging algorithm whose running time is independent of the number of worker nodes. We exploit the full bisection bandwidth of local networks and we keep all intermediate results in memory. We present benchmarks from an implementation using the parallel ROOT facility (PROOF) and RAMCloud, a distributed key-value store that keeps all data in DRAM.
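
    The sketch below conveys the pairwise, tree-structured flavor of decentralized merging: merge depth grows with log2 of the number of partial results rather than with a serial pass over all of them. It is a toy; the actual system additionally keeps intermediates in DRAM and exploits local-network bisection bandwidth.

        # Tree-structured (pairwise) merging of partial histograms.
        from collections import Counter

        def merge_tree(partials):
            layer = list(partials)
            while len(layer) > 1:
                nxt = [layer[i] + layer[i + 1]          # bin-wise add
                       for i in range(0, len(layer) - 1, 2)]
                if len(layer) % 2:
                    nxt.append(layer[-1])
                layer = nxt                             # one tree level
            return layer[0]

        workers = [Counter({"bin0": i, "bin1": 2 * i}) for i in range(8)]
        print(merge_tree(workers))   # Counter({'bin1': 56, 'bin0': 28})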

  12. Utilizing a Broadcast Quality Video Production Facility in a Distributed Education Environment

    ERIC Educational Resources Information Center

    Mainhart, Robert W.; Gerraughty, James F.

    2005-01-01

    The Distance Learning Prototype Lab (DLPL) at Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA) was established in 1999 to explore and demonstrate how the merger of a variety of telecommunications technologies (video production, computer graphics, the Internet and teleconferencing) can improve…

  13. Astropulse: A search for microsecond transient radio signals using distributed computing

    NASA Astrophysics Data System (ADS)

    von Korff, Joshua Solomon

    I performed a transient, microsecond timescale radio sky survey, called "Astropulse," using the Arecibo telescope in Puerto Rico. Astropulse searches for brief (0.4 µs to 204.8 µs), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1,420 MHz, a range that includes the hyperfine hydrogen line. Astropulse is a commensal survey, obtaining its data by sharing telescope time with other surveys, such as PALFA. I scanned the sky visible to Arecibo, between declinations of −1.33 and 38.03 degrees, with varying dwell times depending on the requirements of our partner surveys. I analyzed 1,540 hours of data in each of 7 beams of the ALFA receiver, with 2 polarizations per beam, for a total of 21,600 hours of data. The data were 1-bit complex sampled at the Nyquist limit of 0.4 µs per sample. Examination of timescales less than 12.8 µs would have been impossible if not for my use of coherent dedispersion, a technique that has frequently been used for targeted observations, but has never before been associated with a radio sky survey. I performed nonlinear coherent dedispersion, reversing the broadening effects on signals caused by their passage through the interstellar medium (ISM). Coherent dedispersion requires intensive computations, and needs far more processing power than the more usual incoherent dedispersion. This processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, which allowed me to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Each volunteer's computer requires about a week to process a single 8 MB "workunit," corresponding to 13 s of data from a single beam and polarization. In all, Astropulse analyzed over 48 TB of data. I did not aim to detect any particular astrophysical source, intending rather to perform a survey of the transient radio sky. Astrophysical events that might produce…
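
    The core of coherent dedispersion is a frequency-domain deconvolution: transform the complex baseband voltages, remove the interstellar medium's quadratic phase ("chirp"), and transform back. The sketch below uses the standard dispersion constant; the sign of the phase depends on one's Fourier convention, and the survey's real pipeline (including its 1-bit data handling and BOINC distribution) is omitted.

        # Coherent dedispersion sketch: undo the ISM chirp in the
        # Fourier domain (stand-in data; conventions may differ).
        import numpy as np

        k_dm = 4.148808e15                 # Hz^2 s per (pc cm^-3)
        f0, bw, dm = 1420e6, 2.5e6, 30.0   # center Hz, bandwidth Hz, DM

        n = 1 << 16
        volts = np.random.randn(n) + 1j * np.random.randn(n)  # fake voltages
        f = np.fft.fftfreq(n, d=1.0 / bw)  # offsets from band center, Hz

        phase = 2 * np.pi * k_dm * dm * f**2 / (f0**2 * (f0 + f))
        dedispersed = np.fft.ifft(np.fft.fft(volts) * np.exp(1j * phase))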

  14. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    SciTech Connect

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  15. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    PubMed

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-01

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions. PMID:27137018

  16. Distributed Feedback Fiber Laser: The Heart of the National Ignition Facility

    SciTech Connect

    Browning, D F; Erbert, G V

    2003-12-01

    The National Ignition Facility (NIF) is a world-class laser fusion machine that is currently under construction at Lawrence Livermore National Laboratory (LLNL). The 192 laser beams that converge on the target at the output of the NIF laser system originate from a low power fiber laser in the Master Oscillator Room (MOR). The MOR is responsible for generating the single pulse that seeds the entire NIF laser system. This single pulse is phase-modulated to add bandwidth, and then amplified and split into 48 separate beam lines, all in single-mode polarizing fiber. Before leaving the MOR, each of the 48 output beams is temporally sculpted into high contrast shapes using Arbitrary Waveform Generators. The 48 output beams of the MOR are amplified in the Preamplifier Modules (PAMs), then split and amplified again to generate 192 laser beams. The 192 laser beams are frequency converted to the third harmonic and then focused at the center of a 10-meter diameter target chamber. The MOR is an all fiber-based system utilizing highly reliable telecom-industry type hardware. The nearly 2,000,000 joules of energy at the output of the NIF laser system starts from a single fiber oscillator that fits in the palm of your hand. This paper describes the design and performance of the laser source that provides the precision light to the National Ignition Facility.

  17. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, and possible overload conditions can be predicted.

  18. Impact of Load Balancing on Unstructured Adaptive Grid Computations for Distributed-Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Simon, Horst D.

    1996-01-01

    The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
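
    The abstract does not give JOVE's algorithm, but the global-view idea can be sketched as follows: after mesh adaption, compute each processor's surplus or deficit relative to the mean load and pair donors with receivers. This is a simplified, hypothetical rebalancing step, not JOVE's actual method.

        def rebalance(loads):
            # loads[i]: work currently assigned to processor i
            mean = sum(loads) / len(loads)
            surplus = {i: l - mean for i, l in enumerate(loads) if l > mean}
            deficit = {i: mean - l for i, l in enumerate(loads) if l < mean}
            moves = []  # (donor, receiver, amount of work to migrate)
            for r, need in deficit.items():
                for d in list(surplus):
                    if need <= 1e-12:
                        break
                    amt = min(need, surplus[d])
                    moves.append((d, r, amt))
                    surplus[d] -= amt
                    need -= amt
                    if surplus[d] <= 1e-12:
                        del surplus[d]
            return moves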

  19. VLab: A Service Oriented Architecture for Distributed First Principles Materials Computations

    NASA Astrophysics Data System (ADS)

    da Silva, Cesar; da Silveira, Pedro; Wentzcovitch, Renata; Pierce, Marlon; Erlebacher, Gordon

    2008-03-01

    We present an overview of VLab, a system developed to handle execution of extensive workflows generated by first principles computations of thermoelastic properties of minerals. The multiplicity (10^2 to 10^3) of tasks derives from sampling of parameter space with variables such as pressure, temperature, strain, composition, etc. We review the algorithms of physical importance that define the system's requirements, its underlying service oriented architecture (SOA), and metadata; the system architecture emerges naturally from these. The SOA is a collection of web services providing access to distributed computing nodes, workflow control and monitoring services, data analysis tools, visualization services, databases, and authentication services. A usage view diagram is described. We also show snapshots taken from the actual operational procedure in VLab.

  20. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical analysis of the Ethernet LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the Ethernet LAN to be estimated, is also presented.

  1. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (ESTSC)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity, and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources, for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite-size sources in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated. The GUI also allows for plotting of the output.

  2. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest among the known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining a scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving a low scalability of only a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.

  3. Metaheuristic based scheduling meta-tasks in distributed heterogeneous computing systems.

    PubMed

    Izakian, Hesam; Abraham, Ajith; Snášel, Václav

    2009-01-01

    Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and it is NP-complete. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan, the time at which the last task finishes. Experimental studies show that the proposed method is more efficient than, and surpasses, previously reported PSO and GA approaches for this problem. PMID:22346701
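
    A generic sketch of PSO applied to this mapping problem, assuming an ETC (expected time to compute) matrix and a real-valued particle encoding decoded to machine indices; the paper's exact encoding and parameter settings are not reproduced.

        import random

        def pso_schedule(etc, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # etc[t][m]: expected time to compute task t on machine m
            n_tasks, n_mach = len(etc), len(etc[0])

            def decode(x):  # real-valued position -> machine index per task
                return [min(int(v), n_mach - 1) for v in x]

            def makespan(assign):
                loads = [0.0] * n_mach
                for t, m in enumerate(assign):
                    loads[m] += etc[t][m]
                return max(loads)

            X = [[random.uniform(0, n_mach) for _ in range(n_tasks)]
                 for _ in range(n_particles)]
            V = [[0.0] * n_tasks for _ in range(n_particles)]
            P, pbest = [x[:] for x in X], [makespan(decode(x)) for x in X]
            gi = min(range(n_particles), key=lambda i: pbest[i])
            G, gbest = P[gi][:], pbest[gi]
            for _ in range(iters):
                for i in range(n_particles):
                    for j in range(n_tasks):
                        r1, r2 = random.random(), random.random()
                        V[i][j] = (w * V[i][j] + c1 * r1 * (P[i][j] - X[i][j])
                                   + c2 * r2 * (G[j] - X[i][j]))
                        X[i][j] = min(max(X[i][j] + V[i][j], 0.0), n_mach - 1e-9)
                    f = makespan(decode(X[i]))
                    if f < pbest[i]:  # update personal and global bests
                        pbest[i], P[i] = f, X[i][:]
                        if f < gbest:
                            gbest, G = f, X[i][:]
            return decode(G), gbest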

  4. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
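
    A single-subdomain sketch of the idea, assuming SciPy's Delaunay wrapper: triangulate the locally owned points together with ghost points received from neighbouring subdomains, then keep only simplices that touch an owned point. The paper's contribution, automatically determining which neighbor points must be exchanged, is taken as given here.

        import numpy as np
        from scipy.spatial import Delaunay

        def local_delaunay(local_pts, ghost_pts):
            # Stack owned points first so row index < n_local marks ownership.
            pts = np.vstack([local_pts, ghost_pts])
            tri = Delaunay(pts)
            n_local = len(local_pts)
            # Keep simplices containing at least one locally owned point.
            return tri.simplices[(tri.simplices < n_local).any(axis=1)]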

  5. Measuring study time distributions: implications for designing computer-based courses.

    PubMed

    Taraban, R; Maki, W S; Rynearson, K

    1999-05-01

    In both traditional lecture-test courses and courses delivered over the World-Wide Web (WWW), both beginning and experienced college students reported studying almost exclusively just before exams. Automatic measures (computer records, WWW page hits, and electronic mail archives) confirmed the self-reported distributions of study times. Weekly deadlines produced weekly volleys of taking on-line quizzes, a pattern that was reflected in self-reports of study times. However, on-line study materials were used primarily for review for regularly scheduled in-class exams. Thus, regardless of course format, students engaged in massed practice and did not experience study aids at appropriate times. Computer technology provides new forms of learning for students, as well as opportunities for instructors to observe patterns of student study time. Management of instructional contingencies will be necessary to bring students into contact with the rich cognitive aids enabled by technology. PMID:10495808

  6. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  7. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    SciTech Connect

    Gallarno, George; Rogers, James H; Maxwell, Don E

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  8. Computer simulated building energy consumption for verification of energy conservation measures in network facilities

    NASA Technical Reports Server (NTRS)

    Plankey, B.

    1981-01-01

    A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy savings due to energy conservation measures. Predicted energy savings can then be compared with actual savings to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.

  9. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations that underlie many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network (CNN) paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the computation time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results. PMID:24808576
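
    As a rough illustration of the cell-local computation involved, the following is a forward-Euler finite-difference step for the KdV equation u_t + 6*u*u_x + u_xxx = 0 on a periodic grid, where each cell (one processor in DCMARK's limit) needs only values from nearby neighbours. The integration scheme actually used on the FPGA is not specified in the abstract, and a production solver would use a stabler integrator.

        import numpy as np

        def kdv_step(u, dt, dx):
            # First derivative: central difference over nearest neighbours.
            ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
            # Third derivative: central stencil over two neighbours each side.
            uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
                    + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
            return u - dt * (6 * u * ux + uxxx)

    Each grid cell reads at most its two neighbours on either side, which is what makes the one-processor-per-equation decomposition communicate only locally.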

  10. Running WRF on various distributed computing infrastructures through a standard-based Science Gateway

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; La Rocca, Giuseppe; Markussen Lunde, Torleif; Pehrson, Bjorn

    2014-05-01

    The Weather Research and Forecasting (WRF) modelling system is a widely used meso-scale numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF has a large worldwide community counting more than 20,000 users in 130 countries, and it has been specifically designed to be the state-of-the-art atmospheric simulation system, being portable and running efficiently on available parallel computing platforms. Although WRF can be executed in many different environments, ranging from a single core inside a stand-alone machine up to the most sophisticated HPC platforms, there are no solutions yet to match the e-Science paradigm where software, data and users are "linked" together by the network as components of distributed computing infrastructures. The topmost component of the typical e-Science model consists of Science Gateways, defined as community-developed sets of tools, applications, and data collections that normally are integrated via a portal to get access to a distributed infrastructure. One of the many available Science Gateway solutions is the Catania Science Gateway Framework (CSGF - www.catania-science-gateways.it), whose most descriptive keywords are standards adoption and interoperability. The support of standards such as SAGA and SAML allows any CSGF user to seamlessly access and use both Grid and Cloud-based resources. In this work we present the CSGF and how it has been used in the context of the eI4Africa project (www.ei4africa.eu) to implement the Africa Grid Science Gateway (http://sgw.africa-grid.org), which allows users to execute WRF simulations on various kinds of distributed computing infrastructures at the same time, including the EGI Federated Cloud.

  11. Computing an NPMLE for a mixing distribution in two closed heterogeneous population size models.

    PubMed

    Mao, Chang Xuan

    2008-12-01

    Binomial and geometric mixtures can be used to model data gathered in capture-recapture surveys of animal populations, removal surveys of harvest populations, registrations of disease populations, ecological species census, and so on. To compute a nonparametric maximum likelihood estimator for the mixing distribution of heterogeneous capture probabilities, we consider a conditional approach and use a reliable and fast integrative procedure which combines the EM algorithm to increase the likelihood and the vertex-exchange method to update the number of support points. A convergent Newtonian algorithm is used in the M-step of the EM algorithm. PMID:18821726
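
    A sketch of the EM weight update for a binomial mixture on a fixed support grid; the vertex-exchange step that adds or drops support points, the conditional likelihood actually maximized, and the Newtonian M-step are omitted, so this only illustrates the inner EM iteration.

        import numpy as np
        from scipy.stats import binom

        def em_weights(counts, n_trials, support, iters=500):
            # counts: observed capture counts; support: fixed grid of
            # candidate capture probabilities. Returns mixing weights.
            support = np.asarray(support, dtype=float)
            L = binom.pmf(np.asarray(counts)[:, None], n_trials, support)
            w = np.full(len(support), 1.0 / len(support))
            for _ in range(iters):
                post = L * w
                post /= post.sum(axis=1, keepdims=True)  # E-step responsibilities
                w = post.mean(axis=0)                     # M-step: new weights
            return w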

  12. Bias, variance and computational properties of Kijko's estimators of the upper limit of magnitude distribution, Mmax

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanisław; Urban, Paweł

    2011-08-01

    It is often assumed in probabilistic seismic hazard analysis that the magnitude distribution has an upper limit Mmax, which indicates a limitation on event size in specific seismogenic conditions. Accurate estimation of Mmax from an earthquake catalog is a matter of utmost importance. We compare the bias, dispersion and computational properties of four popular Mmax estimators, introduced by Kijko and others (e.g., Kijko and Sellevoll 1989, Kijko and Graham 1998, Kijko 2004), and we recommend the ones which can be the most fruitful in practical applications. We provide nomograms for evaluation of the bias and standard deviation of the recommended estimators for combinations of sample sizes and distribution parameters. We suggest using the bias nomograms to correct the Mmax estimates. The nomograms of standard deviation can be used to determine the minimum sample size for a required accuracy of Mmax.
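
    For orientation, a sketch of one Kijko-Sellevoll-type estimate of Mmax, assuming a doubly truncated Gutenberg-Richter magnitude distribution with known beta; the bias corrections provided by the paper's nomograms are not reproduced.

        import numpy as np

        def mmax_estimate(mags, m_min, beta, n_grid=2000):
            # Kijko-Sellevoll-type estimate:
            #   Mmax ~= m_obs + integral over [m_min, m_obs] of [F(m)]^n dm,
            # with F the Gutenberg-Richter CDF truncated to [m_min, m_obs].
            mags = np.asarray(mags, dtype=float)
            m_obs, n = mags.max(), mags.size
            m = np.linspace(m_min, m_obs, n_grid)
            F = ((1 - np.exp(-beta * (m - m_min)))
                 / (1 - np.exp(-beta * (m_obs - m_min))))
            return m_obs + np.trapz(F**n, m)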

  13. Novel two-to-three hard hadronic processes and possible studies of generalized parton distributions at hadron facilities

    NASA Astrophysics Data System (ADS)

    Kumano, S.; Strikman, M.; Sudoh, K.

    2009-10-01

    We consider a novel class of hard branching hadronic processes a+b→c+d+e, where hadrons c and d have large and nearly opposite transverse momenta and large invariant energy, which is a finite fraction of the total invariant energy. We use color transparency logic to argue that these processes can be used to study quark generalized parton distributions (GPDs) for baryons and mesons in hadron collisions, hence complementing and adding to the studies of GPDs in the exclusive deep inelastic scattering processes. We propose that a number of GPDs can be investigated in hadron facilities such as the Japan Proton Accelerator Research Complex (J-PARC) facility and the Gesellschaft für Schwerionenforschung Facility for Antiproton and Ion Research (FAIR) project. In this work, the GPDs for the nucleon and for the N→Δ transition are studied in the reaction N+N→N+π+B, where N, π, and B are a nucleon, a pion, and a baryon (nucleon or Δ), respectively, with a large momentum transfer between B (or π) and the incident nucleon. In particular, the Efremov-Radyushkin-Brodsky-Lepage region of the GPDs can be measured in such exclusive reactions. We estimate the cross section of the processes N+N→N+π+B by using current models for relevant GPDs and information about large angle πN reactions. We find that it will be feasible to measure these cross sections at the high-energy hadron facilities and to get novel information about the nucleon structure, for example, contributions of quark orbital angular momenta to the nucleon spin. The studies of N→Δ transition GPDs could also be valuable for investigating electromagnetic properties of the transition.

  15. Second order blended multidimensional upwind residual distribution scheme for steady and unsteady computations

    NASA Astrophysics Data System (ADS)

    Dobes, Jiri; Deconinck, Herman

    2008-06-01

    Multidimensional upwind residual distribution (RD) schemes have become an appealing alternative to more widespread finite volume and finite element methods (FEM) for solving compressible fluid flows. The RD approach allows the construction of nonlinear second order and non-oscillatory methods at the same time. They are routinely used for steady state calculations of complex flow problems, e.g., 3D turbulent transonic industrial-type simulations [H. Deconinck, K. Sermeus, R. Abgrall, Status of multidimensional upwind residual distribution schemes and applications in aeronautics, AIAA Paper 2000-2328, AIAA, 2000; K. Sermeus, H. Deconinck, Drag prediction validation of a multi-dimensional upwind solver, CFD-based aircraft drag prediction and reduction, VKI Lecture Series 2003-02, Von Karman Institute for Fluid Dynamics, Chaussée de Waterloo 72, B-1640 Rhode Saint Genèse, Belgium, 2003]. Despite its maturity, some problems are still present for the nonlinear schemes developed up to now: namely, poor iterative convergence for transonic problems and a decrease of accuracy in smooth parts of the flow caused by a weak L2 instability [M. Ricchiuto, Construction and analysis of compact residual discretizations for conservation laws on unstructured meshes, Ph.D. Thesis, Université Libre de Bruxelles, Von Karman Institute for Fluid Dynamics, 2005]. We have developed a new formulation of a blended scheme between the second order linear LDA scheme [R. Abgrall, M. Mezine, Residual distribution scheme for steady problems, 33rd Computational Fluid Dynamics course, VKI Lecture Series 2003-05, Von Karman Institute for Fluid Dynamics, Chaussée de Waterloo 72, B-1640 Rhode Saint Genèse, Belgium, 2003] and the first order N scheme. The blending coefficient is based on a simple shock capturing operator and it is properly scaled such that second order accuracy is preserved. The approach is extended to unsteady flow problems using a consistent formulation of the LDA scheme with the mass

  16. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content (FGC) in the imaged slice was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
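
    A hedged sketch of recovering a non-negative distribution of time constants from the FGC time course, assuming each compartment contributes a step response of the form 1 - exp(-t/tau) and using non-negative least squares over a grid of candidate TCs; the paper's exact model and noise handling are not reproduced.

        import numpy as np
        from scipy.optimize import nnls

        def tc_distribution(t, fgc, taus):
            # Design matrix: each column is the step response of one
            # candidate time constant; NNLS yields non-negative weights.
            t = np.asarray(t, dtype=float)[:, None]
            A = 1.0 - np.exp(-t / np.asarray(taus, dtype=float)[None, :])
            weights, _ = nnls(A, np.asarray(fgc, dtype=float))
            return weights

    A discrete ventilation process shows up as a few isolated nonzero weights, while an ARDS-like lung yields weight spread across many adjacent candidate TCs.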

  17. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of an SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three

  18. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  19. An Equivalent cross-section Framework for improving computational efficiency in Distributed Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish

    2014-05-01

    While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e., length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore, soil type needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e., different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess
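
    A toy sketch of the aggregation step for the simplest case (uniform soil type, hence a single equivalent cross-section), area-weighting the remaining topographic variables; the paper's systematic weighting of soil type across multiple cross-sections is more elaborate, and the field names here are assumptions.

        def equivalent_cross_section(hillslopes):
            # hillslopes: list of dicts with "area", "length", "slope",
            # "soil_depth" for each first-order sub-basin hillslope.
            total = sum(h["area"] for h in hillslopes)
            def avg(key):  # area-weighted mean of one property
                return sum(h[key] * h["area"] for h in hillslopes) / total
            return {"area": total, "length": avg("length"),
                    "slope": avg("slope"), "soil_depth": avg("soil_depth")}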

  20. The impact of CFD on development test facilities - A National Research Council projection. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Korkegi, R. H.

    1983-01-01

    The results of a National Research Council study on the effect that advances in computational fluid dynamics (CFD) will have on conventional aeronautical ground testing are reported. Current CFD capabilities include the depiction of linearized inviscid flows and a boundary layer, initial use of Euler codes with supercomputers to automatically generate a grid, research and development on the Reynolds-averaged Navier-Stokes (N-S) equations, and preliminary research on solutions to the full N-S equations. Improvement in the range of CFD usage is dependent on the development of more powerful supercomputers, exceeding even the projected abilities of the NASA Numerical Aerodynamic Simulator (1 BFLOP/sec). Full representation of the Reynolds-averaged N-S equations will require over one million grid points, a computing level predicted to be available in 15 yr. Present capabilities allow identification of data anomalies, confirmation of data accuracy, and assessment of the adequacy of model design in wind tunnel trials. Account can be taken of wall effects and the Reynolds number in any flight regime during simulation. CFD can actually be more accurate than instrumented tests, since all points in a flow can be modeled with CFD, while they cannot all be monitored with instrumentation in a wind tunnel.