Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments increasingly rely on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
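To make the service-discovery idea above concrete, the following is a minimal sketch of how a client might query a CRIC-like REST catalogue and filter the returned services. The endpoint path, query parameters and JSON field names are assumptions for illustration and are not taken from the CRIC documentation.

```python
# Hypothetical sketch of service discovery against a CRIC-like REST API.
# The base URL, endpoint path, query parameters, and JSON field names are
# assumptions for illustration, not the real CRIC interface.
import requests

CRIC_BASE = "https://cric.example.org/api"  # placeholder base URL


def discover_storage_services(vo: str, state: str = "ACTIVE") -> list[dict]:
    """Return storage services usable by the given VO (hypothetical schema)."""
    resp = requests.get(
        f"{CRIC_BASE}/core/service/query/",
        params={"json": "", "vo": vo, "type": "SE"},
        timeout=30,
    )
    resp.raise_for_status()
    services = resp.json()  # assumed: list of dicts with 'name', 'state', 'endpoint'
    return [s for s in services if s.get("state") == state]


if __name__ == "__main__":
    for svc in discover_storage_services("atlas"):
        print(svc["name"], svc["endpoint"])
```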
Methods and systems for providing reconfigurable and recoverable computing resources
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.
Optimum spaceborne computer system design by simulation
NASA Technical Reports Server (NTRS)
Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.
1973-01-01
A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the internet, delivering application resources and data to users on demand. The basis of cloud computing is the consumer-provider model: the cloud provider offers resources that consumers can access through the cloud computing model in order to build their applications on demand. A cloud data center is a large pool of shared resources for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources; on the other hand a huge number of requests has to be served effectively. Therefore, the resource allocation policy and the scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering it based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
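As an illustration of the assignment step, the sketch below pairs incoming requests with virtual machines using the Hungarian method via scipy.optimize.linear_sum_assignment. The cost model (estimated completion time from request length and VM MIPS rating) is an assumption for the sketch, not the cost function used in the paper's CloudSim experiments.

```python
# Illustrative load-balancing step using the Hungarian method
# (scipy.optimize.linear_sum_assignment). The cost model below, estimated
# request length divided by VM capacity, is an assumption for the sketch.
import numpy as np
from scipy.optimize import linear_sum_assignment

request_lengths = np.array([4000, 12000, 7000, 2500])   # instructions (arbitrary units)
vm_mips = np.array([1000, 2500, 1500, 500])             # VM processing capacity

# cost[i, j] = estimated completion time of request i on VM j
cost = request_lengths[:, None] / vm_mips[None, :]

rows, cols = linear_sum_assignment(cost)  # one request per VM, minimal total time
for req, vm in zip(rows, cols):
    print(f"request {req} -> VM {vm} (est. {cost[req, vm]:.2f} s)")
```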
Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie
2014-04-22
A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable medium are also disclosed.
Optimum spaceborne computer system design by simulation
NASA Technical Reports Server (NTRS)
Williams, T.; Weatherbee, J. E.; Taylor, D. S.
1972-01-01
A deterministic digital simulation model is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool in configuring a minimum computer system for a typical mission is demonstrated. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources, i.e., the configuration derived is a minimal one. Other considerations such as increased reliability through the use of standby spares would be taken into account in the definition of a practical system for a given mission.
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
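A minimal sketch of such a run-time monitoring loop follows, assuming psutil for process metrics. The sampling interval, CPU threshold and the placeholder adaptation step are illustrative and do not reproduce the RTM's library instrumentation or performance-counter access.

```python
# Minimal run-time monitoring loop in the spirit of the RTM described above,
# sketched with psutil. The sampling interval, CPU threshold, and the
# placeholder adaptation step are assumptions for illustration.
import os
import psutil


def monitor(pid: int, samples: int = 6, interval: float = 5.0,
            cpu_threshold: float = 85.0) -> None:
    proc = psutil.Process(pid)
    for _ in range(samples):
        if not proc.is_running():
            break
        cpu = proc.cpu_percent(interval=interval)   # blocks for `interval` seconds
        rss_mb = proc.memory_info().rss / 2**20
        print(f"pid={pid} cpu={cpu:.1f}% rss={rss_mb:.1f} MiB")
        if cpu > cpu_threshold:
            # Placeholder for an adaptation decision, e.g. requesting more cores
            # or migrating the task; the real RTM bases this on analyzed data.
            print("high CPU load observed; adapt the service configuration here")


if __name__ == "__main__":
    monitor(os.getpid(), samples=2, interval=1.0)
```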
Enabling opportunistic resources for CMS Computing Operations
Hufnagel, Dirk
2015-12-23
With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources, i.e. resources not owned by, or a priori configured for, CMS, to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
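For illustration, here is a hedged sketch of submitting a batch job to an HTCondor pool from Python by writing a submit description and calling the condor_submit command-line tool. The executable, file names and resource requests are placeholders; this does not reproduce the CMS glideinWMS or Bosco/parrot setup described above.

```python
# Hedged sketch: submit a job to an HTCondor pool by generating a plain
# submit description and calling the condor_submit CLI. The executable,
# file names, and resource requests below are placeholders.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = run_analysis.sh
    arguments  = dataset_001
    output     = job.$(Cluster).$(Process).out
    error      = job.$(Cluster).$(Process).err
    log        = job.$(Cluster).log
    request_cpus   = 1
    request_memory = 2GB
    queue 1
""")

with open("job.sub", "w") as fh:
    fh.write(submit_description)

# Requires an HTCondor installation providing the condor_submit command.
subprocess.run(["condor_submit", "job.sub"], check=True)
```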
AGIS: Integration of new technologies used in ATLAS Distributed Computing
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria
2017-10-01
The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like the central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production during LHC Run-1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to follow newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for PanDA Pilot site movers, and others. Improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resource information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
Shared-resource computing for small research labs.
Ackerman, M J
1982-04-01
A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off the shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing the rapidly changing business environments, implementation of flexible business process is crucial, but difficult especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on-demand of business requirement and configured dynamically to adapt to changing business processes. First, a configurable architecture CIRPA involving information resource pool is proposed to act as a scalable and dynamic platform to virtualise enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services in larger granularities, tenant-isolated information resources could be accessed in business process execution. Second, dynamic information resource pool is designed to fulfil configurable and on-demand data accessing in business process execution. CIRPA also isolates transaction data from business process while supporting diverse business processes composition. Finally, a case study of using our method in logistics application shows that CIRPA provides an enhanced performance both in static service encapsulation and dynamic service execution in cloud computing environment.
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
Task Assignment Heuristics for Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)
2001-01-01
CFD applications require high-performance computational platforms: 1. Complex physics and domain configuration demand strongly coupled solutions; 2. Applications are CPU and memory intensive; and 3. Huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration
2014-06-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Pilots 2.0: DIRAC pilots for all the skies
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.
2015-12-01
In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, lost appeal, while still supporting a vast amount of resources. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to hide the diversity of the underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs that was introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs, that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather they are generic, fully configurable and extendible pilots. A Pilot 2.0 can be sent as a script to be run, or it can be fetched from a remote location. A Pilot 2.0 can run on every computing resource, e.g. on CREAM Computing elements, on DIRAC Computing elements, on Virtual Machines as part of the contextualization script, or on IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Nodes (WNs) infrastructure. Pilots 2.0 can be generated server and client side. Pilots 2.0 are the “pilots to fly in all the skies”, aiming at easy use of computing power, in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune Pilots 2.0 as they need, and extend or replace each and every pilot command in an easy way. In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources, providing the necessary abstraction to deal with different kinds of computing resources.
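Since the abstract notes that Pilots 2.0 are built with the command pattern and that each pilot command can be extended or replaced, the following is a generic command-pattern sketch in that spirit. The class names and steps are illustrative, not the actual DIRAC Pilot classes.

```python
# Generic command-pattern sketch in the spirit of the extensible pilot commands
# described above. Class names and steps are illustrative, not DIRAC's.
from abc import ABC, abstractmethod


class PilotCommand(ABC):
    """One configurable step executed by the pilot on a worker node."""

    def __init__(self, params: dict):
        self.params = params

    @abstractmethod
    def execute(self) -> None: ...


class CheckEnvironment(PilotCommand):
    def execute(self) -> None:
        print("checking worker node environment:", self.params.get("platform"))


class InstallSoftware(PilotCommand):
    def execute(self) -> None:
        print("installing VO software release:", self.params.get("release"))


class MatchAndRunJob(PilotCommand):
    def execute(self) -> None:
        print("contacting the WMS and running a matched payload")


def run_pilot(command_classes, params):
    # A VO could reorder, extend, or replace entries in command_classes.
    for cls in command_classes:
        cls(params).execute()


run_pilot([CheckEnvironment, InstallSoftware, MatchAndRunJob],
          {"platform": "x86_64-el9", "release": "v1r0"})
```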
Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
2012-01-01
"Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient simulation production and the early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
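A hedged sketch of the kind of online data-quality check described above: while a job runs, a running mean of a monitored quantity is compared against a reference value and flagged when it drifts. The metric, thresholds and example numbers are assumptions, not the framework's actual quality measure.

```python
# Hedged sketch of an online validation check: poll a simple data-quality
# measure during job execution and flag likely configuration errors early.
# The metric (running mean vs. a reference) and the thresholds are assumptions.
import statistics


def check_quality(samples: list[float], ref_mean: float, tolerance: float) -> bool:
    """Return True while the running sample mean stays within tolerance of the reference."""
    if len(samples) < 10:          # wait for a minimal sample before judging
        return True
    return abs(statistics.fmean(samples) - ref_mean) <= tolerance


# Example: a monitored quantity drifting away from its reference value.
observed = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 13.5, 13.8, 14.1, 13.9, 14.2]
if not check_quality(observed, ref_mean=10.0, tolerance=1.0):
    print("data-quality drift detected; stop the job and check simulation parameters")
```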
AGIS: The ATLAS Grid Information System
NASA Astrophysics Data System (ADS)
Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander
2012-12-01
ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the whole ATLAS Grid, as needed by the ATLAS Distributed Computing applications and services.
Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking
NASA Technical Reports Server (NTRS)
Zolano, Paul Z.
2004-01-01
Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to effectively utilize grids, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid, thus they must be provided with automatic resource brokering for selecting and ranking resources meeting constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
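To illustrate selection and ranking, the sketch below filters a small, made-up resource catalogue on hard constraints and orders the survivors by a preference function. The attribute names and values are assumptions; Surfer's actual schema and OGSI interfaces are not reproduced here.

```python
# Illustrative resource selection and ranking: filter on hard constraints,
# then order the survivors by user preference. The catalogue, attribute names,
# and the constraint/preference functions are invented for this sketch.
resources = [
    {"name": "clusterA", "free_cpus": 128, "mem_gb": 256,  "arch": "x86_64",  "load": 0.40},
    {"name": "clusterB", "free_cpus": 64,  "mem_gb": 128,  "arch": "x86_64",  "load": 0.10},
    {"name": "clusterC", "free_cpus": 512, "mem_gb": 1024, "arch": "aarch64", "load": 0.75},
]


def meets_constraints(r: dict) -> bool:
    # Hard constraints the user specifies, e.g. architecture and minimum CPUs.
    return r["arch"] == "x86_64" and r["free_cpus"] >= 32


def preference(r: dict):
    # Prefer lightly loaded resources first, then those with more free CPUs.
    return (r["load"], -r["free_cpus"])


ranked = sorted((r for r in resources if meets_constraints(r)), key=preference)
print([r["name"] for r in ranked])   # -> ['clusterB', 'clusterA']
```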
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-06-01
Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of accessibility to them (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing platforms into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate the supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
NASA Astrophysics Data System (ADS)
Pierce, S. A.
2017-12-01
Decision making for groundwater systems is becoming increasingly important, as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure to enable interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines, modify computing architecture and increase the usability and access for broader audiences to advanced compute environments. The result enables dexterous configurations and opens up opportunities for IWRM modelers to expand the reach of analyses, number of case studies, and quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions paired with advanced computational resources refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. This presentation will provide an overview of advanced computing services in the cloud using integrated groundwater management case studies to highlight how cloud CI streamlines the process for setting up an interactive decision support system. Moreover, advances in artificial intelligence offer new techniques for old problems, from integrating data to adaptive sensing or from interactive dashboards to optimizing multi-attribute problems. The combination of scientific expertise, flexible cloud computing solutions, and intelligent systems opens new research horizons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-05-13
STONIX is a program for configuring UNIX and Linux computer operating systems. It applies configurations based on the guidance from publicly accessible resources such as: NSA Guides, DISA STIGs, the Center for Internet Security (CIS), USGCB and vendor security documentation. STONIX is written in the Python programming language using the QT4 and PyQT4 libraries to provide a GUI. The code is designed to be easily extensible and customizable.
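Below is a generic sketch of an extensible audit-then-fix configuration rule in the spirit of the design described above. The class layout, the example sshd_config path and the expected setting are illustrative and do not reflect STONIX's actual rule classes.

```python
# Generic sketch of an extensible "audit then fix" configuration rule.
# Class names, the file path, and the expected setting are illustrative;
# they are not taken from STONIX's code. Fixing real system files needs root.
from pathlib import Path


class Rule:
    name = "base rule"

    def audit(self) -> bool:       # True means compliant
        raise NotImplementedError

    def fix(self) -> None:
        raise NotImplementedError


class SshRootLoginDisabled(Rule):
    name = "disable SSH root login"
    config = Path("/etc/ssh/sshd_config")

    def audit(self) -> bool:
        text = self.config.read_text() if self.config.exists() else ""
        return "PermitRootLogin no" in text

    def fix(self) -> None:
        # Simplistic append-only fix, adequate for a sketch only.
        with self.config.open("a") as fh:
            fh.write("\nPermitRootLogin no\n")


def run(rules):
    for rule in rules:
        if not rule.audit():
            print(f"non-compliant: {rule.name}; applying fix")
            rule.fix()


run([SshRootLoginDisabled()])
```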
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
A data management system to enable urgent natural disaster computing
NASA Astrophysics Data System (ADS)
Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton
2014-05-01
Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resources. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in full catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be extended and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfill the computation of a natural disaster prediction within stipulated deadlines can thus be realised.
References
[1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated Management of Networked Systems: Concepts, Architectures, and Their Operational Application, Morgan Kaufmann Publishers, San Francisco, CA, USA, 1999.
[2] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, second edition, Springer, New York, NY, USA, 2011.
[3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), 2177-2186, 2013 International Conference on Computational Science.
[4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
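To illustrate the deadline-aware selection the managers perform, the sketch below estimates the completion time of each candidate (resource, protocol) pair and picks the fastest one that still meets the hard deadline. The bandwidth figures and option names are invented for the sketch.

```python
# Hedged sketch of deadline-aware selection of a transfer option: estimate the
# completion time of each (resource, protocol) pair and pick the fastest one
# that meets the hard deadline. Bandwidth figures and option names are invented.
from dataclasses import dataclass


@dataclass
class TransferOption:
    name: str
    bandwidth_mb_s: float   # sustained throughput estimate
    setup_s: float          # protocol/connection setup overhead


def pick_option(options, data_mb: float, hard_deadline_s: float) -> TransferOption:
    feasible = []
    for opt in options:
        eta = opt.setup_s + data_mb / opt.bandwidth_mb_s
        if eta <= hard_deadline_s:
            feasible.append((eta, opt))
    if not feasible:
        raise RuntimeError("no option can meet the hard deadline")
    return min(feasible, key=lambda pair: pair[0])[1]


options = [
    TransferOption("site-A via GridFTP", bandwidth_mb_s=400.0, setup_s=20.0),
    TransferOption("site-B via HTTP",    bandwidth_mb_s=120.0, setup_s=5.0),
]
best = pick_option(options, data_mb=50_000, hard_deadline_s=600)
print("selected:", best.name)
```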
Optimizing Resource Utilization in Grid Batch Systems
NASA Astrophysics Data System (ADS)
Gellrich, Andreas
2012-12-01
On Grid sites, the requirements of the computing tasks (jobs) on computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then allows jobs to be distinguished, provided users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.
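As an illustration of the mapping described above, the sketch below maps a VOMS FQAN (VO, group, role) to a local account pool and batch queue, so that, for example, 'role=production' Monte Carlo jobs and I/O-heavy analysis jobs can be scheduled onto differently tuned nodes. The FQANs, pool names and queues are examples, not a real site configuration.

```python
# Illustrative mapping from a VOMS FQAN to a local account pool and batch
# queue. The FQANs, pool names, and queue names are examples for the sketch.
FQAN_MAP = {
    "/cms/Role=production": {"account_pool": "cmsprd", "queue": "cpu_bound"},
    "/cms/Role=pilot":      {"account_pool": "cmsplt", "queue": "default"},
    "/cms":                 {"account_pool": "cmsusr", "queue": "analysis_io"},
}


def map_job(fqan: str) -> dict:
    # Longest-prefix match: more specific group/role entries win over the VO default.
    for prefix in sorted(FQAN_MAP, key=len, reverse=True):
        if fqan.startswith(prefix):
            return FQAN_MAP[prefix]
    raise ValueError(f"no mapping for FQAN {fqan!r}")


print(map_job("/cms/Role=production"))   # -> cmsprd / cpu_bound
print(map_job("/cms/somegroup"))         # -> cmsusr / analysis_io
```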
Performance Evaluation of Resource Management in Cloud Computing Environments.
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Climate@Home: Crowdsourcing Climate Change Research
NASA Astrophysics Data System (ADS)
Xu, C.; Yang, C.; Li, J.; Sun, M.; Bambacus, M.
2011-12-01
Climate change deeply impacts human wellbeing. Significant amounts of resources have been invested in building super-computers that are capable of running advanced climate models, which help scientists understand climate change mechanisms, and predict its trend. Although climate change influences all human beings, the general public is largely excluded from the research. On the other hand, scientists are eagerly seeking communication mediums for effectively enlightening the public on climate change and its consequences. The Climate@Home project is devoted to connecting the two ends with an innovative solution: crowdsourcing climate computing to the general public by harvesting volunteered computing resources from the participants. A distributed web-based computing platform will be built to support climate computing, and the general public can 'plug-in' their personal computers to participate in the research. People contribute the spare computing power of their computers to run a computer model, which is used by scientists to predict climate change. Traditionally, only super-computers could handle such a large computing processing load. By orchestrating massive numbers of personal computers to perform atomized data processing tasks, investments in new super-computers, energy consumed by super-computers, and carbon release from super-computers are reduced. Meanwhile, the platform forms a social network of climate researchers and the general public, which may be leveraged to raise climate awareness among the participants. A portal is to be built as the gateway to the climate@home project. Three types of roles and the corresponding functionalities are designed and supported. The end users include the citizen participants, climate scientists, and project managers. Citizen participants connect their computing resources to the platform by downloading and installing a computing engine on their personal computers. Computer climate models are defined at the server side. Climate scientists configure computer model parameters through the portal user interface. After model configuration, scientists then launch the computing task. Next, data is atomized and distributed to computing engines that are running on citizen participants' computers. Scientists will receive notifications on the completion of computing tasks, and examine modeling results via visualization modules of the portal. Computing tasks, computing resources, and participants are managed by project managers via portal tools. A portal prototype has been built for proof of concept. Three forums have been set up for different groups of users to share information on the science aspect, technology aspect, and educational outreach aspect. A Facebook account has been set up to distribute messages via the most popular social networking platform. New threads are synchronized from the forums to Facebook. A mapping tool displays geographic locations of the participants and the status of tasks on each client node. A group of users have been invited to test functions such as forums, blogs, and computing resource monitoring.
A cloud-based workflow to quantify transcript-expression levels in public cancer compendia
Tatlow, PJ; Piccolo, Stephen R.
2016-01-01
Public compendia of sequencing data are now measured in petabytes. Accordingly, it is infeasible for researchers to transfer these data to local computers. Recently, the National Cancer Institute began exploring opportunities to work with molecular data in cloud-computing environments. With this approach, it becomes possible for scientists to take their tools to the data and thereby avoid large data transfers. It also becomes feasible to scale computing resources to the needs of a given analysis. We quantified transcript-expression levels for 12,307 RNA-Sequencing samples from the Cancer Cell Line Encyclopedia and The Cancer Genome Atlas. We used two cloud-based configurations and examined the performance and cost profiles of each configuration. Using preemptible virtual machines, we processed the samples for as little as $0.09 (USD) per sample. As the samples were processed, we collected performance metrics, which helped us track the duration of each processing step and quantified computational resources used at different stages of sample processing. Although the computational demands of reference alignment and expression quantification have decreased considerably, there remains a critical need for researchers to optimize preprocessing steps. We have stored the software, scripts, and processed data in a publicly accessible repository (https://osf.io/gqrz9). PMID:27982081
NASA Technical Reports Server (NTRS)
Deese, J. E.; Agarwal, R. K.
1989-01-01
Computational fluid dynamics has an increasingly important role in the design and analysis of aircraft as computer hardware becomes faster and algorithms become more efficient. Progress is being made in two directions: more complex and realistic configurations are being treated, and algorithms based on higher approximations to the complete Navier-Stokes equations are being developed. The literature indicates that linear panel methods can model detailed, realistic aircraft geometries in flow regimes where this approximation is valid. As algorithms including higher approximations to the Navier-Stokes equations are developed, computer resource requirements increase rapidly. Generation of suitable grids becomes more difficult and the number of grid points required to resolve flow features of interest increases. Recently, the development of large vector computers has enabled researchers to attempt more complex geometries with Euler and Navier-Stokes algorithms. The results of calculations for transonic flow about a typical transport and fighter wing-body configuration using the thin-layer Navier-Stokes equations are described, along with flow about helicopter rotor blades using both Euler and Navier-Stokes equations.
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, K; Kagadis, G; Xing, L
As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
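To give a flavour of how serial and multicore work is expressed on an HTCondor pool like the ones described, here is a minimal sketch using the HTCondor Python bindings. It assumes the bindings are installed and a schedd is reachable; the executables, resource requests and the exact submit call (which varies between HTCondor versions) are illustrative, not taken from the Milan configuration.

    # Submit one serial and one multicore job to an HTCondor pool.
    import htcondor

    schedd = htcondor.Schedd()             # talk to the local schedd

    serial = htcondor.Submit({
        "executable": "analysis.sh",       # hypothetical user payload
        "request_cpus": "1",
        "request_memory": "2GB",
        "output": "serial.out",
        "error": "serial.err",
    })

    multicore = htcondor.Submit({
        "executable": "mpi_wrapper.sh",    # hypothetical MPI launcher script
        "request_cpus": "8",
        "request_memory": "16GB",
    })

    schedd.submit(serial)                  # newer-style submit call
    schedd.submit(multicore)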
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
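The client-driven communication model described above is what lets workers run behind firewalls and balances load naturally: a worker only ever makes outbound requests, asking for work when it is free. The sketch below illustrates that pull loop in Python; the endpoint names and payload fields are hypothetical, not JobCenter's actual protocol.

    # Worker-side pull loop: ask the server for a job, run it, report back.
    import subprocess
    import time
    import requests

    SERVER = "https://jobcenter.example.org/api"   # placeholder URL

    def worker_loop(worker_id, poll_seconds=30):
        while True:
            reply = requests.get(f"{SERVER}/next-job",
                                 params={"worker": worker_id}, timeout=10)
            job = reply.json() if reply.ok else None
            if not job:
                time.sleep(poll_seconds)           # nothing to do; poll again
                continue
            # Run the assigned command; any language or workflow step works here.
            proc = subprocess.run(job["command"], shell=True,
                                  capture_output=True, text=True)
            requests.post(f"{SERVER}/result",
                          json={"job_id": job["id"],
                                "status": proc.returncode,
                                "stdout": proc.stdout})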
User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework
Markstrom, Steven L.; Koczot, Kathryn M.
2008-01-01
The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.
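To make the idea of a user-modifiable XML control file concrete, the sketch below reads a small, entirely hypothetical control file with Python's standard library. The element and attribute names are invented for illustration and do not reflect the actual OUI schema documented in the manual.

    # Illustrative only: parsing a small XML control file of the kind described.
    import xml.etree.ElementTree as ET

    CONTROL_XML = """
    <project name="example_basin">
      <model id="runoff" executable="runoff_model">
        <parameter name="time_step" value="daily"/>
      </model>
      <data source="precip.csv" type="timeseries"/>
    </project>
    """

    root = ET.fromstring(CONTROL_XML)
    for model in root.findall("model"):
        print("model:", model.get("id"), "->", model.get("executable"))
        for p in model.findall("parameter"):
            print("  ", p.get("name"), "=", p.get("value"))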
CSNS computing environment Based on OpenStack
NASA Astrophysics Data System (ADS)
Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu
2017-10-01
Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization, and it can provide computing services according to real need. We are applying this computing model to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed. Finally, the current status of the CSNS cloud computing environment is summarized at the end of the paper.
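As a concrete, hedged illustration of the kind of self-service provisioning such a platform enables, the sketch below creates a virtual machine with the openstacksdk Python client. The cloud name, image, flavor and network are placeholders and do not describe the actual CSNS deployment.

    # Provision one compute VM through the OpenStack SDK (placeholders throughout).
    import openstack

    conn = openstack.connect(cloud="csns")          # entry from clouds.yaml

    image = conn.compute.find_image("CentOS-7-x86_64")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("physics-net")

    server = conn.compute.create_server(
        name="csns-worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)   # block until ACTIVE
    print(server.status)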
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
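One of the partitioning approaches mentioned above is clustering. The sketch below shows the simplest possible version of that idea with scikit-learn: group synthetic aircraft positions into k regions so that each resulting sector carries a comparable share of the traffic. It is an illustration of the clustering concept only, not the algorithms presented in the talk.

    # Toy clustering-based airspace partitioning on synthetic traffic points.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    positions = rng.uniform(low=[0, 0], high=[400, 400], size=(500, 2))  # nmi

    k = 6   # number of sectors / controller teams to configure
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)

    for sector in range(k):
        print(f"sector {sector}: {np.sum(labels == sector)} aircraft")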
JINR cloud infrastructure evolution
NASA Astrophysics Data System (ADS)
Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.
2016-09-01
To fulfil JINR commitments in various national and international projects related to the use of modern information technologies such as cloud and grid computing, as well as to provide a modern tool for the scientific research of JINR users, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover the increasing users' needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
Plancton: an opportunistic distributed computing project based on Docker containers
NASA Astrophysics Data System (ADS)
Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara
2017-10-01
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We show how the fast start-up and disposal of containers enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable advantage in terms of management compared to virtual machines.
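The spawn-and-release policy described above can be sketched with the Docker SDK for Python and psutil. The image name, thresholds and label are placeholders, and the real Plancton policies are richer and fully configurable; this is only meant to show the shape of the opportunistic control loop.

    # Opportunistic loop: spawn pilot containers while the host CPU is idle,
    # remove them when the host gets busy.
    import time
    import docker
    import psutil

    client = docker.from_env()
    PILOT_IMAGE = "example/pilot:latest"          # hypothetical pilot image
    SPAWN_BELOW, KILL_ABOVE, MAX_PILOTS = 30.0, 80.0, 10

    def managed_pilots():
        return [c for c in client.containers.list()
                if c.labels.get("managed-by") == "opportunistic-demo"]

    while True:
        cpu = psutil.cpu_percent(interval=5.0)    # host CPU utilisation (%)
        pilots = managed_pilots()
        if cpu < SPAWN_BELOW and len(pilots) < MAX_PILOTS:
            client.containers.run(PILOT_IMAGE, detach=True,
                                  labels={"managed-by": "opportunistic-demo"})
        elif cpu > KILL_ABOVE and pilots:
            pilots[0].remove(force=True)          # give resources back quickly
        time.sleep(10)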
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105
A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation
Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas
2011-01-01
High-fidelity simulations of pandemic outbreaks are resource-consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Compute Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089
IPv6 testing and deployment at Prague Tier 2
NASA Astrophysics Data System (ADS)
Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš
2012-12-01
The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger); it currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class-C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that provides less routing, more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or a telnet/ssh interface provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced and the configuration decisions we have made during IPv6 testing. We also present a testbed built on virtual machines that was used for all the testing and evaluation.
Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation
NASA Astrophysics Data System (ADS)
Kempe, René; Huang, Yu; Parra, Lucas C.
2014-04-01
Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
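The key computational point is linearity: once the field produced by each HD electrode for unit current has been computed by FEM, the field of any pad montage is a weighted sum of those precomputed solutions, with no new FEM solve. The NumPy sketch below illustrates that superposition step; the array sizes and the uniform current split across the electrodes covered by a pad are illustrative assumptions.

    # Superposition of precomputed unit-current fields (illustrative sizes).
    import numpy as np

    n_electrodes, n_mesh_nodes = 336, 5000
    # Precomputed once by FEM: E-field (3 components) per unit current, per electrode.
    unit_fields = np.random.rand(n_electrodes, n_mesh_nodes, 3)   # placeholder data

    def pad_field(covered_electrodes, total_current_mA):
        """Approximate a sponge pad by the HD electrodes it covers, splitting
        the injected current uniformly among them (the return pad would be
        handled the same way with opposite sign)."""
        currents = np.zeros(n_electrodes)
        currents[covered_electrodes] = total_current_mA / len(covered_electrodes)
        # (n_elec,) weighted sum over (n_elec, nodes, 3) -> (nodes, 3)
        return np.tensordot(currents, unit_fields, axes=1)

    anode = pad_field([10, 11, 12, 45, 46, 47], total_current_mA=2.0)
    print(np.linalg.norm(anode, axis=1).max())    # peak field magnitude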
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and the moving of output back to main memory. On the other hand, in applications that do not exploit GPUs, CPU usage is dominant while the GPUs idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach is different from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, when the CPUs would otherwise be idle during portions of the runtime. In our implementation results, we demonstrate that it is more efficient to use the HFP framework to offload these tasks to the GPUs than to do so in the main application. We observe increased resource utilization and overall productivity by using the HFP framework for the end-to-end workflow.
Method and tool for network vulnerability analysis
Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM
2006-03-14
A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
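The graph construction and path search described here can be illustrated with a small, entirely invented example using networkx: nodes are attack states, edge weights encode attacker effort, and low-effort (high-risk) paths fall out of a weighted shortest-path query.

    # Tiny invented attack graph; weights represent attacker effort.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("outside", "dmz-foothold", weight=3)
    G.add_edge("dmz-foothold", "internal-host", weight=5)
    G.add_edge("dmz-foothold", "vpn-creds", weight=8)
    G.add_edge("vpn-creds", "internal-host", weight=1)
    G.add_edge("internal-host", "domain-admin", weight=6)

    path = nx.shortest_path(G, "outside", "domain-admin", weight="weight")
    effort = nx.shortest_path_length(G, "outside", "domain-admin", weight="weight")
    print(path, "total effort:", effort)   # lowest-effort, i.e. highest-risk, path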
NASA Astrophysics Data System (ADS)
Wu, W. H.; Chao, D. Y.
2016-07-01
Traditional region-based liveness-enforcing supervisors focus on (1) maximal permissiveness, i.e. not losing legal states, (2) structural simplicity, i.e. a minimal number of monitors, and (3) fast computation. Recently, a number of similar approaches have achieved minimal configurations using efficient linear programming. However, the relationship between the minimal configuration and the net structure remains unclear. It is important to explore which structures determine the fewest monitors required: once the lower bound is achieved, further iteration to merge (or reduce the number of) monitors is unnecessary. The minimal strongly connected resource subnet (i.e., one in which all places are resources) that contains the set of resource places in a basic siphon is an elementary circuit. Earlier, we showed that the number of monitors required for liveness enforcement and maximal permissiveness equals the number of basic siphons for a subclass of Petri nets modelling manufacturing, called α systems. This paper extends this result to systems more powerful than the α one, so that the number of monitors in a minimal configuration remains lower bounded by the number of basic siphons. The paper develops the underlying theory and shows examples.
Optimized planning methodologies of ASON implementation
NASA Astrophysics Data System (ADS)
Zhou, Michael M.; Tamil, Lakshman S.
2005-02-01
Advanced network planning concerns effective network-resource allocation for a dynamic and open business environment. Planning methodologies for ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes methods for rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network, and the network modeling presented here computes the required network elements for optimizing resource allocation.
Predictive Software Cost Model Study. Volume II. Software Package Detailed Data.
1980-06-01
Coverage will include, but will not be limited to: (a) the ASN-91 NWDS computer; (b) the Armament System Control Unit (ASCU); (c) the AN/ASN-90 IMS; configuration control of the OFP/OTP; the planned approach; detailed analysis and study, including impacts on hardware, manuals, data, AGE, etc., alternatives with pros and cons, cost estimates, and ECPs; and a fragment on magnetic-tape resource handling (wait until the resource request for the mag tape has been fulfilled, then test the mag tape resource).
Localized overlap algorithm for unexpanded dispersion energies
NASA Astrophysics Data System (ADS)
Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof
2014-03-01
A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions, taking account of charge-overlap effects. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed either from the unexpanded (exact) formula or from inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by dozens of percent in the van der Waals minimum region. A further increase of the size of each monomer would result in only a small increase in cost, since all the additional terms would be computed from the asymptotic expansion.
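For orientation, the unexpanded dispersion energy that FDDS-based algorithms of this kind evaluate is commonly written as the generalized Casimir-Polder integral below (standard SAPT-style notation; the paper's own notation may differ), which makes clear why products of atom-centered fitting functions appear:

    E_{disp}^{(2)} = -\frac{1}{2\pi}\int_0^{\infty} d\omega
      \int d\mathbf{r}_1\, d\mathbf{r}_1'\, d\mathbf{r}_2\, d\mathbf{r}_2'\;
      \frac{\chi_A(\mathbf{r}_1,\mathbf{r}_1'; i\omega)\,\chi_B(\mathbf{r}_2,\mathbf{r}_2'; i\omega)}
           {|\mathbf{r}_1-\mathbf{r}_2|\,|\mathbf{r}_1'-\mathbf{r}_2'|}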
Computer simulation of a single pilot flying a modern high-performance helicopter
NASA Technical Reports Server (NTRS)
Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.
1988-01-01
Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.
Spaceport Processing System Development Lab
NASA Technical Reports Server (NTRS)
Dorsey, Michael
2013-01-01
The Spaceport Processing System Development Lab (SPSDL), developed and maintained by the Systems Hardware and Engineering Branch (NE-C4), is a development lab with its own private/restricted networks. A private/restricted network is a network with restricted or no communication with other networks. This allows users from different groups to work on their own projects in their own configured environment without interfering with others utilizing the resources in the lab. The different networks used in the lab have no way to talk with each other due to the way they are configured, so how a user configures software, an operating system, or equipment does not interfere with or carry over to any of the other networks in the lab. The SPSDL is available for any project at KSC that is in need of a lab environment. My job in the SPSDL was to assist in maintaining the lab to make sure it is accessible to users. This includes, but is not limited to, making sure the computers in the lab are properly running and patched with updated hardware/software. In addition, I was also to assist users who had issues utilizing the resources in the lab, which could include helping to configure a restricted network for their own environment. All of this was to ensure workers were able to use the SPSDL to work on their projects without difficulty, which would in turn benefit the work done throughout KSC. When I wasn't working in the SPSDL, I would help other coworkers with smaller tasks, which included, but were not limited to, the proper disposal, relocation, or retrieval of essential equipment. During the free time I had, I also used NASA's resources to increase my knowledge and skills in a variety of subjects related to my major as a computer engineer, particularly UNIX, networking, and embedded systems.
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including difficulties in organizing effective interaction between the agents located at the nodes of the system, in configuring each node of the system to perform a certain task, in effectively distributing the available information and computational resources of the system, and in controlling the multithreading that implements the logic of solving research problems. The article describes a method of computing load balancing in distributed automatic systems, focused on multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic scaling of computing power under peak load, is offered. The results of model experiments with the developed load scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant increase in the number of connected nodes and scaling of the architecture of the distributed computing system.
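A minimal sketch of the dispatching idea, routing each incoming request to the currently least-loaded node and allowing nodes to be attached at runtime, is given below. It is a toy illustration, not the scheduling algorithm evaluated in the paper.

    # Toy least-loaded dispatcher with dynamic node attachment.
    class Scheduler:
        def __init__(self, nodes):
            self.load = {n: 0 for n in nodes}    # node -> outstanding requests

        def add_node(self, name):                # dynamic scaling under peak load
            self.load[name] = 0

        def dispatch(self):
            node = min(self.load, key=self.load.get)   # pick the least-loaded node
            self.load[node] += 1
            return node

        def complete(self, node):                # called when a request finishes
            self.load[node] -= 1

    sched = Scheduler(["node-1", "node-2"])
    print([sched.dispatch() for _ in range(4)])  # alternates between the two nodes
    sched.add_node("node-3")                     # attach extra capacity at peak load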
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a class of energetic solar eruptions, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become more accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
ERDC MSRC Resource. High Performance Computing for the Warfighter. Spring 2006
2006-01-01
named Ruby, and the HP/Compaq SC45, named Emerald, continue to add their unique sparkle to the ERDC MSRC computer infrastructure. ERDC invited the...configuration on B-52H purchased additional memory for the login nodes so that this part of the solution process could be done as a preprocessing step. On...application and system services. Of the service nodes, 10 are login nodes and 23 are input/output (I/O) server nodes for the Lustre file system (i.e., the
Perspectives on the Future of CFD
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2000-01-01
This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time, CFD has progressed along with computing power, and numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are now routinely computed, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As computing resources have shifted to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability, and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of the heuristic model, and the development of CFD and information technology (IT) tools.
Machine learning prediction for classification of outcomes in local minimisation
NASA Astrophysics Data System (ADS)
Das, Ritankar; Wales, David J.
2017-01-01
Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consists of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural network and quadratic functions of the inputs.
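The two predictor types mentioned, a neural network and a quadratic function of the inputs, can be sketched as classifiers with scikit-learn. The data below are synthetic stand-ins (the real inputs are structural descriptors of configurations along the optimisation sequences), so the printed accuracies are meaningless except as a usage illustration.

    # Neural-network and quadratic-feature classifiers on synthetic data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 3))      # e.g. three interparticle distances (synthetic)
    y = rng.integers(0, 4, size=2000)   # which of four minima is reached (synthetic)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X_tr, y_tr)
    quad = make_pipeline(PolynomialFeatures(degree=2),
                         LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

    print("neural network accuracy:", nn.score(X_te, y_te))
    print("quadratic-feature accuracy:", quad.score(X_te, y_te))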
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W
2012-06-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
CoreTSAR: Core Task-Size Adapting Runtime
Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...
2014-10-27
Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.
2007-01-01
Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank the primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. Subject matter experts (SMEs) were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes sequential learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
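The way main effects and interaction effects are read off a two-level factorial design can be sketched in a few lines. The factors and responses below are placeholders, not the LAS study's parameters or drag data; the point is only how each effect is estimated as the difference between average responses at the high and low settings.

    # Two-level full factorial design with main and two-factor interaction effects.
    import itertools
    import numpy as np

    factors = ["nose_length", "tower_diameter", "fairing_angle"]   # hypothetical
    design = np.array(list(itertools.product([-1, +1], repeat=len(factors))))

    rng = np.random.default_rng(2)
    drag = rng.normal(size=len(design))        # placeholder integrated-drag responses

    def effect(column):
        """Average response at the +1 setting minus average at the -1 setting."""
        return drag[column == +1].mean() - drag[column == -1].mean()

    for i, name in enumerate(factors):
        print(f"main effect {name}: {effect(design[:, i]):+.3f}")

    for (i, a), (j, b) in itertools.combinations(list(enumerate(factors)), 2):
        interaction = design[:, i] * design[:, j]   # +1 where the two factors agree
        print(f"interaction {a} x {b}: {effect(interaction):+.3f}")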
How Much Higher Can HTCondor Fly?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fajardo, E. M.; Dost, J. M.; Holzman, B.
The HTCondor high-throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneously running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high-accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
Correlation energy extrapolation by many-body expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus
Accounting for electron correlation is required for high-accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems.
Munera, Eduardo; Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Noguera, Juan Fco Blanes
2015-07-24
The inclusion of embedded sensors into a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples where processing and communications are constrained by the client's requirements and the capacity of the system. An embedded sensor with advanced processing and communications capabilities supplies high level information, abstracting from the data acquisition process and objects recognition mechanisms. The implementation of an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-aware based adaptation mechanism which adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted. Therefore, the processing must be adapted to perform tasks in a certain lapse of time but always ensuring a minimum process quality. In the same way, communications must try to reduce the data traffic without excluding relevant information. The main objective of the paper is to present a dynamic configuration mechanism to adapt the sensor processing and communication to the client's requirements in the DCS. This paper describes an implementation of a smart resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Towards a Global Service Registry for the World-Wide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro
2014-06-01
The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems, from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows information on both registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation and how it can support the evolution of information systems.
Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102
Web-based reactive transport modeling using PFLOTRAN
NASA Astrophysics Data System (ADS)
Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.
2017-12-01
Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs of commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. It comprises (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, and the planned path forward.
Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz
2014-01-01
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
2012-03-19
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and a VirtualBox Appliance are also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application-specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.
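For readers unfamiliar with how such a pre-built image is provisioned, here is a hedged sketch using boto3 to launch an on-demand EC2 instance from a machine image; the AMI ID, key pair, and security group are placeholders, and the actual Cloud BioLinux image identifiers should be taken from the project website.

```python
# Sketch of provisioning an on-demand instance from a pre-built VM image such as
# Cloud BioLinux on Amazon EC2 using boto3. AMI ID, key pair, and security group
# names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder for a Cloud BioLinux AMI
    InstanceType="m5.xlarge",          # size to the analysis at hand
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # existing EC2 key pair (placeholder)
    SecurityGroups=["bioinformatics"], # must allow SSH/remote-desktop access
)
instance_id = resp["Instances"][0]["InstanceId"]
print("launched", instance_id)
```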
Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community
2012-01-01
Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and a VirtualBox Appliance are also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application-specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The knowledge and effort required to deploy a compute cluster in the Amazon EC2 cloud are not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
Integration of XRootD into the cloud infrastructure for ALICE data analysis
NASA Astrophysics Data System (ADS)
Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey
2015-12-01
Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud allows software to be deployed using the CERN Virtual Machine (CernVM) and CernVM File System (CVMFS), different (including outdated) versions of software to be run for long-term data preservation, and resources to be dynamically allocated to different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage by local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the Cinder Block Storage service of OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform
NASA Astrophysics Data System (ADS)
Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian
2017-04-01
The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task can rely on a different software technology (such as GRASS GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform is this ability to interconnect and integrate, within the same processing flow, various well-known software technologies. All this integration is transparent to the user. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one, the software platform runs as a standalone application inside a virtual machine. In this case the computational resources are obviously limited, but this configuration gives an overview of the functionality of the software platform and allows the flow of processing to be defined and later executed on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved, depending on the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
Intelligent self-organization methods for wireless ad hoc sensor networks based on limited resources
NASA Astrophysics Data System (ADS)
Hortos, William S.
2006-05-01
A wireless ad hoc sensor network (WSN) is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support, and sensor nodes communicate with each other only when they are in transmission range. To a greater degree than the terminals found in mobile ad hoc networks (MANETs) for communications, sensor nodes are resource-constrained, with limited computational processing, bandwidth, memory, and power, and are typically unattended once in operation. Consequently, the level of information exchange among nodes needed to support complex adaptive algorithms for establishing network connectivity and optimizing throughput not only depletes those limited resources and creates high overhead in narrowband communications, but also increases network vulnerability to eavesdropping by malicious nodes. Cooperation among nodes, critical to the mission of sensor networks, can thus be disrupted by an inappropriate choice of the method for self-organization. Recently published contributions to the self-configuration of ad hoc sensor networks, e.g., self-organizing mapping and swarm intelligence techniques, have been based on the adaptive control of the cross-layer interactions found in MANET protocols to achieve one or more performance objectives: connectivity, intrusion resistance, power control, throughput, and delay. However, few studies have examined the performance of these algorithms when implemented with the limited resources of WSNs. In this paper, self-organization algorithms for the initiation, operation and maintenance of a network topology from a collection of wireless sensor nodes are proposed that improve the performance metrics significant to WSNs. The intelligent algorithm approach emphasizes low computational complexity, energy efficiency and robust adaptation to change, allowing distributed implementation with the actual limited resources of the cooperative nodes of the network. Extensions of the algorithms from flat topologies to two-tier hierarchies of sensor nodes are presented. Results from a few simulations of the proposed algorithms are compared to the published results of other approaches to sensor network self-organization in common scenarios. The estimated network lifetime and extent under static resource allocations are computed.
Aerodynamic shape optimization directed toward a supersonic transport using sensitivity analysis
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
This investigation was conducted from March 1994 to August 1995, primarily, to extend and implement the previously developed aerodynamic design optimization methodologies for problems related to a supersonic transport design. These methods had demonstrated promise to improve the designs (more specifically, the shape) of aerodynamic surfaces by coupling optimization algorithms (OA) with Computational Fluid Dynamics (CFD) algorithms via sensitivity analyses (SA) with surface definition methods from Computer Aided Design (CAD). The present extensions of this method and their supersonic implementations have produced wing section designs, delta wing designs, cranked-delta wing designs, and nacelle designs, all of which have been reported in the open literature. Although these configurations were too highly simplified to be of any practical or commercial use, they served the algorithmic and proof-of-concept objectives of the study very well. The primary cause of the configurational simplifications, other than the usual simplify-to-study-the-fundamentals rationale, was the premature closing of the project. After only the first year of the originally intended three-year term, both the funds and the computer resources supporting the project were abruptly cut due to severe shortages at the funding agency. Nonetheless, it was shown that the extended methodologies could be viable options in optimizing the design of not only an isolated single-component configuration, but also a multiple-component configuration in supersonic and viscous flow. This allowed designs in which the mutual interference of the components was one of the constraints throughout the evolution of the shapes.
Development and implementation of a PACS network and resource manager
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.
1992-07-01
Clinical acceptance of PACS is predicated upon maximum uptime. Upon component failure, detection, diagnosis, reconfiguration and repair must occur immediately. Our current PACS network is large, heterogeneous, complex and wide-spread geographically. The overwhelming number of network devices, computers and software processes involved in a departmental or inter-institutional PACS makes development of tools for network and resource management critical. The authors have developed and implemented a comprehensive solution (PACS Network-Resource Manager) using the OSI Network Management Framework with network element agents that respond to queries and commands for network management stations. Managed resources include: communication protocol layers for Ethernet, FDDI and UltraNet; network devices; computer and operating system resources; and application, database and network services. The Network-Resource Manager is currently being used for warning, fault, security violation and configuration modification event notification. Analysis, automation and control applications have been added so that PACS resources can be dynamically reconfigured and so that users are notified when active involvement is required. Custom data and error logging have been implemented that allow statistics for each PACS subsystem to be charted for performance data. The Network-Resource Manager allows our departmental PACS system to be monitored continuously and thoroughly, with a minimal amount of personal involvement and time.
Galaxy CloudMan: delivering cloud compute clusters
2010-01-01
Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The knowledge and effort required to deploy a compute cluster in the Amazon EC2 cloud are not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents that make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
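To make the notion of an aligned local utility concrete, the toy sketch below computes a difference utility for each job-placing agent, i.e. the world utility minus what it would have been without that agent's job. It illustrates the collective-style reward signal only and is not the scheduling algorithm evaluated in the paper; job loads and server capacities are arbitrary.

```python
# Toy illustration of a difference utility D_i = G(z) - G(z without agent i),
# the kind of aligned local reward a collective-based scheduler learns from.
import numpy as np

rng = np.random.default_rng(1)
n_jobs, n_servers = 12, 3
load = rng.uniform(0.5, 1.5, n_jobs)            # each job's resource demand
capacity = np.array([4.0, 6.0, 8.0])            # heterogeneous server capacities

def world_utility(assignment, include=None):
    """Negative total overload; `include` masks out one agent for D_i."""
    mask = np.ones(n_jobs, dtype=bool) if include is None else include
    used = np.zeros(n_servers)
    np.add.at(used, assignment[mask], load[mask])
    return -np.sum(np.maximum(used - capacity, 0.0))

assignment = rng.integers(0, n_servers, n_jobs)  # a candidate joint schedule
G = world_utility(assignment)

for i in range(n_jobs):
    without_i = np.ones(n_jobs, dtype=bool)
    without_i[i] = False
    D_i = G - world_utility(assignment, include=without_i)
    # D_i tells agent i how much its own placement helped or hurt the system.
    print(f"job {i}: server {assignment[i]}, difference utility {D_i:+.2f}")
```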
Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility
NASA Astrophysics Data System (ADS)
Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro
2014-06-01
In this work we describe the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionality of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and ALICE, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job types, such as serial, MPI, multi-threaded, whole-node and interactive jobs, can be managed. Tests of the use of ACLs on queues and, more generally, on other resources are then described. A particular SLURM feature we also verified is event triggers, which can be used to configure specific actions for each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Our requirements also include the possibility to deal with pre-execution and post-execution scripts, and controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before the execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we present several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
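As a small illustration of how such a configuration is exercised from the user side, the sketch below submits and monitors a job through SLURM's standard command-line tools from Python; the partition, QOS, and script names are site-specific placeholders, not the settings used in the tests described above.

```python
# Minimal SLURM submission/monitoring wrapper using the standard sbatch/squeue
# command-line tools. Partition, QOS, and script names are placeholders.
import subprocess

def submit(script, partition="main", qos="normal", ntasks=1):
    """Submit a batch script with sbatch and return the job ID."""
    out = subprocess.run(
        ["sbatch", "--parsable",
         f"--partition={partition}", f"--qos={qos}", f"--ntasks={ntasks}",
         script],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip().split(";")[0]

job_id = submit("analysis_job.sh", partition="cms", qos="high", ntasks=8)

# Query the scheduler for the job's state (PENDING, RUNNING, ...).
state = subprocess.run(
    ["squeue", "-j", job_id, "-h", "-o", "%T"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(job_id, state)
```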
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
Background As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Methods Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Results Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Conclusions Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics. PMID:24464852
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Myer, Robert R.; Latorella, Kara A.; Comstock, James R., Jr.
2011-01-01
The Multi-Attribute Task Battery (MAT Battery), a computer-based task designed to evaluate operator performance and workload, has been redeveloped to operate in the Windows XP Service Pack 3, Windows Vista and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options, including a graphical user interface for controlling modes of operation. MATB-II can be executed in either training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows set-up of the default timeouts for the tasks and of the pump flow rates and tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt.
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin
2014-05-01
During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in sizes from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560,640 equivalent cores. Scientific applications, such as CESM, are also required to demonstrate a "computational readiness capability" to efficiently scale across and utilize 20% of the entire system. The 0.25 deg configuration of the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow to facilitate high-resolution climate modelling. A high-speed center-wide parallel file system, called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archive is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search & discovery, is also used to deliver data. The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.
Lunar Applications in Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Somervill, Kevin
2008-01-01
NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery that enables applications to locate and utilize the available resources. Also, application interfaces are needed both for configuring the resources and for transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.
Wagner, Richard J.; Boulger, Robert W.; Oblinger, Carolyn J.; Smith, Brett A.
2006-01-01
The U.S. Geological Survey uses continuous water-quality monitors to assess the quality of the Nation's surface water. A common monitoring-system configuration for water-quality data collection is the four-parameter monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data. Such systems also can be configured to measure other properties, such as turbidity or fluorescence. Data from sensors can be used in conjunction with chemical analyses of samples to estimate chemical loads. The sensors that are used to measure water-quality field parameters require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. This report provides guidelines for site- and monitor-selection considerations; sensor inspection and calibration methods; field procedures; data evaluation, correction, and computation; and record-review and data-reporting processes, which supersede the guidelines presented previously in U.S. Geological Survey Water-Resources Investigations Report WRIR 00-4252. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
Towards optimizing server performance in an educational MMORPG for teaching computer programming
NASA Astrophysics Data System (ADS)
Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios
2013-10-01
Web-based games have become increasingly popular during the last few years. This is due to the gradual increase in internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed in order to be put into practice in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall load on the game's resources is essential, so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology with which we can monitor and optimize load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provided without overloading the system.
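A hedged sketch of the kind of server-side resource monitoring such a methodology relies on is given below, using psutil to sample CPU, memory, and network counters against illustrative thresholds; the thresholds and suggested actions are assumptions, not the paper's methodology.

```python
# Illustrative server resource monitoring loop; thresholds are arbitrary.
import time
import psutil

CPU_LIMIT, RAM_LIMIT = 80.0, 85.0   # percent thresholds (illustrative)

for _ in range(5):                   # a short sampling window
    cpu = psutil.cpu_percent(interval=1.0)
    ram = psutil.virtual_memory().percent
    net = psutil.net_io_counters()
    print(f"cpu={cpu:.0f}% ram={ram:.0f}% "
          f"sent={net.bytes_sent} recv={net.bytes_recv}")
    if cpu > CPU_LIMIT or ram > RAM_LIMIT:
        print("-> consider rebalancing players or adding server capacity")
    time.sleep(1)
```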
Using HPC within an operational forecasting configuration
NASA Astrophysics Data System (ADS)
Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.
2012-04-01
Various natural disasters are caused by high-intensity events, for example: extreme rainfall can in a short time cause major damage in river catchments, storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it's important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams become bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of distributed computational resources available to improve and accelerate the forecasting process (e.g. Montanari et al, 2006). We will illustrate the system by means of a couple of practical applications including the real-time dynamic forecasting of wind driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.
Krishnan, Ranjani; Walton, Emily B; Van Vliet, Krystyn J
2009-11-01
As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding. However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties. This informs selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.
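One simple way to operationalize the question of a minimum number of replicates is sketched below: bootstrap the standard error of the mean of a property distribution as a function of ensemble size and stop when it falls below a tolerance. The synthetic data and tolerance are placeholders, and the statistical criterion used in the study may differ.

```python
# Bootstrap estimate of how many replicate simulations are needed before the
# standard error of a property's mean drops below a tolerance. Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
population = rng.gumbel(loc=150.0, scale=20.0, size=5000)  # synthetic property values
tolerance = 2.0                                            # acceptable SEM (same units)

def bootstrap_sem(sample, n_boot=2000):
    means = [rng.choice(sample, sample.size, replace=True).mean()
             for _ in range(n_boot)]
    return np.std(means)

for n in (5, 10, 20, 40, 80):
    sample = rng.choice(population, n, replace=False)
    sem = bootstrap_sem(sample)
    print(f"n={n:3d}  bootstrap SEM={sem:5.2f}")
    if sem < tolerance:
        print(f"-> roughly {n} replicates suffice at this tolerance")
        break
```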
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
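The surrogate-plus-optimizer loop described above can be sketched as follows; the training data are synthetic stand-ins for the CFD results, and scikit-learn and SciPy stand in for the NASA Ames Levenberg-Marquardt trainer and the gradient-based optimizer used in the study.

```python
# Conceptual sketch: fit a neural-network surrogate to (rigging, alpha) -> lift
# data and maximize its prediction with a gradient-based optimizer.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: flap deflection, gap, overlap, angle of attack (normalized to [0, 1])
X = rng.uniform(0.0, 1.0, (400, 4))
# synthetic lift surface with an interior optimum, standing in for CFD data
y = -(np.sum((X - 0.6) ** 2, axis=1)) + 0.05 * rng.normal(size=X.shape[0])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

# Maximize predicted lift = minimize its negative over the design box.
res = minimize(lambda x: -surrogate.predict(x.reshape(1, -1))[0],
               x0=np.full(4, 0.5), bounds=[(0.0, 1.0)] * 4)
print("optimized rigging (normalized):", np.round(res.x, 3))
print("predicted lift proxy:", -res.fun)
```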
Telescience Support Center Data System Software
NASA Technical Reports Server (NTRS)
Rahman, Hasan
2010-01-01
The Telescience Support Center (TSC) team has developed a database-driven, increment-specific Data Requirement Document (DRD) generation tool that automates much of the work required for generating and formatting the DRD. It creates a database to load the required changes to configure the TSC data system, thus eliminating a substantial amount of labor in database entry and formatting. The TSC database contains the TSC systems configuration, along with the experimental data, in which human physiological data must be de-commutated in real time. The data for each experiment also must be cataloged and archived for future retrieval. TSC software provides tools and resources for ground operation and data distribution to remote users consisting of PIs (principal investigators), bio-medical engineers, scientists, engineers, payload specialists, and computer scientists. Operations support is provided for computer systems access, detailed networking, and mathematical and computational problems of the International Space Station telemetry data. User training is provided for on-site staff and biomedical researchers and other remote personnel in the usage of the space-bound services via the Internet, which enables significant resource savings for the physical facility along with the time savings versus traveling to NASA sites. The software used in support of the TSC could easily be adapted to other Control Center applications. This would include not only other NASA payload monitoring facilities, but also other types of control activities, such as monitoring and control of the electric grid, chemical, or nuclear plant processes, air traffic control, and the like.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
Simulation of multistage turbine flows
NASA Technical Reports Server (NTRS)
Adamczyk, John J.; Mulac, Richard A.
1987-01-01
A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded within a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation, code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.
Contributing opportunistic resources to the grid with HTCondor-CE-Bosco
NASA Astrophysics Data System (ADS)
Weitzel, Derek; Bockelman, Brian
2017-10-01
The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller WLCG Tier-3 sites or opportunistic clusters it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.
Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets.
Heath, Allison P; Greenway, Matthew; Powell, Raymond; Spring, Jonathan; Suarez, Rafael; Hanley, David; Bandlamudi, Chai; McNerney, Megan E; White, Kevin P; Grossman, Robert L
2014-01-01
As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze it. Bionimbus is an open source cloud-computing platform that is based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, which is a high-performance clustered file system. Bionimbus also includes Tukey, which is a portal, and associated middleware that provides a single entry point and a single sign on for the various Bionimbus resources; and Yates, which automates the installation, configuration, and maintenance of the software infrastructure required. Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB for each sample. Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics.
GI-conf: A configuration tool for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.
2009-04-01
In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the status of the queries to the caller by implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. However, two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. They include international standards like the OGC Web Services - i.e. OGC CSW, WCS, WFS and WMS - as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. This is a GUI application that employs a visual and very simple approach to configure both the GI-cat publishing and distribution capabilities in a dynamic way. The tool allows one or more GI-cat configurations to be set. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated - i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; besides, she/he can add a new resource, and update or remove an already federated resource. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the current active GI-cat configuration, including the resources that are being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.
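Since GI-cat publishes standard OGC CSW interfaces, a catalog configured with GI-conf can be queried with generic CSW clients; the hedged sketch below uses OWSLib against a placeholder endpoint.

```python
# Query a CSW interface such as those published by GI-cat using OWSLib.
# The endpoint URL is a placeholder; deployments expose their own addresses.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/gi-cat/services/cswiso")  # placeholder

query = PropertyIsLike("csw:AnyText", "%temperature%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, "-", rec.title)
```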
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
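One plausible form of the local-versus-cloud trade-off mentioned above is sketched below, comparing serial local wall-clock time against parallel cloud time and cost with per-instance-hour pricing; all numbers are illustrative placeholders and the authors' exact formulae may differ.

```python
# Illustrative cost/benefit comparison of local serial execution vs. parallel
# cloud execution. All prices, overheads, and job counts are placeholders.
def local_hours(n_jobs, hours_per_job):
    return n_jobs * hours_per_job                      # strictly serial

def cloud_hours(n_jobs, hours_per_job, n_instances, setup_hours=0.5):
    return setup_hours + (n_jobs / n_instances) * hours_per_job

def cloud_cost(n_jobs, hours_per_job, n_instances,
               price_per_instance_hour=0.40, transfer_cost=5.0):
    hours = cloud_hours(n_jobs, hours_per_job, n_instances)
    return n_instances * hours * price_per_instance_hour + transfer_cost

n, h = 200, 0.75                                        # e.g. 200 DTI subjects
print("local    :", local_hours(n, h), "h, $0 marginal cost")
for k in (8, 32, 64):
    print(f"cloud x{k:2d}:", round(cloud_hours(n, h, k), 1), "h, $",
          round(cloud_cost(n, h, k), 2))
```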
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-03-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-01-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.
Software Manages Documentation in a Large Test Facility
NASA Technical Reports Server (NTRS)
Gurneck, Joseph M.
2001-01-01
The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.
Variable Generation Power Forecasting as a Big Data Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haupt, Sue Ellen; Kosovic, Branko
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
Variable Generation Power Forecasting as a Big Data Problem
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
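As a hedged illustration of the blending step described above (not the forecasting system itself), the following Python sketch combines forecasts from several hypothetical models using weights that depend on the forecast lead time; the model names, weights, and values are invented.

def blend_forecasts(forecasts, weights_by_lead_time, lead_time_h):
    """forecasts: dict model_name -> forecast power (MW);
    weights_by_lead_time: dict model_name -> {lead_time_h: weight}."""
    num, den = 0.0, 0.0
    for model, value in forecasts.items():
        w = weights_by_lead_time[model].get(lead_time_h, 0.0)
        num += w * value
        den += w
    return num / den if den else None

if __name__ == "__main__":
    forecasts = {"nwp_model": 310.0, "persistence": 295.0, "ml_blend": 305.0}   # hypothetical models
    weights = {
        "nwp_model":   {1: 0.2, 24: 0.6},   # assumed: NWP dominates day-ahead horizons
        "persistence": {1: 0.6, 24: 0.1},   # assumed: persistence dominates nowcasts
        "ml_blend":    {1: 0.2, 24: 0.3},
    }
    print("1 h ahead :", blend_forecasts(forecasts, weights, 1))
    print("24 h ahead:", blend_forecasts(forecasts, weights, 24))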
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Radenski, Atanas
2003-01-01
The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever changing pool of lower-end Internet nodes.
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.
1993-01-01
The June 1992 to May 1993 grant NCC-2-677 provided for the continued demonstration of Computational Fluid Dynamics (CFD) as applied to the Stratospheric Observatory for Infrared Astronomy (SOFIA). While earlier grant years allowed validation of CFD through comparison against experiments, this year a new design proposal was evaluated. The new configuration would place the cavity aft of the wing, as opposed to the earlier baseline which was located immediately aft of the cockpit. This aft cavity placement allows for simplified structural and aircraft modification requirements, thus lowering the program cost of this national astronomy resource. Three appendices concerning this subject are presented.
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
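A minimal sketch of the constrained design-space search described above, under invented cost and resource models: enumerate candidate vector-architecture parameters, discard those exceeding an assumed FPGA logic budget, and keep the configuration with the lowest estimated execution time. None of the numbers or formulas below come from the dissertation.

from itertools import product

LUT_BUDGET = 50_000   # assumed available logic resources

def resources(lanes, vlen):
    return 4_000 * lanes + 10 * vlen             # assumed LUT usage model

def est_exec_time(lanes, vlen, n_ops=1_000_000):
    startup = 50.0 / vlen                         # shorter vectors -> more startup overhead (assumed)
    return n_ops / lanes * (1.0 + startup)        # assumed cycle-count model

def best_configuration():
    feasible = [(l, v) for l, v in product([1, 2, 4, 8, 16], [64, 128, 256, 512])
                if resources(l, v) <= LUT_BUDGET]
    return min(feasible, key=lambda lv: est_exec_time(*lv))

if __name__ == "__main__":
    lanes, vlen = best_configuration()
    print(f"chosen: {lanes} lanes, vector length {vlen}, "
          f"~{est_exec_time(lanes, vlen):.0f} estimated cycles")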
Portable classroom leads to partnership.
Le Ber, Jeanne Marie; Lombardo, Nancy T; Weber, Alice; Bramble, John
2004-01-01
Library faculty participation on the School of Medicine Curriculum Steering Committee led to a unique opportunity to partner technology and teaching utilizing the library's portable wireless classroom. The pathology lab course master expressed a desire to revise the curriculum using patient cases and direct access to the Web and library resources. Since the pathology lab lacked computers, the library's portable wireless classroom provided a solution. Originally developed to provide maximum portability and flexibility, the wireless classroom consists of ten laptop computers configured with wireless cards and an access point. While the portable wireless classroom led to a partnership with the School of Medicine, there were additional benefits and positive consequences for the library.
CloudMan as a platform for tool, data, and analysis distribution.
Afgan, Enis; Chapman, Brad; Taylor, James
2012-11-27
Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.
Jammer Localization Using Wireless Devices with Mitigation by Self-Configuration
Ashraf, Qazi Mamoon; Habaebi, Mohamed Hadi; Islam, Md. Rafiqul
2016-01-01
Communication abilities of a wireless network decrease significantly in the presence of a jammer. This paper presents a reactive technique to detect and locate a jammer using a distributed collection of wireless sensor devices. We employ the theory of autonomic computing as a framework for the design. Upon detection of a jammer, the affected nodes self-configure their power consumption, which stops unnecessary waste of battery resources. The scheme then proceeds to determine the approximate location of the jammer by analysing the location of active nodes as well as the affected nodes. This is done by employing a circular curve fitting algorithm. Results indicate that a high degree of accuracy in localizing the jammer has been achieved. PMID:27583378
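As an illustration of the circular curve-fitting step mentioned above (a generic least-squares circle fit, not necessarily the paper's exact algorithm), the following Python sketch estimates a jammer position as the centre of a circle fitted to synthetic boundary-node coordinates.

import numpy as np

def fit_circle(points):
    """points: (N, 2) array of boundary-node positions. Returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    # Kasa linear least squares: 2*cx*x + 2*cy*y + c = x^2 + y^2, with c = r^2 - cx^2 - cy^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 40)
    # synthetic noisy boundary nodes around a jammer at (5, -2) with jamming radius 3
    pts = np.column_stack([5 + 3 * np.cos(theta), -2 + 3 * np.sin(theta)])
    pts += rng.normal(0, 0.1, pts.shape)
    print(fit_circle(pts))    # approximately (5, -2, 3)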
Uniformity on the grid via a configuration framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V Terekhov et al.
2003-03-11
As Grid permeates modern computing, Grid solutions continue to emerge and take shape. The actual Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
Architecture for the Next Generation System Management Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallard, Jerome; Lebre, I Adrien; Morin, Christine
2011-01-01
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as Clusters, Grids and Clouds. These platforms are different in terms of hardware and software resources as well as locality: some span across multiple sites and multiple administrative domains whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up, the scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists who, in most cases, would prefer to focus on their research rather than spending time dealing with platform configuration concerns. In this article, we advocate for a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regards to usual approaches is that they generally only focus on the software layer whereas we address both the hardware and the software expectations through a unique system. For each application, scientists describe their requirements through the definition of a Virtual Platform (VP) and a Virtual System Environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of: (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the setup of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration for the physical resources and system management tools. This formalism leverages Goldberg's theory for recursive virtual machines by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and the software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid 5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula or Eucalyptus for instance.
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-01-01
Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-03-11
Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.
ERIC Educational Resources Information Center
Newton, Jan N.; And Others
Two separate NIE research projects in higher education, closely related in substance and complementary, were undertaken in Oregon in 1973-75. During the first year, the objectives were to: (1) compute and analyze various configurations of student schooling costs and financial resources according to institutional type and to student sex and…
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
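A hedged sketch in the spirit of the blocks-and-threads selection described for the TLPOM: pick a CUDA launch configuration that respects simple per-block register and shared-memory limits. The limits, the candidate thread counts, and the occupancy proxy are illustrative assumptions, not the paper's model.

def best_launch_config(n_items, regs_per_thread, smem_per_thread,
                       max_regs_per_block=65_536, max_smem_per_block=49_152,
                       max_threads_per_block=1_024):
    """Return (blocks, threads_per_block) for a 1-D kernel, or None if infeasible."""
    best = None
    for threads in (128, 256, 512, 1024):
        if threads > max_threads_per_block:
            continue
        if threads * regs_per_thread > max_regs_per_block:
            continue                               # register file of the block exceeded
        if threads * smem_per_thread > max_smem_per_block:
            continue                               # shared memory of the block exceeded
        blocks = (n_items + threads - 1) // threads
        # crude occupancy proxy: prefer the largest feasible block size
        if best is None or threads > best[1]:
            best = (blocks, threads)
    return best

if __name__ == "__main__":
    print(best_launch_config(n_items=1_000_000, regs_per_thread=48, smem_per_thread=16))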
Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes
2012-01-01
Improvements in genome sequencing techniques have resulted in generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines. However, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best performance possible due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes into account these peculiarities. To achieve this purpose, we developed an algorithm aimed at optimizing the operation of the de novo assembly software ABySS in grids. We ran ABySS with and without the algorithm we developed in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and it improved the genome assembly time in computational grids without changing its quality. PMID:22461785
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
NASA Technical Reports Server (NTRS)
Michal, Todd R.
1998-01-01
This study supports the NASA Langley sponsored project aimed at determining the viability of using Euler technology for preliminary design use. The primary objective of this study was to assess the accuracy and efficiency of the Boeing, St. Louis unstructured grid flow field analysis system, consisting of the MACGS grid generation and NASTD flow solver codes. Euler solutions about the Aero Configuration/Weapons Fighter Technology (ACWFT) 1204 aircraft configuration were generated. Several variations of the geometry were investigated including a standard wing, cambered wing, deflected elevon, and deflected body flap. A wide range of flow conditions, most of which were in the non-linear regimes of the flight envelope, including variations in speed (subsonic, transonic, supersonic), angles of attack, and sideslip were investigated. Several flowfield non-linearities were present in these solutions including shock waves, vortical flows and the resulting interactions. The accuracy of this method was evaluated by comparing solutions with test data and Navier-Stokes solutions. The ability to accurately predict lateral-directional characteristics and control effectiveness was investigated by computing solutions with sideslip, and with deflected control surfaces. Problem set up times and computational resource requirements were documented and used to evaluate the efficiency of this approach for use in the fast paced preliminary design environment.
Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft
NASA Technical Reports Server (NTRS)
Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.
2003-01-01
Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.
1999-01-01
Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.
Need for speed: An optimized gridding approach for spatially explicit disease simulations.
Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom
2018-04-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
Need for speed: An optimized gridding approach for spatially explicit disease simulations
Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom
2018-01-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power. PMID:29624574
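The cell-level filtering idea can be sketched as follows; this is a simplified illustration with an invented distance kernel, not the published, statistically exact algorithm: bound the infection probability of an entire cell from its nearest point to the infectious source, reject the whole cell with the complementary probability, and only evaluate individual farms when the cell survives that cheap test.

import math, random

def kernel(d, beta=1.0, d0=5.0):
    """Assumed distance kernel for the transmission rate (placeholder, not the published one)."""
    return beta / (1.0 + (d / d0) ** 2)

def p_inf(rate):
    """Probability that one susceptible farm is infected during one time step."""
    return 1.0 - math.exp(-rate)

def transmissions_from(cells, rng=random):
    """cells: list of (distance_to_nearest_point_of_cell, [farm_distances]).
    Returns the distances of farms infected this step."""
    new_cases = []
    for min_dist, farm_dists in cells:
        p_max = p_inf(kernel(min_dist))                   # bound for any farm in the cell
        p_cell = 1.0 - (1.0 - p_max) ** len(farm_dists)   # P(at least one candidate in the cell)
        if rng.random() > p_cell:
            continue                                      # cheap rejection of the whole cell
        for d in farm_dists:                              # evaluate surviving cells farm by farm
            if rng.random() < p_inf(kernel(d)) / p_max:   # thin each farm against the bound
                new_cases.append(d)
    return new_cases

if __name__ == "__main__":
    random.seed(0)
    cells = [(2.0, [2.5, 3.0, 4.0]), (500.0, [501.0 + i for i in range(500)])]
    print(transmissions_from(cells))   # the distant, dense cell is usually rejected without touching its farms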
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms. Thus, this framework helps to decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the TDI-CCD electronics together with a re-sampling process, and 4) data integration. Processes 1) to 3) utilize diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require a powerful CPU. Even with an Intel Xeon X5550 processor, a conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 * 1462. A literature study showed that there is no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF [1], uses a Client/Server (C/S) layer and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to that free computing capacity. Ultimately we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework could provide virtually unlimited computation capacity provided that the network and the task management server are affordable. This is a new HPC solution for TDI-CCD imaging simulation and similar applications.
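As an illustration of the strategy-pattern idea mentioned above (not the paper's actual algorithms), the following Python sketch treats each degradation stage of the imaging chain as a pluggable strategy so that a server could dispatch any subset of stages to a free worker; the stage implementations are trivial placeholders.

from typing import Callable, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def atmosphere_blur(img: np.ndarray) -> np.ndarray:
    # placeholder for the atmospheric degradation (e.g. an MTF applied in the Fourier domain)
    return np.fft.ifft2(np.fft.fft2(img) * 0.95).real

def optics_blur(img: np.ndarray) -> np.ndarray:
    # placeholder for the optical-system degradation
    return (img + np.roll(img, 1, axis=0)) / 2.0

def ccd_resample(img: np.ndarray) -> np.ndarray:
    # placeholder for the TDI-CCD electronics and re-sampling step
    return img[::2, ::2]

def run_pipeline(img: np.ndarray, stages: List[Stage]) -> np.ndarray:
    for stage in stages:          # strategies are interchangeable at run time
        img = stage(img)
    return img

if __name__ == "__main__":
    scene = np.random.default_rng(0).random((512, 512))
    out = run_pipeline(scene, [atmosphere_blur, optics_blur, ccd_resample])
    print(out.shape)              # (256, 256)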
Reconfigurable Processing Module
NASA Technical Reports Server (NTRS)
Somervill, Kevin; Hodson, Robert; Jones, Robert; Williams, John
2005-01-01
To accommodate a wide spectrum of applications and technologies, NASA's Exploration Systems Mission Directorate has called for reconfigurable and modular technologies to support future missions to the moon and Mars. In response, Langley Research Center is leading a program entitled Reconfigurable Scaleable Computing (RSC) that is centered on the development of FPGA-based computing resources in a stackable form factor. This paper details the architecture and implementation of the Reconfigurable Processing Module (RPM), which is the key element of the RSC system. The RPM is an FPGA-based, space-qualified printed circuit assembly leveraging terrestrial/commercial design standards into the space applications domain. The form factor is similar to, and backwards compatible with, the PCI-104 standard utilizing only the PCI interface. The size is expanded to accommodate the required functionality while still more than 30% smaller than a 3U CompactPCI(TM) card and without the overhead of the backplane. The architecture is built around two FPGA devices, one hosting PCI and memory interfaces, and another hosting mission application resources, both of which are connected by a high-speed data bus. The PCI interface FPGA provides access via the PCI bus to onboard SDRAM, flash PROM, and the application resources, supporting both configuration management and runtime interaction. The reconfigurable FPGA, referred to as the Application FPGA - or simply "the application" - is a radiation-tolerant Xilinx Virtex-4 FX60 hosting custom application-specific logic or soft microprocessor IP. The RPM implements various SEE mitigation techniques including TMR, EDAC, and configuration scrubbing of the reconfigurable FPGA. Prototype hardware and formal modeling techniques are used to explore the performability trade space. These models provide a novel way to calculate quality-of-service performance measures while simultaneously considering fault-related behavior due to SEE soft errors.
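A software model of the triple-modular-redundancy (TMR) voting mentioned above can make the idea concrete; this is only an illustration of the concept, not the RPM's FPGA implementation.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bit-wise majority of three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

if __name__ == "__main__":
    golden = 0b1011_0110
    upset = golden ^ 0b0000_1000                 # single bit flipped, e.g. by an SEU
    assert tmr_vote(golden, golden, upset) == golden
    print("masked single-bit upset:", bin(tmr_vote(golden, golden, upset)))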
Spontaneous Ad Hoc Mobile Cloud Computing Network
Lacuesta, Raquel; Sendra, Sandra; Peñalver, Lourdes
2014-01-01
Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we are going to present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents a good efficiency and network performance even when using a high number of nodes. PMID:25202715
Spontaneous ad hoc mobile cloud computing network.
Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes
2014-01-01
Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we are going to present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents a good efficiency and network performance even when using a high number of nodes.
Numerical simulation of CdTe vertical Bridgman growth
NASA Astrophysics Data System (ADS)
Ouyang, Hong; Shyy, Wei
1997-04-01
Numerical simulation has been conducted for steady-state Bridgman growth of the CdTe crystal with two ampoule configurations, namely, flat base and semi-spherical base. The present model accounts for conduction, convection and radiation, as well as phase change dynamics. The enthalpy formulation for phase change has been incorporated into a pressure-based algorithm with multi-zone curvilinear grid systems. The entire system which consists of the furnace enclosure wall, the encapsulated gas and the ampoule, contains irregularly configured domains. To meet the competing needs of producing accurate solutions with reasonable computing resources, a two-level approach is employed. The present study reveals that although the two ampoule configurations are quite different, their influence on the melt-solid interface shape is modest, and the undesirable concave interface appears in both cases. Since the interface shape strongly depends on thermal conductivities between the melt and the crystal, as well as ampoule wall temperature, accurate prescriptions of materials transport properties and operating environment are crucial for successful numerical predictions.
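For reference, a generic textbook statement of the enthalpy formulation for solid-liquid phase change (not necessarily the exact equations solved in the cited work) writes the total enthalpy as sensible plus latent heat and advances it with the convective energy equation:

\[
H = \int_{T_{\mathrm{ref}}}^{T} c_p \, dT + f_\ell \, L, \qquad
\frac{\partial (\rho H)}{\partial t} + \nabla \cdot (\rho \mathbf{u} H) = \nabla \cdot (k \nabla T),
\]

where $f_\ell \in [0,1]$ is the local liquid fraction and $L$ the latent heat of fusion; the melt-solid interface is then recovered as the region where $f_\ell$ passes between 0 and 1 rather than being tracked explicitly.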
CloudMan as a platform for tool, data, and analysis distribution
2012-01-01
Background Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. Results CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. Conclusions With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions. PMID:23181507
Resource configuration and abundance affect space use of a cooperatively breeding resident bird
Richard A. Stanton; Dylan C. Kesler; Frank R. Thompson III
2014-01-01
Movement and space use of birds is driven by activities associated with acquiring and maintaining access to critical resources. Thus, the spatial configuration of resources within home ranges should influence bird movements, and resource values should be relative to their locations. We radio-tracked 22 Brown-headed Nuthatches (Sitta pusilla) and...
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architecture. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabrics. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources, thus provides an attractive scheme for the construction of logic-in-memory systems.
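As an illustration of the Boolean behaviour of the 1-bit full adder mentioned above (the logic only, not the RRAM device physics or the in-array sequencing), the following Python sketch builds the adder from XOR and NAND primitives and verifies its truth table arithmetically.

def xor(a, b):
    return a ^ b

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    s1 = xor(a, b)
    total = xor(s1, cin)
    # carry = (a AND b) OR (s1 AND cin), built here from NAND gates only
    carry = nand(nand(a, b), nand(s1, cin))
    return total, carry

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert 2 * cout + s == a + b + cin   # arithmetic check of the truth table
    print("full adder truth table verified")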
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Background Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation.
Optimization Under Uncertainty of Site-Specific Turbine Configurations
NASA Astrophysics Data System (ADS)
Quick, J.; Dykes, K.; Graf, P.; Zahle, F.
2016-09-01
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
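A hedged sketch of the optimization-under-uncertainty idea described above: compare two invented turbine configurations by a risk-weighted cost-of-energy objective (mean plus a risk-aversion weight times the standard deviation) under an uncertain site mean wind speed. The cost curves, wind statistics, and weights are placeholders, not the study's empirical model.

import random, statistics

COST_CURVES = {
    # invented response of cost of energy to the realized mean wind speed
    "aggressive":   lambda v: 33.0 / max(v - 4.0, 1.0),   # cheap if the resource is good, large downside otherwise
    "conservative": lambda v: 60.0 / max(v - 2.0, 1.0),   # flatter response, less downside
}

def risk_weighted_coe(curve, wind_mu, wind_sigma, lam, n=20000, seed=1):
    """Monte Carlo estimate of mean + lam * std of the cost of energy."""
    rng = random.Random(seed)
    samples = [curve(rng.gauss(wind_mu, wind_sigma)) for _ in range(n)]
    return statistics.mean(samples) + lam * statistics.pstdev(samples)

if __name__ == "__main__":
    for wind_sigma in (0.2, 2.0):          # well-characterized vs highly uncertain resource
        for lam in (0.0, 1.0):             # risk-neutral vs risk-averse designer
            scores = {name: risk_weighted_coe(c, 7.5, wind_sigma, lam)
                      for name, c in COST_CURVES.items()}
            best = min(scores, key=scores.get)
            print(f"sigma={wind_sigma}, lambda={lam}: prefer {best}  {scores}")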
IO Management Controller for Time and Space Partitioning Architectures
NASA Astrophysics Data System (ADS)
Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien
2015-09-01
Integrated Modular Avionics (IMA) has been industrialized in the aeronautical domain to enable the independent qualification of different application software from different suppliers on the same generic computer, the latter being a single terminal in a deterministic network. This concept made it possible to distribute the different applications efficiently and transparently across the network and to size accurately the HW equipment to embed on the aircraft, through the configuration of the virtual computers and the virtual network. This concept has been studied for the space domain and requirements have been issued [D04], [D05]. Experiments in the space domain have been done, at the computer level, through ESA and CNES initiatives [D02] [D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as the CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space domain technologies where I/O components (or IPs) do not offer advanced features such as buffering, descriptors or virtualization, the CPU performance overhead is mainly due to shared interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper presents a solution to reduce this execution overhead with an open, modular and configurable controller.
User's guide to the NOZL3D and NOZLIC computer programs
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
Complete FORTRAN listings and running instructions are given for a set of computer programs that perform an implicit numerical solution to the unsteady Navier-Stokes equations to predict the flow characteristics and performance of nonaxisymmetric nozzles. The set includes the NOZL3D program, which performs the flow computations; the NOZLIC program, which sets up the flow field initial conditions for general nozzle configurations, and also generates the computational grid for simple two dimensional and axisymmetric configurations; and the RGRIDD program, which generates the computational grid for complicated three dimensional configurations. The programs are designed specifically for the NASA-Langley CYBER 175 computer, and employ auxiliary disk files for primary data storage. Input instructions and computed results are given for four test cases that include two dimensional, three dimensional, and axisymmetric configurations.
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
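One common adaptive-seeding heuristic ("least counts") can serve as a concrete, if generic, illustration of redirecting ensemble resources toward poorly explored regions; it is not the specific strategy developed in the work above. After each round of short simulations, the visited configurations are assigned to states and the next round of trajectories is launched from the least-visited states.

import numpy as np
from collections import Counter

def assign_states(frames, n_states=10, rng=np.random.default_rng(0)):
    """Toy state assignment: bin a 1-D random projection of each frame (placeholder for a learned coordinate)."""
    proj = frames @ rng.normal(size=frames.shape[1])
    edges = np.linspace(proj.min(), proj.max(), n_states + 1)
    return np.clip(np.digitize(proj, edges) - 1, 0, n_states - 1)

def pick_restart_states(state_labels, n_new_trajectories):
    """Seed the next round from the states visited least often."""
    counts = Counter(state_labels)
    ranked = sorted(counts, key=counts.get)               # least-visited first
    return [ranked[i % len(ranked)] for i in range(n_new_trajectories)]

if __name__ == "__main__":
    frames = np.random.default_rng(1).normal(size=(5000, 30))   # synthetic stand-in for trajectory data
    labels = assign_states(frames)
    print("seed next round from states:", pick_restart_states(labels, 8))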
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
Inviscid Flow Computations of Several Aeroshell Configurations for a '07 Mars Lander
NASA Technical Reports Server (NTRS)
Prabhu, Ramadas K.
2001-01-01
This report documents the results of an inviscid computational study conducted on several candidate aeroshell configurations for a proposed '07 Mars lander. Eleven different configurations were considered, and the aerodynamic characteristics of each of these were computed for a Mach number of 23.7 at 10, 15, and 20 degree angles of attack. The unstructured grid software FELISA with the equilibrium Mars gas option was used for these computations. The pitching moment characteristics and the lift-to-drag ratios at trim angle of attack of each of these configurations were examined to make a selection. The criterion for selection was that the configuration should be longitudinally stable, and should trim at an angle of attack where the L/D is -0.25. Based on the present study, two configurations were selected for further study
Integration of High-Performance Computing into Cloud Computing Services
NASA Astrophysics Data System (ADS)
Vouk, Mladen A.; Sills, Eric; Dreher, Patrick
High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
Remote control system for high-performance computer simulation of crystal growth by the PFC method
NASA Astrophysics Data System (ADS)
Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei
2017-04-01
Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems with this method, it is necessary to use high-performance computing clusters, data storage systems and other complex, often expensive computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings on different computing clusters sometimes does not allow researchers to use a unified program code; the code has to be adapted to each configuration of the computing complex. The practical experience of the authors has shown that a dedicated control system for computations, usable remotely, can greatly simplify the execution of simulations and increase the productivity of scientific research. In the current paper we present the principal idea of such a system and justify its efficiency.
Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems
Fonseca Guerra, Gabriel A.; Furber, Steve B.
2017-01-01
Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart. PMID:29311791
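The following Python sketch illustrates the underlying principle on a toy map-coloring instance; it is not the SpiNNaker/SNN implementation described in the paper, only a plain software analogue in which random moves play the role of the neural noise, driving the search toward the configuration that satisfies all constraints.

import random

def stochastic_coloring(nodes, edges, n_colors=3, steps=10000, noise=0.2, seed=1):
    rng = random.Random(seed)
    nbrs = {v: [] for v in nodes}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    coloring = {v: rng.randrange(n_colors) for v in nodes}
    for _ in range(steps):
        conflicted = [v for v in nodes
                      if any(coloring[v] == coloring[u] for u in nbrs[v])]
        if not conflicted:
            return coloring                       # attractor: all constraints satisfied
        v = rng.choice(conflicted)
        if rng.random() < noise:                  # noisy exploratory move
            coloring[v] = rng.randrange(n_colors)
        else:                                     # greedy repair move
            coloring[v] = min(range(n_colors),
                              key=lambda c: sum(coloring[u] == c for u in nbrs[v]))
    return None

# Toy instance: 3-coloring the mainland Australian states/territories.
nodes = ["WA", "NT", "SA", "Q", "NSW", "V"]
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
print(stochastic_coloring(nodes, edges))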
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.
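A minimal example of the scalability bookkeeping discussed above: given wall-clock timings of the same simulation run on increasing numbers of workers (the timings below are invented purely to show the arithmetic, not measured results from the paper), speedup and parallel efficiency follow directly.

# Hypothetical wall-clock timings (seconds) for the same run on 1..16 workers.
timings = {1: 3600.0, 2: 1870.0, 4: 980.0, 8: 540.0, 16: 330.0}

t1 = timings[1]
for p in sorted(timings):
    speedup = t1 / timings[p]
    efficiency = speedup / p
    print(f"{p:3d} workers: speedup = {speedup:5.2f}, efficiency = {efficiency:6.1%}")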
Looking at Earth from space: Direct readout from environmental satellites
NASA Technical Reports Server (NTRS)
1994-01-01
Direct readout is the capability to acquire information directly from meteorological satellites. Data can be acquired from NASA-developed, National Oceanic and Atmospheric Administration (NOAA)-operated satellites, as well as from other nations' meteorological satellites. By setting up a personal computer-based ground (Earth) station to receive satellite signals, direct readout may be obtained. The electronic satellite signals are displayed as images on the computer screen. The images can display gradients of the Earth's topography and temperature, cloud formations, the flow and direction of winds and water currents, the formation of hurricanes, the occurrence of an eclipse, and a view of Earth's geography. Both visible and infrared images can be obtained. This booklet introduces the satellite systems, ground station configuration, and computer requirements involved in direct readout. Also included are lists of associated resources and vendors.
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
Rapid exploration of configuration space with diffusion-map-directed molecular dynamics.
Zheng, Wenwei; Rohrdanz, Mary A; Clementi, Cecilia
2013-10-24
The gap between the time scale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits molecular dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named diffusion-map-directed MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here, we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems, we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD.
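A minimal, illustrative sketch of the diffusion-map step that directs the sampling: build a Gaussian-kernel diffusion map over the frames collected so far and restart trajectories from frames lying at the edge of the explored region in diffusion space. The kernel width, number of coordinates and frontier criterion are illustrative choices, not the authors' actual DM-d-MD settings.

import numpy as np

def diffusion_map(X, eps=0.5, n_coords=2):
    # Gaussian kernel on pairwise distances, row-normalised into a Markov matrix;
    # the leading non-trivial eigenvectors give the diffusion coordinates.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * eps ** 2))
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_coords + 1]   # skip the constant eigenvector
    return vecs.real[:, order] * vals.real[order]

def frontier_frames(X, n_restarts=5):
    # Frames far from the centre of the explored region in diffusion space are
    # natural restart points: launching MD from them pushes exploration outward.
    dmap = diffusion_map(X)
    dist = np.linalg.norm(dmap - dmap.mean(axis=0), axis=1)
    return np.argsort(-dist)[:n_restarts]

X = np.random.default_rng(0).normal(size=(200, 3))   # stand-in for sampled MD frames
print(frontier_frames(X))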
Rapid Exploration of Configuration Space with Diffusion Map-directed-Molecular Dynamics
Zheng, Wenwei; Rohrdanz, Mary A.; Clementi, Cecilia
2013-01-01
The gap between the timescale of interesting behavior in macromolecular systems and that which our computational resources can afford oftentimes limits Molecular Dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named Diffusion Map-directed-MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD. PMID:23865517
Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud
NASA Astrophysics Data System (ADS)
Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok
Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g. Amazon EC2) or software-as-a-service offerings (e.g. Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. The vague value structures seem to be hindering business adoption and the creation of sustainable business models around the technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations and value structures, this Chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is upgraded to fit the diversity of business service scenarios in the Cloud computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.
Review of the Water Resources Information System of Argentina
Hutchison, N.E.
1987-01-01
A representative of the U.S. Geological Survey traveled to Buenos Aires, Argentina, in November 1986, to discuss water information systems and data bank implementation in the Argentine Government Center for Water Resources Information. Software has been written by Center personnel for a minicomputer to be used to manage inventory (index) data and water quality data. Additional hardware and software have been ordered to upgrade the existing computer. Four microcomputers, statistical and data base management software, and network hardware and software for linking the computers have also been ordered. The Center plans to develop a nationwide distributed data base for Argentina that will include the major regional offices as nodes. Needs for continued development of the water resources information system for Argentina were reviewed. Identified needs include: (1) conducting a requirements analysis to define the content of the data base and insure that all user requirements are met, (2) preparing a plan for the development, implementation, and operation of the data base, and (3) developing a conceptual design to inform all development personnel and users of the basic functionality planned for the system. A quality assurance and configuration management program to provide oversight to the development process was also discussed. (USGS)
Optimization Under Uncertainty of Site-Specific Turbine Configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, J.; Dykes, K.; Graf, P.
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
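A hedged sketch of the optimization-under-uncertainty idea, using a toy stand-in for the empirical cost-of-energy model (none of the numbers or functional forms come from the study): wind-resource uncertainty is sampled by Monte Carlo, and the design score is the mean cost of energy plus a risk-aversion multiple of its standard deviation, so that increasing risk aversion pushes the optimum toward a steadier, more conservative configuration.

import numpy as np

def coe_samples(design, winds):
    # Toy stand-in for an empirical cost-of-energy model: capital cost grows with
    # the rotor-size parameter, captured power is cubic in wind speed up to a
    # fixed rated power. None of these numbers come from the study.
    capital = 100.0 + 40.0 * design ** 2
    power = np.minimum(design ** 2 * winds ** 3, 500.0)
    return capital / power

def optimal_design(mean_wind, wind_std, risk_aversion, seed=0):
    rng = np.random.default_rng(seed)
    winds = rng.normal(mean_wind, wind_std, 5000).clip(min=3.0)   # sampled wind resource
    designs = np.linspace(0.5, 5.0, 200)
    scores = []
    for d in designs:
        coe = coe_samples(d, winds)
        scores.append(coe.mean() + risk_aversion * coe.std())     # risk-weighted objective
    return designs[int(np.argmin(scores))]

for lam in (0.0, 1.0, 3.0):   # increasing risk aversion favours a steadier design
    print(f"risk aversion {lam}: optimal design parameter {optimal_design(8.0, 2.5, lam):.2f}")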
Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Dykes, Katherine; Graf, Peter
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
Massive Cloud-Based Big Data Processing for Ocean Sensor Networks and Remote Sensing
NASA Astrophysics Data System (ADS)
Schwehr, K. D.
2017-12-01
Until recently, the work required to integrate and analyze data for global-scale environmental issues was prohibitive both in cost and availability. Traditional desktop processing systems are not able to effectively store and process all the data, and supercomputer solutions are financially out of the reach of most people. The availability of large-scale cloud computing has created tools that are usable by small groups and individuals regardless of financial resources or locally available computational resources. These systems give scientists and policymakers the ability to see how critical resources are being used across the globe with little or no barrier to entry. Google Earth Engine has the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra, MODIS Aqua, and Global Land Data Assimilation Systems (GLDAS) data catalogs available live online. Here we use these data to calculate the correlation between lagged chlorophyll and rainfall to identify areas of eutrophication, matching these events to ocean currents from datasets like the HYbrid Coordinate Ocean Model (HYCOM) to check if there are constraints from oceanographic configurations. The system can provide additional ground truth with observations from sensor networks like the International Comprehensive Ocean-Atmosphere Data Set / Voluntary Observing Ship (ICOADS/VOS) and Argo floats. This presentation is intended to introduce users to the datasets, programming idioms, and functionality of Earth Engine for large-scale, data-driven oceanography.
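The lagged-correlation analysis mentioned above can be illustrated with ordinary NumPy on synthetic series (the real workflow runs on Earth Engine's MODIS and GLDAS catalogs; nothing below uses the Earth Engine API):

import numpy as np

def lagged_correlation(rain, chl, max_lag=8):
    # Correlate chlorophyll with rainfall shifted earlier by `lag` time steps and
    # report the lag with the strongest (absolute) correlation.
    best_lag, best_r = 0, 0.0
    for lag in range(max_lag + 1):
        r = np.corrcoef(rain[:len(rain) - lag], chl[lag:])[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 1.0, 200)                               # synthetic rainfall series
chl = 0.6 * np.roll(rain, 3) + 0.4 * rng.normal(size=200)     # responds ~3 steps later
print(lagged_correlation(rain, chl))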
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2017-12-01
Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a “deep description” of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
A General Water Resources Regulation Software System in China
NASA Astrophysics Data System (ADS)
LEI, X.
2017-12-01
To avoid repeated development of core modules for normal and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It provides a customizable and extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model-base and model-computing software released in the form of Cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; and 4) a database that satisfies the business functions and functional requirements of the general water resources regulation software and provides technical support for building basin or regional water resources regulation models.
Reversible simulation of irreversible computation
NASA Astrophysics Data System (ADS)
Li, Ming; Tromp, John; Vitányi, Paul
1998-09-01
Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by among other things generating excess thermic entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
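The pebble-game view of Bennett's reversible simulation can be made concrete with a short script. The rules: node 0 (the input) is always pebbled, and a pebble may be placed on or removed from node i+1 only while node i carries a pebble; the goal is to pebble node 2^k. The recursion below is the textbook Bennett strategy (about 3^k moves, with k+1 pebbles beyond the always-pebbled input), shown only to illustrate the space-time trade-off the abstract analyzes, not the paper's own formulation.

def bennett_pebble(k):
    pebbles = {0}                      # node 0 models the irreversible input
    moves = []
    peak = [1]

    def toggle(node):
        assert node - 1 in pebbles, "illegal move"
        if node in pebbles:
            pebbles.remove(node)
        else:
            pebbles.add(node)
        moves.append(node)
        peak[0] = max(peak[0], len(pebbles))

    def grow(start, level):            # put a pebble on start + 2**level
        if level == 0:
            toggle(start + 1)
            return
        half = 2 ** (level - 1)
        grow(start, level - 1)
        grow(start + half, level - 1)
        shrink(start, level - 1)       # reversibly clear the midpoint pebble

    def shrink(start, level):          # exact reverse of grow(start, level)
        if level == 0:
            toggle(start + 1)
            return
        half = 2 ** (level - 1)
        grow(start, level - 1)
        shrink(start + half, level - 1)
        shrink(start, level - 1)

    grow(0, k)
    return len(moves), peak[0], sorted(pebbles)

print(bennett_pebble(4))   # (number of moves, peak pebbles, final pebble positions)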
NASA Astrophysics Data System (ADS)
Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki
At present, when using space-time processing techniques with multiple antennas for mobile radio communication, real-time weight adaptation is necessary. Due to the progress of integrated circuit technology, dedicated processor implementation with ASIC or FPGA can be employed to implement various wireless applications. This paper presents a resource and performance evaluation of the QRD-RLS systolic array processor based on fixed-point CORDIC algorithm with FPGA. In this paper, to save hardware resources, we propose the shared architecture of a complex CORDIC processor. The required precision of internal calculation, the circuit area for the number of antenna elements and wordlength, and the processing speed will be evaluated. The resource estimation provides a possible processor configuration with a current FPGA on the market. Computer simulations assuming a fading channel will show a fast convergence property with a finite number of training symbols. The proposed architecture has also been implemented and its operation was verified by beamforming evaluation through a radio propagation experiment.
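For readers unfamiliar with the arithmetic being mapped to the FPGA, the following Python sketch shows a fixed-point CORDIC rotation, the kind of shift-and-add primitive a QRD-RLS systolic cell relies on; the word length and iteration count are illustrative choices, not the values evaluated in the paper.

import math

def cordic_rotate(x, y, angle, iterations=16, frac_bits=14):
    # Fixed-point CORDIC in rotation mode: rotates (x, y) by `angle` radians
    # using only shifts and adds; the constant gain K is divided out at the end.
    scale = 1 << frac_bits
    X, Y = int(x * scale), int(y * scale)
    Z = int(angle * scale)
    atan_table = [int(math.atan(2.0 ** -i) * scale) for i in range(iterations)]
    for i in range(iterations):
        d = 1 if Z >= 0 else -1
        X, Y, Z = (X - d * (Y >> i),
                   Y + d * (X >> i),
                   Z - d * atan_table[i])
    K = 1.0
    for i in range(iterations):
        K *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return (X / scale / K, Y / scale / K)

print(cordic_rotate(1.0, 0.0, math.pi / 3))   # ~ (cos 60 deg, sin 60 deg) = (0.5, 0.866)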
Elastic Extension of a CMS Computing Centre Resources on External Clouds
NASA Astrophysics Data System (ADS)
Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.
2016-10-01
After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.
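The elastic behaviour described above can be summarised by a simple control loop: poll the batch backlog, compare it with the currently running cloud workers, and provision or delete nodes accordingly. The sketch below is hypothetical; the polling and provisioning functions are placeholders, not the actual LSF or OpenStack interfaces used at the Bologna Tier-3.

def pending_jobs():            # placeholder: query the batch system backlog
    return 120

def running_cloud_nodes():     # placeholder: query the cloud tenant
    return 4

def provision_nodes(n):
    print(f"instantiating {n} cloud worker nodes (they then register with the batch system)")

def delete_nodes(n):
    print(f"deleting {n} idle cloud worker nodes")

def elastic_cycle(jobs_per_node=8, max_nodes=50, low_water=10):
    backlog = pending_jobs()
    nodes = running_cloud_nodes()
    wanted = 0 if backlog <= low_water else min(max_nodes, -(-backlog // jobs_per_node))
    if wanted > nodes:
        provision_nodes(wanted - nodes)
    elif wanted < nodes:
        delete_nodes(nodes - wanted)

if __name__ == "__main__":
    elastic_cycle()            # in production this would run periodically, e.g. every few minutes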
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidelberg, S T; Fitzgerald, K J; Richmond, G H
2006-01-24
There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the clients (OSSes) and Meta-data Servers (MDS) were all directly connected to the cluster's internal high speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency "portals router" code by CFS (the company that develops Lustre) to enable us to move the Lustre servers to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this "Lustre cluster" is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as "gateway" or "portal router" nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more "centralized" Lustre filesystems, and then arranging to have several "client" clusters mount these centralized filesystems. The "client clusters" can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.
NASA Astrophysics Data System (ADS)
Brcka, Jozef
2016-07-01
A multi inductively coupled plasma (ICP) system can be used to maintain the plasma uniformity and increase the area processed by a high-density plasma. This article presents a source in two different configurations. The distributed planar multi ICP (DM-ICP) source comprises individual ICP sources that are not overlapped and produce plasma independently. Mutual coupling of the ICPs may affect the distribution of the produced plasma. The integrated multicoil ICP (IMC-ICP) source consists of four low-inductance ICP antennas that are superimposed in an azimuthal manner. The identical geometry of the ICP coils was assumed in this work. Both configurations have highly asymmetric components. A three-dimensional (3D) plasma model of the multicoil ICP configurations with asymmetric features is used to investigate the plasma characteristics in a large chamber and the operation of the sources in inert and reactive gases. The feasibility of the computational calculation, the speed, and the computational resources of the coupled multiphysics solver are investigated in the framework of a large realistic geometry and complex reaction processes. It was determined that additional variables can be used to control large-area plasmas. Both configurations can form a plasma, that azimuthally moves in a controlled manner, the so-called “sweeping mode” (SM) or “polyphase mode” (PPM), and thus they have the potential for large-area and high-density plasma applications. The operation in the azimuthal mode has the potential to adjust the plasma distribution, the reaction chemistry, and increase or modulate the production of the radicals. The intrinsic asymmetry of the individual coils and their combined operation were investigated within a source assembly primarily in argon and CO gases. Limited investigations were also performed on operation in CH4 gas. The plasma parameters and the resulting chemistry are affected by the geometrical relation between individual antennas. The aim of this work is to incorporate the technological, computational, dimensional scaling, and reaction chemistry aspects of the plasma under one computational framework. The 3D simulation is utilized to geometrically scale up the reactive plasma that is produced by multiple ICP sources.
JIP: Java image processing on the Internet
NASA Astrophysics Data System (ADS)
Wang, Dongyan; Lin, Bo; Zhang, Jun
1998-12-01
In this paper, we present JIP - Java Image Processing on the Internet, a new Internet based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using an URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.
Utilization of Virtual Server Technology in Mission Operations
NASA Technical Reports Server (NTRS)
Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.
2010-01-01
Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
Virtualization in the Operations Environments
NASA Technical Reports Server (NTRS)
Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert
2010-01-01
Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
In parallel with Sandia National Laboratories having two major locations (NM and CA), along with a number of smaller facilities across the nation, so too is the distribution of scientific, engineering and computing resources. As a part of Sandia’s Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure, as machines grow larger, more complex and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.
NASA Technical Reports Server (NTRS)
Prabhu, Ramadas K.; Sutton, Kenneth (Technical Monitor)
2001-01-01
This report documents the results of a study conducted to compute the inviscid longitudinal aerodynamic characteristics of three aeroshell configurations of the proposed '07 Mars lander. This was done in support of the activity to design a smart lander for the proposed '07 Mars mission. In addition to the three configurations with tabs designated as the shelf, the canted, and the Ames, the baseline configuration (without tab) was also studied. The unstructured grid inviscid CFD software FELISA was used, and the longitudinal aerodynamic characteristics of the four configurations were computed for Mach number of 2.3, 2.7, 3.5, and 4.5, and for an angle of attack range of -4 to 20 degrees. Wind tunnel tests had been conducted on scale models of these four configurations in the Unitary Plan Wind Tunnel, NASA Langley Research Center. Present computational results are compared with the data from these tests. Some differences are noticed between the two results, particularly at the lower Mach numbers. These differences are attributed to the pressures acting on the aft body. Most of the present computations were done on the forebody only. Additional computations were done on the full body (forebody and afterbody) for the baseline and the Shelf configurations. Results of some computations done (to simulate flight conditions) with the Mars gas option and with an effective gamma are also included.
An ontology-based semantic configuration approach to constructing Data as a Service for enterprises
NASA Astrophysics Data System (ADS)
Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi
2016-03-01
To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.
Pyglidein - A Simple HTCondor Glidein Service
NASA Astrophysics Data System (ADS)
Schultz, D.; Riedel, B.; Merino, G.
2017-10-01
A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
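A hedged sketch of the client side of this scheme: fetch the advertised demand (idle job counts and their resource requests) and turn each entry into a glidein submission. The endpoint, JSON fields and submit-file contents below are illustrative assumptions only; the real client is the pyglidein package maintained by IceCube.

import json
import urllib.request

def fetch_demand(url):
    # In the real system the submit script queries the advertising server; the
    # URL and JSON layout here are assumptions for illustration.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def glidein_submit_text(req, cap=100):
    # Minimal HTCondor-style submit description for one class of glideins.
    gpu_line = f"request_gpus = {req['gpus']}\n" if req.get("gpus") else ""
    return ("executable = glidein_start.sh\n"
            f"request_cpus = {req['cpus']}\n"
            f"request_memory = {req['memory']}\n"
            f"{gpu_line}"
            f"queue {min(req['count'], cap)}\n")

# Stand-in for a server response: idle jobs grouped by their resource requests.
sample_demand = [{"count": 40, "cpus": 1, "memory": 4000, "gpus": 0},
                 {"count": 6, "cpus": 1, "memory": 8000, "gpus": 1}]
for req in sample_demand:
    print(glidein_submit_text(req))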
Generic algorithms for high performance scalable geocomputing
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and results in a better maintainable model. For flexibility and efficiency, the algorithms are configurable at compile-time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary.
References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010. A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502 (Best Paper Award 2010: Software and Decision Support). Van Beek, L.P.H., Wada, Y. and Bierkens, M.F.P., 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research, 47. Verstegen, J.A., Karssenberg, D., van der Hilst, F. and Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software, 53, pp. 121-136.
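As a language-neutral illustration of the kind of configurable, no-data-aware local/spatial operation the library provides (Fern itself is C++ with compile-time policies), the following NumPy sketch implements a focal mean that ignores no-data cells; the runtime no-data parameter here merely stands in for the compile-time configuration described above.

import numpy as np

def focal_mean(values, nodata=None, size=3):
    # No-data-aware focal mean over a square window: cells equal to `nodata`
    # are excluded from the average (illustrative Python, not the C++ library).
    pad = size // 2
    v = values.astype(float)
    mask = np.ones_like(v) if nodata is None else (v != nodata).astype(float)
    v = np.where(mask > 0, v, 0.0)
    vp = np.pad(v, pad)
    mp = np.pad(mask, pad)
    total = np.zeros_like(v)
    count = np.zeros_like(v)
    rows, cols = v.shape
    for dr in range(size):
        for dc in range(size):
            total += vp[dr:dr + rows, dc:dc + cols]
            count += mp[dr:dr + rows, dc:dc + cols]
    return np.divide(total, count, out=np.full_like(total, np.nan), where=count > 0)

dem = np.array([[1, 2, 3], [4, -999, 6], [7, 8, 9]], dtype=float)
print(focal_mean(dem, nodata=-999))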
NASA Technical Reports Server (NTRS)
Chan, William M.
1995-01-01
Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.
Experience on HTCondor batch system for HEP and other research fields at KISTI-GSDC
NASA Astrophysics Data System (ADS)
Ahn, S. U.; Jaikar, A.; Kong, B.; Yeo, I.; Bae, S.; Kim, J.
2017-10-01
The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI), located in Daejeon, South Korea, is the only datacenter in the country that supports, with its computing resources, fundamental research fields dealing with large-scale data. For historical reasons it has run the Torque batch system, while it has recently started running HTCondor for new systems. Having different kinds of batch systems implies inefficiency in terms of resource management and utilization. We conducted research on resource management with HTCondor for several user scenarios corresponding to the user environments that GSDC currently supports. A recent study of the resource usage patterns at GSDC is taken into account to build the possible user scenarios. The checkpointing and Super-Collector model of HTCondor give us a more efficient and flexible way to manage resources, and the Grid Gate provided by HTCondor helps to interface with the Grid environment. In this paper, an overview of the essential features of HTCondor exploited in this work is given and practical examples of HTCondor cluster configuration in our cases are presented.
Vandergoot, Christopher S.; Kocovsky, Patrick M.; Brenden, Travis O.; Liu, Weihai
2011-01-01
We used length frequencies of captured walleyes Sander vitreus to indirectly estimate and compare selectivity between two experimental gill-net configurations used to sample fish in Lake Erie: (1) a multifilament configuration currently used by the Ohio Department of Natural Resources (ODNR) with stretched-measure mesh sizes ranging from 51 to 127 mm and a constant filament diameter (0.37 mm); and (2) a monofilament configuration with mesh sizes ranging from 38 to 178 mm and varying filament diameter (range = 0.20–0.33 mm). Paired sampling with the two configurations revealed that the catch of walleyes smaller than 250 mm and larger than 600 mm was greater in the monofilament configuration than in the multifilament configuration, but the catch of 250–600-mm fish was greater in the multifilament configuration. Binormal selectivity functions yielded the best fit to observed walleye catches for both gill-net configurations based on model deviances. Incorporation of deviation terms in the binormal selectivity functions (i.e., to relax the assumption of geometric similarity) further improved the fit to observed catches. The final fitted selectivity functions produced results similar to those from the length-based catch comparisons: the monofilament configuration had greater selectivity for small and large walleyes and the multifilament configuration had greater selectivity for mid-sized walleyes. Computer simulations that incorporated the fitted binormal selectivity functions indicated that both nets were likely to result in some bias in age composition estimates and that the degree of bias would ultimately be determined by the underlying condition, mortality rate, and growth rate of the Lake Erie walleye population. Before the ODNR switches its survey gear, additional comparisons of the different gill-net configurations, such as fishing the net pairs across a greater range of depths and at more locations in the lake, should be conducted to maintain congruence in the fishery-independent survey time series.
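For readers unfamiliar with the functional form, a binormal (dome-shaped) selectivity curve can be sketched as two half-normal branches meeting at a peak length; the parameter values below are invented for illustration and are not the estimates fitted in this study.

import numpy as np

def binormal_selectivity(length, peak, sigma_left, sigma_right):
    # Dome-shaped selectivity built from two half-normal branches that meet at
    # the peak length; illustrative parameterisation only.
    length = np.asarray(length, dtype=float)
    sigma = np.where(length <= peak, sigma_left, sigma_right)
    return np.exp(-0.5 * ((length - peak) / sigma) ** 2)

lengths = np.arange(200, 701, 50)                         # total length, mm
sel = binormal_selectivity(lengths, peak=400, sigma_left=75, sigma_right=150)
for L, s in zip(lengths, sel):
    print(f"{L:4d} mm   relative selectivity {s:.2f}")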
Unidata cyberinfrastructure in the cloud: A progress report
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan
2016-04-01
Data services, software, and committed support are critical components of geosciences cyber-infrastructure that can help scientists address problems of unprecedented complexity, scale, and scope. Unidata is currently working on innovative ideas, new paradigms, and novel techniques to complement and extend its offerings. Our goal is to empower users so that they can tackle major, heretofore difficult problems. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. To realize the above vision, Unidata is working toward: * Providing access to many types of data from a cloud (e.g., TDS, RAMADDA and EDEX); * Deploying data-proximate tools to easily process, analyze and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker containers for its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Fostering partnerships with NOAA and public cloud vendors (e.g., Amazon) to harness their capabilities and resources for the benefit of the academic community.
Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Christopher H.; Long, Hai; Sides, Scott
2015-10-15
Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limits effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance to procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once, than fast things in order.
Design and implementation of a Windows NT network to support CNC activities
NASA Technical Reports Server (NTRS)
Shearrow, C. A.
1996-01-01
The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices current with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) abilities. The development of resource tracking is underway in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's in relationship to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study of a CIM application capable of completing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes discovery of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division, including the software, hardware, network, and mode of operation. The proposed direction of research included a network design, the computers to be used, the software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standardization (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
System capacity and economic modeling computer tool for satellite mobile communications systems
NASA Technical Reports Server (NTRS)
Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.
1988-01-01
A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
Reference Solutions for Benchmark Turbulent Flows in Three Dimensions
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.
2016-01-01
A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration, and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model, using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal
2014-01-01
This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938
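As a rough illustration of how a single range-only measurement enters the filter described above, the sketch below performs one EKF update on a state containing the robot position and beacon positions; the state layout and noise level are assumptions for illustration, not the authors' PF-EKF implementation.

```python
import numpy as np

def range_only_ekf_update(x, P, z, beacon_idx, sigma_r=0.1):
    """One EKF update with a robot-to-beacon range measurement.

    x : state [xr, yr, xb1, yb1, xb2, yb2, ...];  P : covariance;
    z : measured range to beacon `beacon_idx`.  Values are illustrative.
    """
    i = 2 + 2 * beacon_idx                  # index of the beacon's x coordinate
    d = x[:2] - x[i:i + 2]                  # robot-to-beacon offset
    r = np.linalg.norm(d)                   # predicted range
    H = np.zeros((1, x.size))
    H[0, :2] = d / r                        # derivative w.r.t. robot position
    H[0, i:i + 2] = -d / r                  # derivative w.r.t. beacon position
    S = H @ P @ H.T + sigma_r ** 2          # innovation covariance (1x1)
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (z - r)).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```

Inter-beacon ranges would be handled analogously, with the Jacobian row referencing two beacon blocks instead of the robot block.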
Design and Computational/Experimental Analysis of Low Sonic Boom Configurations
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.
1999-01-01
Recent studies have shown that inviscid CFD codes combined with a planar extrapolation method give accurate sonic boom pressure signatures at distances greater than one body length from supersonic configurations if either adapted grids swept at the approximate Mach angle or very dense non-adapted grids are used. The validation of CFD for computing sonic boom pressure signatures provided the confidence needed to undertake the design of new supersonic transport configurations with low sonic boom characteristics. An aircraft synthesis code in combination with CFD and an extrapolation method was used to close the design. The principal configuration of this study is designated LBWT (Low Boom Wing Tail); it has a highly swept cranked arrow wing with conventional tails and was designed to accommodate either 3 or 4 engines. The complete configuration including nacelles and boundary layer diverters was evaluated using the AIRPLANE code. This computer program solves the Euler equations on an unstructured tetrahedral mesh. Computations and wind tunnel data for the LBWT and two other low boom configurations designed at NASA Ames Research Center are presented. The two additional configurations are included to provide a basis for comparing the performance and sonic boom level of the LBWT with contemporary low boom designs and to give a broader experiment/CFD correlation study. The computational pressure signatures for the three configurations are contrasted with on-ground-track near-field experimental data from the NASA Ames 9x7 Foot Supersonic Wind Tunnel. Computed pressure signatures for the LBWT are also compared with experiment at approximately 15 degrees off ground track.
The Radiology Resident iPad Toolbox: an educational and clinical tool for radiology residents.
Sharpe, Emerson E; Kendrick, Michael; Strickland, Colin; Dodd, Gerald D
2013-07-01
Tablet computing and mobile resources are the hot topics in technology today, with that interest spilling into the medical field. To improve resident education, a fully configured iPad, referred to as the "Radiology Resident iPad Toolbox," was created and implemented at the University of Colorado. The goal was to create a portable device with comprehensive educational, clinical, and communication tools that would contain all necessary resources for an entire 4-year radiology residency. The device was distributed to a total of 34 radiology residents (8 first-year residents, 8 second-year residents, 9 third-year residents, and 9 fourth-year residents). This article describes the process used to develop and deploy the device, provides a distillation of useful applications and resources decided upon after extensive evaluation, and assesses the impact this device had on resident education. The Radiology Resident iPad Toolbox is a cost-effective, portable, educational instrument that has increased studying efficiency; improved access to study materials such as books, radiology cases, lectures, and web-based resources; and increased interactivity in educational conferences and lectures through the use of audience-response software, with questions geared toward the new ABR board format. This preconfigured tablet fully embraces the technology shift into mobile computing and represents a paradigm shift in educational strategy. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Continuous Security and Configuration Monitoring of HPC Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.
Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC Clusters. In conjunction with other configuration management systems, the reporting tool provides continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.
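A toy version of the collection step can make the idea concrete: gather a few node settings into structured events that an indexer such as Splunk can pick up from disk. The particular settings, commands, and output path are assumptions, not the site's actual agent.

```python
import json
import platform
import subprocess
import time

def run_cmd(cmd):
    """Return a command's stdout, or 'unknown' if the command is unavailable."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True)
        return out.stdout.strip() or "unknown"
    except FileNotFoundError:
        return "unknown"

def collect_node_state():
    """Gather a handful of illustrative configuration facts about this node."""
    return {
        "time": time.time(),
        "host": platform.node(),
        "kernel": platform.release(),
        "selinux": run_cmd(["getenforce"]),                               # example compliance setting
        "ntp": run_cmd(["timedatectl", "show", "-p", "NTPSynchronized"]), # example compliance setting
    }

# Append one JSON event per run; a forwarder can monitor this file for indexing.
with open("config_agent.jsonl", "a") as fh:
    fh.write(json.dumps(collect_node_state()) + "\n")
```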
Discrete-event computer simulation methods in the optimisation of a physiotherapy clinic.
Villamizar, J R; Coelli, F C; Pereira, W C A; Almeida, R M V R
2011-03-01
To develop a computer model to analyse the performance of a standard physiotherapy clinic in the city of Rio de Janeiro, Brazil. The clinic receives an average of 80 patients/day and offers 10 treatment modalities. Details of patient procedures and treatment routines were obtained from direct interviews with clinic staff. Additional data (e.g. arrival time, treatment duration, length of stay) were obtained for 2000 patients from the clinic's computerised records from November 2005 to February 2006. A discrete-event model was used to simulate the clinic's operational routine. The initial model was built to reproduce the actual configuration of the clinic, and five simulation strategies were subsequently implemented, representing changes in the number of patients, human resources of the clinic and the scheduling of patient arrivals. Findings indicated that the actual clinic configuration could accept up to 89 patients/day, with an average length of stay of 119 minutes and an average patient waiting time of 3 minutes. When the scheduling of patient arrivals was increased to an interval of 6.5 minutes, maximum attendance increased to 114 patients/day. For the actual clinic configuration, optimal staffing consisted of three physiotherapists and 12 students. According to the simulation, the same 89 patients could be attended when the infrastructure was decreased to five kinesiotherapy rooms, two cardiotherapy rooms and three global postural reeducation rooms. The model was able to evaluate the capacity of the actual clinic configuration, and additional simulation strategies indicated how the operation of the clinic depended on the main study variables. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
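The flavor of such a discrete-event model can be conveyed with a few lines of SimPy; the arrival interval, treatment time, and staffing level below are placeholder assumptions rather than the study's calibrated parameters.

```python
import random
import simpy

ARRIVAL_GAP = 6.5      # minutes between scheduled arrivals (assumed)
TREATMENT_TIME = 40.0  # mean treatment duration in minutes (assumed)
N_THERAPISTS = 3       # staffing level being tested (assumed)

waits = []

def patient(env, staff):
    arrived = env.now
    with staff.request() as req:            # queue for a free physiotherapist
        yield req
        waits.append(env.now - arrived)     # record the waiting time
        yield env.timeout(random.expovariate(1.0 / TREATMENT_TIME))

def arrivals(env, staff):
    while True:
        env.process(patient(env, staff))
        yield env.timeout(ARRIVAL_GAP)      # scheduled arrival interval

env = simpy.Environment()
staff = simpy.Resource(env, capacity=N_THERAPISTS)
env.process(arrivals(env, staff))
env.run(until=8 * 60)                       # one 8-hour working day
print(f"patients started: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")
```

Varying the staffing level, arrival interval, or the number of treatment rooms in such a model is what allows simulation strategies like those above to be compared before changing the real clinic.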
DOE Office of Scientific and Technical Information (OSTI.GOV)
1996-05-01
The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown to be an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. The following are the types of functions performed by NWIS for these two entities. For people, NWIS: provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. For computing devices, NWIS: generates unique names for all computing devices registered in the system; tracks the following information for each computing device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access only to properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows system administrators to track changes made on the machine (both hardware and software); and generates an adjustment history of changes on selected fields.
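The device-registry-to-configuration step can be pictured with a small sketch that renders registered devices into ISC-dhcpd-style host entries; the field names, example addresses, and output file are assumptions for illustration, not NWIS internals.

```python
# Illustrative only: render a device registry into dhcpd-style host blocks.
devices = [
    {"name": "ws-0001", "mac": "00:11:22:33:44:55", "ip": "192.0.2.10"},
    {"name": "ws-0002", "mac": "00:11:22:33:44:66", "ip": "192.0.2.11"},
]

def dhcp_host_block(dev):
    """Return one 'host' stanza restricting a lease to a registered device."""
    return (
        f'host {dev["name"]} {{\n'
        f'  hardware ethernet {dev["mac"]};\n'
        f'  fixed-address {dev["ip"]};\n'
        f'}}'
    )

with open("dhcpd.hosts.conf", "w") as fh:
    fh.write("\n".join(dhcp_host_block(d) for d in devices) + "\n")
```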
Assessment of CFD Estimation of Aerodynamic Characteristics of Basic Reusable Rocket Configurations
NASA Astrophysics Data System (ADS)
Fujimoto, Keiichiro; Fujii, Kozo
Flow-fields around the basic SSTO-rocket configurations are numerically simulated by Reynolds-averaged Navier-Stokes (RANS) computations. Simulations of the Apollo-like configuration are first carried out, where the results are compared with NASA experiments and the prediction ability of the RANS simulation is discussed. The angle of attack of the freestream ranges from 0° to 180° and the freestream Mach number ranges from 0.7 to 2.0. Computed aerodynamic coefficients for the Apollo-like configuration agree well with the experiments under a wide range of flow conditions. The flow simulations around the slender Apollo-type configuration are carried out next and the results are compared with the experiments. Computed aerodynamic coefficients also agree well with the experiments. Flow-fields are dominated by three-dimensional massively separated flow, which should be captured for accurate aerodynamic prediction. Grid refinement effects on the computed aerodynamic coefficients are investigated comprehensively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, W.; Green, J.
2001-01-01
The purpose of this research was to determine the optimal configuration of home power systems relevant to different regions in the United States. The hypothesis was that, regardless of region, the optimal system would be a hybrid incorporating wind technology, versus a photovoltaic hybrid system without the use of wind technology. The method used in this research was HOMER, the Hybrid Optimization Model for Electric Renewables. HOMER is a computer program that optimizes electrical configurations under user-defined circumstances. According to HOMER, the optimal system for the four regions studied (Kansas, Massachusetts, Oregon, and Arizona) was a hybrid incorporating wind technology. The cost differences between these regions, however, were dependent upon regional renewable resources. Future studies will be necessary, as it is difficult to estimate meteorological impacts for other regions.
ESL Students' Computer-Mediated Communication Practices: Context Configuration
ERIC Educational Resources Information Center
Shin, Dong-Shin
2006-01-01
This paper examines how context is configured in ESL students' language learning practices through computer-mediated communication (CMC). Specifically, I focus on how a group of ESL students jointly constructed the context of their CMC activities through interactional patterns and norms, and how configured affordances within the CMC environment…
Reconfigurable environmentally adaptive computing
NASA Technical Reports Server (NTRS)
Coxe, Robin L. (Inventor); Galica, Gary E. (Inventor)
2008-01-01
Described are methods and apparatus, including computer program products, for reconfigurable environmentally adaptive computing technology. An environmental signal representative of an external environmental condition is received. A processing configuration is automatically selected, based on the environmental signal, from a plurality of processing configurations. A reconfigurable processing element is reconfigured to operate according to the selected processing configuration. In some examples, the environmental condition is detected and the environmental signal is generated based on the detected condition.
An Inviscid Computational Study of the Space Shuttle Orbiter and Several Damaged Configurations
NASA Technical Reports Server (NTRS)
Prabhu, Ramadas K.; Merski, N. Ronald (Technical Monitor)
2004-01-01
Inviscid aerodynamic characteristics of the Space Shuttle Orbiter were computed in support of the Columbia Accident Investigation. The unstructured grid software FELISA was used and computations were done using freestream conditions corresponding to those in the NASA Langley 20-Inch Mach 6 CF4 tunnel test section. The angle of attack was held constant at 40 degrees. The baseline (undamaged) configuration and a large number of damaged configurations of the Orbiter were studied. Most of the computations were done on a half model. However, one set of computations was done using the full model to study the effect of sideslip. The differences in the aerodynamic coefficients for the damaged and the baseline configurations were computed. Simultaneously with the computations reported here, tests were being done on a scale model of the Orbiter in the 20-Inch Mach 6 CF4 tunnel to measure the deltas. The present computations complemented the CF4 tunnel test, and provided aerodynamic coefficients of the Orbiter as well as its components. Further, they also provided details of the flow field.
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobson, D; Churby, A; Krieger, E
2011-07-25
The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a hand-held mobile device would make it possible to bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without consuming too many system resources or requiring long computation times and battery use on the end-point device. Cloud environments were designed to address these constraints by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a hand-held device built around a smartphone captures a multi-spectral dataset in a movie file format (mp4), and we compare it to other image formats in terms of size, noise and correctness. We present the cloud configuration used for segmenting images into frames so that they can later be used for further analysis.
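The frame-segmentation step mentioned above can be sketched with OpenCV, splitting the mp4 capture into individual frames before they are pushed to cloud storage for analysis; the file names and sampling stride are assumptions.

```python
import cv2  # OpenCV (pip install opencv-python)

def extract_frames(video_path, stride=1):
    """Yield every `stride`-th frame of an mp4 capture as a NumPy array."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()              # decode the next frame
        if not ok:
            break
        if index % stride == 0:
            yield index, frame
        index += 1
    cap.release()

# Example: write frames locally before uploading them to cloud storage.
for i, frame in extract_frames("multispectral_capture.mp4", stride=5):
    cv2.imwrite(f"frame_{i:04d}.png", frame)
```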
Automated Boundary Conditions for Wind Tunnel Simulations
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2018-01-01
Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure both the proper characterization of the physical geometry of the wind tunnel and the recreation of the correct flow conditions inside the wind tunnel. The typical trial-and-error effort used for determining the boundary condition values for a particular tunnel configuration is time- and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
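A minimal sketch of such a controller is shown below; the gains, target Mach number, and the sign convention linking back pressure to Mach number are placeholder assumptions, not the paper's tuned values.

```python
class BackPressurePID:
    """Drive an outflow back-pressure boundary condition toward a target Mach number."""

    def __init__(self, kp, ki, kd, target_mach):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_mach
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured_mach, dt):
        error = self.target - measured_mach
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Mach too low -> lower the back pressure; the sign convention is an assumption.
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Apply the correction between solver iterations (all values are illustrative).
pid = BackPressurePID(kp=5.0e3, ki=1.0e2, kd=0.0, target_mach=0.2)
back_pressure = 101325.0  # Pa
for measured_mach in (0.15, 0.17, 0.19, 0.20):   # stand-in for solver feedback
    back_pressure += pid.update(measured_mach, dt=1.0)
    print(f"Mach {measured_mach:.2f} -> back pressure {back_pressure:.0f} Pa")
```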
Computer-Drawn Field Lines and Potential Surfaces for a Wide Range of Field Configurations
ERIC Educational Resources Information Center
Brandt, Siegmund; Schneider, Hermann
1976-01-01
Describes a computer program that computes field lines and equipotential surfaces for a wide range of field configurations. Presents the mathematical technique and details of the program, the input data, and different modes of graphical representation. (MLH)
System and method of designing a load bearing layer of an inflatable vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary R. (Inventor)
2007-01-01
A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
BioVLAB-MMIA: a cloud environment for microRNA and mRNA integrated analysis (MMIA) on Amazon EC2.
Lee, Hyungro; Yang, Youngik; Chae, Heejoon; Nam, Seungyoon; Choi, Donghoon; Tangchaisin, Patanachai; Herath, Chathura; Marru, Suresh; Nephew, Kenneth P; Kim, Sun
2012-09-01
MicroRNAs, by regulating the expression of hundreds of target genes, play critical roles in developmental biology and the etiology of numerous diseases, including cancer. As a vast amount of microRNA expression profile data are now publicly available, the integration of microRNA expression data sets with gene expression profiles is a key problem in life science research. However, the ability to conduct genome-wide microRNA-mRNA (gene) integration currently requires sophisticated, high-end informatics tools and significant expertise in bioinformatics and computer science to carry out the complex integration analysis. In addition, increased computing infrastructure capabilities are essential in order to accommodate large data sets. In this study, we have extended the BioVLAB cloud workbench to develop an environment for the integrated analysis of microRNA and mRNA expression data, named BioVLAB-MMIA. The workbench facilitates computations on the Amazon EC2 and S3 resources orchestrated by the XBaya Workflow Suite. The advantages of BioVLAB-MMIA over the web-based MMIA system include: 1) it is readily expanded as new computational tools become available; 2) it is easily modifiable by re-configuring graphic icons in the workflow; 3) on-demand cloud computing resources can be used on an "as needed" basis; and 4) distributed orchestration supports complex and long-running workflows asynchronously. We believe that BioVLAB-MMIA will be an easy-to-use computing environment for researchers who plan to perform genome-wide microRNA-mRNA (gene) integrated analysis tasks.
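The on-demand provisioning that underlies such a workbench can be pictured with the AWS SDK for Python; the AMI ID, instance type, and key name below are placeholders, and the actual BioVLAB-MMIA runs are orchestrated by the XBaya Workflow Suite rather than by direct calls like this.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values; a real deployment would reference a prepared analysis AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical image with the analysis stack
    InstanceType="m5.xlarge",
    KeyName="biovlab-demo-key",        # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched analysis node:", instance_id)
```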
Block-structured grids for complex aerodynamic configurations: Current status
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Sanetrik, Mark D.; Parlette, Edward B.
1995-01-01
The status of CFD methods based on the use of block-structured grids for analyzing viscous flows over complex configurations is examined. The objective of the present study is to make a realistic assessment of the usability of such grids for routine computations typically encountered in the aerospace industry. It is recognized at the very outset that the total turnaround time, from the moment the configuration is identified until the computational results have been obtained and postprocessed, is more important than just the computational time. Pertinent examples will be cited to demonstrate the feasibility of solving flow over practical configurations of current interest on block-structured grids.
"One-Stop Shopping" for Ocean Remote-Sensing and Model Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook
2006-01-01
OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as MySQL database, Java Web Server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with Java Applet at the client site and MatLab/GMT at the server site, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphic software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server.
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
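A compact sketch of the surrogate-plus-optimizer idea: fit a small neural network to precomputed lift data, then maximize the predicted lift over the rigging parameters. The training data, network, and bounds below are synthetic placeholders and do not reproduce the study's CFD data set or its Levenberg-Marquardt training.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a CFD-generated data set:
# columns = (flap deflection, gap, overlap, angle of attack); target = lift coefficient.
X = rng.uniform([10, 0.01, 0.00, 4], [40, 0.04, 0.03, 16], size=(200, 4))
cl = 1.5 + 0.02 * X[:, 3] - 0.001 * (X[:, 0] - 30) ** 2 + rng.normal(0, 0.01, 200)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X, cl)

# Maximize predicted lift (minimize its negative) within the parameter bounds.
bounds = [(10, 40), (0.01, 0.04), (0.00, 0.03), (4, 16)]
result = minimize(lambda p: -surrogate.predict(p.reshape(1, -1))[0],
                  x0=np.array([25.0, 0.02, 0.01, 10.0]), bounds=bounds)
print("optimal rigging (flap, gap, overlap, alpha):", result.x)
```

Once the surrogate is trained, each optimization run costs only a fraction of a CFD evaluation, which is the source of the computational savings reported above.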
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With its maturity and significant advances, cloud computing has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, by any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker containers for its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect data servers, Python scientific libraries, scripts, and workflows interactively; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
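As an example of the data-proximate access pattern, the sketch below queries a THREDDS Data Server catalog with Unidata's Siphon library; the catalog URL is a placeholder for whatever cloud-hosted TDS instance is in use.

```python
from siphon.catalog import TDSCatalog

# Placeholder catalog URL; point this at the cloud-hosted TDS of interest.
catalog = TDSCatalog("https://tds.example.edu/thredds/catalog/forecasts/catalog.xml")

# List the datasets the server advertises and pick the first one.
print(list(catalog.datasets))
dataset = catalog.datasets[0]

# Each dataset exposes its service endpoints (OPeNDAP, HTTPServer, and so on),
# which notebooks running next to the data (e.g., under Jupyter) can open directly.
print(dataset.access_urls)
```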
Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits
NASA Astrophysics Data System (ADS)
Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Aftab Khan, F.; Larson, K.; Letts, J.; Marra da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.
2017-10-01
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.
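A minimal sketch of how a job might be handed to an HTCondor pool from Python is shown below (using the newer Submit/Schedd interface of the bindings); the executable, arguments, and resource request are placeholders, and real glideinWMS pilots are submitted by the factory infrastructure rather than by a script like this.

```python
import htcondor  # HTCondor Python bindings

# Placeholder job description; a real pilot carries the glidein payload.
submit = htcondor.Submit({
    "executable": "run_payload.sh",      # hypothetical wrapper script
    "arguments": "--site T2_EXAMPLE",
    "request_cpus": "8",                 # e.g. a multi-core slot
    "output": "pilot.out",
    "error": "pilot.err",
    "log": "pilot.log",
})

schedd = htcondor.Schedd()               # local scheduler daemon
result = schedd.submit(submit, count=1)  # queue one job
print("submitted cluster", result.cluster())
```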
Applying Utility Functions to Adaptation Planning for Home Automation Applications
NASA Astrophysics Data System (ADS)
Bratskas, Pyrros; Paspallis, Nearchos; Kakousis, Konstantinos; Papadopoulos, George A.
A pervasive computing environment typically comprises multiple embedded devices that may interact with each other and with mobile users. These users are part of the environment, and they experience it through a variety of devices embedded in the environment. This perception involves technologies which may be heterogeneous, pervasive, and dynamic. Due to the highly dynamic properties of such environments, the software systems running on them have to face problems such as user mobility, service failures, or resource and goal changes which may happen in an unpredictable manner. To cope with these problems, such systems must be autonomous and self-managed. In this chapter we deal with a special kind of ubiquitous environment, a smart home environment, and introduce a user-preference-based model for adaptation planning. The model, which dynamically forms a set of configuration plans for resources, reasons automatically and autonomously, based on utility functions, about which plan is likely to best achieve the user's goals with respect to resource availability and user needs.
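A bare-bones sketch of the utility-based selection described above: each candidate configuration plan is scored by a weighted utility over the user's preference dimensions, infeasible plans are discarded against current resource availability, and the highest-scoring plan is chosen. The dimensions, weights, and plans here are invented for illustration.

```python
def plan_utility(plan, weights, resources):
    """Score a configuration plan; return -inf if it exceeds available resources."""
    if plan["cpu"] > resources["cpu"] or plan["bandwidth"] > resources["bandwidth"]:
        return float("-inf")             # infeasible under current resources
    return sum(weights[dim] * plan["quality"][dim] for dim in weights)

weights = {"comfort": 0.5, "energy_saving": 0.3, "responsiveness": 0.2}  # user preferences
resources = {"cpu": 0.6, "bandwidth": 0.4}                               # current availability

plans = [
    {"name": "full_automation", "cpu": 0.8, "bandwidth": 0.5,
     "quality": {"comfort": 0.9, "energy_saving": 0.4, "responsiveness": 0.9}},
    {"name": "low_power", "cpu": 0.3, "bandwidth": 0.1,
     "quality": {"comfort": 0.6, "energy_saving": 0.9, "responsiveness": 0.5}},
]

best = max(plans, key=lambda p: plan_utility(p, weights, resources))
print("selected plan:", best["name"])    # "low_power": the other plan exceeds the CPU budget
```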
A Scalable, Out-of-Band Diagnostics Architecture for International Space Station Systems Support
NASA Technical Reports Server (NTRS)
Fletcher, Daryl P.; Alena, Rick; Clancy, Daniel (Technical Monitor)
2002-01-01
The computational infrastructure of the International Space Station (ISS) is a dynamic system that supports multiple vehicle subsystems such as Caution and Warning, Electrical Power Systems and Command and Data Handling (C&DH), as well as scientific payloads of varying size and complexity. The dynamic nature of the ISS configuration coupled with the increased demand for payload support places a significant burden on the inherently resource-constrained computational infrastructure of the ISS. Onboard system diagnostics applications are hosted on computers that are elements of the avionics network while ground-based diagnostic applications receive only a subset of available telemetry, down-linked via S-band communications. In this paper we propose a scalable, out-of-band diagnostics architecture for ISS systems support that uses a read-only connection for C&DH data acquisition, which provides a lower cost of deployment and maintenance (versus a higher criticality read-write connection). The diagnostics processing burden is off-loaded from the avionics network to elements of the on-board LAN that have a lower overall cost of operation and increased computational capacity. A superset of diagnostic data, richer in content than the configured telemetry, is made available to Advanced Diagnostic System (ADS) clients running on wireless handheld devices, affording the crew greater mobility for troubleshooting and providing improved insight into vehicle state. The superset of diagnostic data is made available to the ground in near real-time via an out-of-band downlink, providing a high level of fidelity between vehicle state and test, training and operational facilities on the ground.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siauw, Timmy; Cunha, Adam; Berenson, Dmitry
Purpose: In this study, the authors introduce skew line needle configurations for high dose rate (HDR) brachytherapy and needle planning by integer program (NPIP), a computational method for generating these configurations. NPIP generates needle configurations that are specific to the anatomy of the patient, avoid critical structures near the penile bulb and other healthy structures, and avoid needle collisions inside the body. Methods: NPIP consisted of three major components: a method for generating a set of candidate needles, a needle selection component that chose a candidate needle subset to be inserted, and a dose planner for verifying that the final needle configuration could meet dose objectives. NPIP was used to compute needle configurations for prostate cancer data sets from patients previously treated at our clinic. NPIP took two user parameters: the number of candidate needles and the needle coverage radius, δ. The candidate needle set consisted of 5000 needles, and a range of δ values was used to compute different needle configurations for each patient. Dose plans were computed for each needle configuration. The number of needles generated and dosimetry were analyzed and compared to the physician implant. Results: NPIP computed at least one needle configuration for every patient that met dose objectives, avoided healthy structures and needle collisions, and used as many or fewer needles than standard practice. These needle configurations corresponded to a narrow range of δ values, which could be used as default values if this system is used in practice. The average end-to-end runtime for this implementation of NPIP was 286 s, but there was a wide variation from case to case. Conclusions: The authors have shown that NPIP can automatically generate skew line needle configurations with the aforementioned properties, and that given the correct input parameters, NPIP can generate needle configurations which meet dose objectives and use as many or fewer needles than the current HDR brachytherapy workflow. Combined with robot-assisted brachytherapy, this system has the potential to reduce side effects associated with treatment. A physical trial should be done to test the implant feasibility of NPIP needle configurations.
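The needle-selection step can be pictured as a covering problem: choose as few candidate needles as possible so that every target point lies within the coverage radius δ of some chosen needle. The greedy sketch below is an illustration only; NPIP solves this with an integer program, and the geometry here is simplified to points rather than skew needle trajectories.

```python
import numpy as np

def greedy_needle_selection(targets, candidates, delta):
    """Pick candidate positions until every target is within `delta` of one of them.

    targets, candidates : (N, 3) and (M, 3) arrays of points (simplified geometry).
    """
    dist = np.linalg.norm(targets[:, None, :] - candidates[None, :, :], axis=2)
    covers = [set(np.where(dist[:, j] <= delta)[0]) for j in range(len(candidates))]
    uncovered, chosen = set(range(len(targets))), []
    while uncovered:
        j = max(range(len(candidates)), key=lambda k: len(covers[k] & uncovered))
        if not covers[j] & uncovered:
            break                         # remaining targets cannot be covered
        chosen.append(j)
        uncovered -= covers[j]
    return chosen

rng = np.random.default_rng(1)
targets = rng.uniform(0, 40, size=(60, 3))       # mm; illustrative target volume
candidates = rng.uniform(0, 40, size=(200, 3))   # stand-ins for candidate needles
print("needles selected:", len(greedy_needle_selection(targets, candidates, delta=8.0)))
```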
Provisioning cooling elements for chillerless data centers
Chainer, Timothy J.; Parida, Pritish R.
2016-12-13
Systems and methods for cooling include one or more computing structure, an inter-structure liquid cooling system that includes valves configured to selectively provide liquid coolant to the one or more computing structures; a heat rejection system that includes one or more heat rejection units configured to cool liquid coolant; and one or more liquid-to-liquid heat exchangers that include valves configured to selectively transfer heat from liquid coolant in the inter-structure liquid cooling system to liquid coolant in the heat rejection system. Each computing structure further includes one or more liquid-cooled servers; and an intra-structure liquid cooling system that has valves configured to selectively provide liquid coolant to the one or more liquid-cooled servers.
NASA Technical Reports Server (NTRS)
Craidon, C. B.
1975-01-01
A computer program that uses a three-dimensional geometric technique for fitting a smooth surface to the component parts of an aircraft configuration is presented. The resulting surface equations are useful in performing various kinds of calculations in which a three-dimensional mathematical description is necessary. Program options may be used to compute information for three-view and orthographic projections of the configuration as well as cross-section plots at any orientation through the configuration. The aircraft geometry input section of the program may be easily replaced with a surface point description in a different form so that the program could be of use for any three-dimensional surface equations.
Commissioning the CERN IT Agile Infrastructure with experiment workloads
NASA Astrophysics Data System (ADS)
Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia
2014-06-01
In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
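A small sketch of how a virtual machine might be requested from such an OpenStack-based private cloud is shown below using openstacksdk; the cloud, image, flavor, and network names are placeholders, and the experiments' workload management systems integrate through their own provisioning layers rather than direct calls like this.

```python
import openstack  # openstacksdk

# Placeholder cloud name referring to an entry in clouds.yaml.
conn = openstack.connect(cloud="example-private-cloud")

image = conn.compute.find_image("worker-node-image")    # placeholder image name
flavor = conn.compute.find_flavor("m1.large")           # placeholder flavor
network = conn.network.find_network("experiment-net")   # placeholder network

server = conn.compute.create_server(
    name="pilot-worker-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)           # block until ACTIVE
print(server.name, server.status)
```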
Virtual Labs (Science Gateways) as platforms for Free and Open Source Science
NASA Astrophysics Data System (ADS)
Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey
2016-04-01
The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of the next or the third generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have driven biological and medical sciences to data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which is a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever increasing biological data. As data increases in size, many research organizations start to experience the lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude with a suggestion of a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
NASA Technical Reports Server (NTRS)
Prabhu, Ramadas K.
2001-01-01
This report documents the results of an inviscid computational study conducted on two aeroshell configurations for a proposed '07 Mars Lander. The aeroshell configurations are asymmetric due to the presence of the tabs at the maximum diameter location. The purpose of these tabs was to change the pitching moment characteristics so that the aeroshell will trim at a non-zero angle of attack and produce a lift-to-drag ratio of approximately -0.25. This is required in the guidance of the vehicle on its trajectory. One of the two configurations is called the shelf and the other is called the tab. The unstructured grid software FELISA with the equilibrium Mars gas option was used for these computations. The computations were done for six points on a preliminary trajectory of the '07 Mars Lander at nominal Mach numbers of 2, 3, 5, 10, 15, and 24. Longitudinal aerodynamic characteristics, namely lift, drag, and pitching moment, were computed for 10, 15, and 20 degrees angles of attack. The results indicated that the two configurations have very similar aerodynamic characteristics and provide the desired trim L/D of approximately -0.25.
Aerodynamic Analyses Requiring Advanced Computers, part 2
NASA Technical Reports Server (NTRS)
1975-01-01
Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude, and in some cases greater, than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and (5) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Thrust Augmentation Study of Cross-Flow Fan for Vertical Take-Off and Landing Aircraft
2012-09-01
The thrust augmentation of a dual cross-flow fan (CFF) configuration was studied by varying the gap between the CFFs. Computational fluid simulations of the dual CFF configuration were performed using ANSYS CFX to find the thrust generated as well as the optimal operating point.
Martin, G. T.; Yoon, S. S.; Mott, K. E.
1991-01-01
Schistosomiasis, a group of parasitic diseases caused by Schistosoma parasites, is associated with water resources development and affects more than 200 million people in 76 countries. Depending on the species of parasite involved, disease of the liver, spleen, gastrointestinal or urinary tract, or kidneys may result. A computer-assisted teaching package has been developed by WHO for use in the training of public health workers involved in schistosomiasis control. The package consists of the software, ZOOM, and a schistosomiasis information file, Dr Schisto, and uses hypermedia technology to link pictures and text. ZOOM runs on the IBM-PC and IBM-compatible computers, is user-friendly, requires a minimal hardware configuration, and can interact with the user in English, French, Spanish or Portuguese. The information files for ZOOM can be created or modified by the instructor using a word processor, and thus can be designed to suit the need of students. No programming knowledge is required to create the stacks. PMID:1786618
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in processes related to the provision and support of services of a computer center. In view of the strong integration of IT infrastructure components with the use of virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed, and the development of the corresponding elements of the system is described in the present paper.
A new security model for collaborative environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Deborah; Lorch, Markus; Thompson, Mary
Prevalent authentication and authorization models for distributed systems provide for the protection of computer systems and resources from unauthorized use. The rules and policies that drive the access decisions in such systems are typically configured up front and require trust establishment before the systems can be used. This approach does not work well for computer software that moderates human-to-human interaction. This work proposes a new model for trust establishment and management in computer systems supporting collaborative work. The model supports the dynamic addition of new users to a collaboration with very little initial trust placed into their identity and supports the incremental building of trust relationships through endorsements from established collaborators. It also recognizes the strength of a user's authentication when making trust decisions. By mimicking the way humans build trust naturally, the model can support a wide variety of usage scenarios. Its particular strength lies in the support for ad-hoc and dynamic collaborations and the ubiquitous access to a Computer Supported Collaboration Workspace (CSCW) system from locations with varying levels of trust and security.
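As a rough illustration of the endorsement idea described above, the following Python sketch shows incremental trust building; the update rule, weights, and authentication strengths are invented for this sketch and are not the authors' model.

```python
# Hypothetical sketch of incremental trust: a newcomer starts with very little
# trust, and endorsements from established collaborators raise it, weighted by
# the endorser's own trust and by the strength of the newcomer's authentication.
# All values and the update rule are illustrative assumptions.

AUTH_STRENGTH = {"password": 0.5, "x509_certificate": 1.0}

trust = {"alice": 0.9, "bob": 0.8, "newcomer": 0.05}

def endorse(endorser, subject, auth_method, weight=0.3):
    """Raise the subject's trust toward the endorser's level."""
    gain = weight * trust[endorser] * AUTH_STRENGTH[auth_method]
    trust[subject] = min(1.0, trust[subject] + gain * (trust[endorser] - trust[subject]))

endorse("alice", "newcomer", "x509_certificate")
endorse("bob", "newcomer", "password")
print(round(trust["newcomer"], 3))   # trust grows with each endorsement
```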
Architectures, Models, Algorithms, and Software Tools for Configurable Computing
2000-03-06
The Models, Algorithms, and Architectures for Reconfigurable Computing (MAARC) project developed a sound framework for configurable computing.
NASA Technical Reports Server (NTRS)
Kathong, Monchai; Tiwari, Surendra N.
1988-01-01
In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each of which is generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady state solutions of the Euler equations are presented and discussed. The solutions include: low speed flow over a sphere, high speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach along with the conservative interfacing is capable of computing the flows about the complex configurations where the use of a single grid system is not possible.
FPGA-based protein sequence alignment : A review
NASA Astrophysics Data System (ADS)
Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi
2017-11-01
Sequence alignment has been optimized using several techniques in order to accelerate the computation of the optimal score by implementing DP-based algorithms in hardware such as FPGA-based platforms. During hardware implementation, there are performance challenges such as frequent memory accesses and strong data dependencies in the computation process. Therefore, the main focus of this paper is the processing element (PE) configuration, which involves memory accesses to load the configuration data (substitution matrix, query sequence characters), and the PE configuration time. Various approaches to enhance PE configuration performance have been taken in previous works, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain improves both the configuration time and the area.
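For readers unfamiliar with the DP recurrence that such PE arrays implement in hardware, a minimal software reference is sketched below in Python; it is a plain Smith-Waterman local alignment with illustrative scoring parameters, not the FPGA design discussed in the record.

```python
# Minimal software sketch of the DP recurrence that systolic PE arrays typically
# implement for local (Smith-Waterman) alignment. Scoring values are illustrative.

def smith_waterman(query, subject, match=2, mismatch=-1, gap=-2):
    rows, cols = len(query) + 1, len(subject) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if query[i - 1] == subject[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # diagonal: (mis)match
                          H[i - 1][j] + gap,     # gap in the subject sequence
                          H[i][j - 1] + gap)     # gap in the query sequence
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # optimal local alignment score
```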
Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management
NASA Astrophysics Data System (ADS)
Hendrix, Val; Benjamin, Doug; Yao, Yushu
2012-12-01
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is also the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate deployment scripts used by puppet to configure each machine to act as a designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.
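A minimal sketch of the cluster-definition idea, assuming hypothetical role, service, and host names; this is not the authors' CDRP code, it only illustrates how roles can map services to nodes and emit per-node parameters for a configuration engine to consume.

```python
# Hypothetical sketch of the "cluster definition" step: roles map to service
# modules, nodes map to roles, and per-node classification data are emitted in a
# machine-readable form. All role, service, and host names are invented.
import json

roles = {
    "head":   ["nfs_server", "batch_master", "monitoring"],
    "worker": ["batch_client", "analysis_software"],
}

nodes = {
    "dac-head01": {"role": "head",   "ip": "192.0.2.10"},
    "dac-node01": {"role": "worker", "ip": "192.0.2.11"},
    "dac-node02": {"role": "worker", "ip": "192.0.2.12"},
}

def classify(hostname):
    """Return the classes (service modules) and parameters for one node."""
    node = nodes[hostname]
    return {"classes": roles[node["role"]], "parameters": {"ip": node["ip"]}}

for host in nodes:
    print(host, json.dumps(classify(host)))
```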
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
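The following Python sketch illustrates the general idea of chaotic-map-based failure detection under simple assumptions (a logistic map and exact trajectory comparison); it is not the patented implementation.

```python
# Illustrative sketch: healthy components iterating the same chaotic map from the
# same seed must agree exactly, so any divergence between trajectories signals a
# failed compute, memory, or interconnect element. Parameters are illustrative.

def logistic_trajectory(x0, steps, r=3.99):
    x, out = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)   # chaotic logistic map iteration
        out.append(x)
    return out

def detect_failure(reference, observed, tol=0.0):
    """Return the first step at which a component's trajectory diverges, or None."""
    for step, (a, b) in enumerate(zip(reference, observed)):
        if abs(a - b) > tol:
            return step
    return None

ref = logistic_trajectory(0.123456, 1000)
node = logistic_trajectory(0.123456, 1000)
node[500] += 1e-12                 # simulate a corrupted result at step 500
print(detect_failure(ref, node))   # -> 500
```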
Telescience Resource Kit Software Capabilities and Future Enhancements
NASA Technical Reports Server (NTRS)
Schneider, Michelle
2004-01-01
The Telescience Resource Kit (TReK) is a suite of PC-based software applications that can be used to monitor and control a payload on board the International Space Station (ISS). This software provides a way for payload users to operate their payloads from their home sites. It can be used by an individual or a team of people. TReK provides both local ground support system services and an interface to utilize remote services provided by the Payload Operations Integration Center (POIC). For example, TReK can be used to receive payload data distributed by the POIC and to perform local data functions such as processing the data, storing it in local files, and forwarding it to other computer systems. TReK can also be used to build, send, and track payload commands. In addition to these features, work is in progress to add a new command management capability. This capability will provide a way to manage a multi-platform command environment that can include geographically distributed computers. This is intended to help those teams that need to manage a shared on-board resource such as a facility class payload. The environment can be configured such that one individual can manage all the command activities associated with that payload. This paper will provide a summary of existing TReK capabilities and a description of the new command management capability.
A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter
NASA Technical Reports Server (NTRS)
vanDam, C. P.; Budge, A. M.; Duque, E. P. N.
1996-01-01
This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main and tail rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts and fuselage-rotor-tail configurations were tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies. Local areas of poor computational grid quality and local areas of geometry differences account for the differences. These calculations demonstrate the use of advanced computational fluid dynamic methodologies towards a flight vehicle currently under development. It serves as an important verification for future computed results.
Verification of Security Policy Enforcement in Enterprise Systems
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Stoller, Scott D.
Many security requirements for enterprise systems can be expressed in a natural way as high-level access control policies. A high-level policy may refer to abstract information resources, independent of where the information is stored; it controls both direct and indirect accesses to the information; it may refer to the context of a request, i.e., the request’s path through the system; and its enforcement point and enforcement mechanism may be unspecified. Enforcement of a high-level policy may depend on the system architecture and the configurations of a variety of security mechanisms, such as firewalls, host login permissions, file permissions, DBMS access control, and application-specific security mechanisms. This paper presents a framework in which all of these can be conveniently and formally expressed, a method to verify that a high-level policy is enforced, and an algorithm to determine a trusted computing base for each resource.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas
2008-01-01
A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.
NASA Technical Reports Server (NTRS)
Barnwell, R. W.; Davis, R. M.
1975-01-01
A user's manual is presented for a computer program which calculates inviscid flow about lifting configurations in the free-stream Mach-number range from zero to low supersonic. Angles of attack of the order of the configuration thickness-length ratio and less can be calculated. An approximate formulation was used which accounts for shock waves, leading-edge separation and wind-tunnel wall effects.
A computer program for obtaining airplane configuration plots from digital Datcom input data
NASA Technical Reports Server (NTRS)
Roy, M. L.; Sliwa, S. M.
1983-01-01
A computer program is described which reads the input file for the Stability and Control Digital Datcom program and generates plots from the aircraft configuration data. These plots can be used to verify the geometric input data to the Digital Datcom program. The program described interfaces with utilities available for plotting aircraft configurations by creating a file from the Digital Datcom input data.
MetaStorm: A Public Resource for Customizable Metagenomics Annotation
Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing
2016-01-01
Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579
NASA Technical Reports Server (NTRS)
Fleming, Gary A. (Technical Monitor); Schwartz, Richard J.
2004-01-01
The desire to revolutionize the aircraft design cycle from its currently lethargic pace to a fast turn-around operation enabling the optimization of non-traditional configurations is a critical challenge facing the aeronautics industry. In response, a large scale effort is underway to not only advance the state of the art in wind tunnel testing, computational modeling, and information technology, but to unify these often disparate elements into a cohesive design resource. This paper will address Seamless Data Transfer, the critical central nervous system that will enable a wide variety of components to work together.
Real time gamma-ray signature identifier
Rowland, Mark [Alamo, CA]; Gosnell, Tom B. [Moraga, CA]; Ham, Cheryl [Livermore, CA]; Perkins, Dwight [Livermore, CA]; Wong, James [Dublin, CA]
2012-05-15
A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
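A hedged sketch of the PCA matching idea in Python, using synthetic spectra and an illustrative 10-component truncation; the actual library, preprocessing, and regression details of the system are not reproduced here.

```python
# Hedged sketch: a library of reference spectra is reduced to a few principal
# components, and an unknown spectrum is identified by a single projection into
# that space followed by a nearest-neighbour comparison. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
library = rng.random((50, 1024))           # 50 reference spectra, 1024 channels
labels = [f"source_{i}" for i in range(50)]

mean = library.mean(axis=0)
centered = library - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pcs = vt[:10]                              # keep the 10 leading principal components
library_coords = centered @ pcs.T          # concise signature indexing each spectrum

def identify(unknown_spectrum):
    coords = (unknown_spectrum - mean) @ pcs.T        # single projection into PC space
    distances = np.linalg.norm(library_coords - coords, axis=1)
    return labels[int(np.argmin(distances))]

# A noisy copy of a library spectrum should be matched back to "source_17".
print(identify(library[17] + 0.01 * rng.random(1024)))
```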
Applications of digital image processing techniques to problems of data registration and correlation
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.
Development of an Active Flow Control Technique for an Airplane High-Lift Configuration
NASA Technical Reports Server (NTRS)
Shmilovich, Arvin; Yadlin, Yoram; Dickey, Eric D.; Hartwich, Peter M.; Khodadoust, Abdi
2017-01-01
This study focuses on Active Flow Control methods used in conjunction with airplane high-lift systems. The project is motivated by the simplified high-lift system, which offers enhanced airplane performance compared to conventional high-lift systems. Computational simulations are used to guide the implementation of preferred flow control methods, which require a fluidic supply. It is first demonstrated that flow control applied to a high-lift configuration that consists of simple hinge flaps is capable of attaining the performance of the conventional high-lift counterpart. A set of flow control techniques has been subsequently considered to identify promising candidates, where the central requirement is that the mass flow for actuation has to be within available resources onboard. The flow control methods are based on constant blowing, fluidic oscillators, and traverse actuation. The simulations indicate that the traverse actuation offers a substantial reduction in required mass flow, and it is especially effective when the frequency of actuation is consistent with the characteristic time scale of the flow.
Viscous Design of TCA Configuration
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.
1999-01-01
The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.
TOSCA calculations and measurements for the SLAC SLC damping ring dipole magnet
NASA Astrophysics Data System (ADS)
Early, R. A.; Cobb, J. K.
1985-04-01
The SLAC damping ring dipole magnet was originally designed with removable nose pieces at the ends. Recently, a set of magnetic measurements was taken of the vertical component of induction along the center of the magnet for four different pole-end configurations and several current settings. The three dimensional computer code TOSCA, which is currently installed on the National Magnetic Fusion Energy Computer Center's Cray X-MP, was used to compute field values for the four configurations at current settings near saturation. Comparisons were made for magnetic induction as well as effective magnetic lengths for the different configurations.
Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph
2015-01-01
An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.
Martiniani, Stefano; Schrenk, K Julian; Stevenson, Jacob D; Wales, David J; Frenkel, Daan
2016-01-01
We present a numerical calculation of the total number of disordered jammed configurations Ω of N repulsive, three-dimensional spheres in a fixed volume V. To make these calculations tractable, we increase the computational efficiency of the approach of Xu et al. [Phys. Rev. Lett. 106, 245502 (2011), doi:10.1103/PhysRevLett.106.245502] and Asenjo et al. [Phys. Rev. Lett. 112, 098002 (2014), doi:10.1103/PhysRevLett.112.098002] and we extend the method to allow computation of the configurational entropy as a function of pressure. The approach that we use computes the configurational entropy by sampling the absolute volume of basins of attraction of the stable packings in the potential energy landscape. We find a surprisingly strong correlation between the pressure of a configuration and the volume of its basin of attraction in the potential energy landscape. This relation is well described by a power law. Our methodology to compute the number of minima in the potential energy landscape should be applicable to a wide range of other enumeration problems in statistical physics, string theory, cosmology, and machine learning that aim to find the distribution of the extrema of a scalar cost function that depends on many degrees of freedom.
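As an illustration of the reported power-law relation between a packing's pressure and its basin volume, the following Python sketch fits a power law to synthetic data by linear regression in log-log space; the exponent and the data are invented, only the fitting procedure is shown.

```python
# Illustrative power-law fit, v_basin ~ A * P**(-k), on synthetic data.
# Only the log-log regression procedure is meaningful; numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
pressure = rng.uniform(1.0, 50.0, size=200)
true_k, true_A = 2.5, 1e3
basin_volume = true_A * pressure**(-true_k) * np.exp(0.1 * rng.normal(size=200))

# Linear fit of log(v) = log(A) - k*log(P) recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(pressure), np.log(basin_volume), 1)
print("fitted exponent:", -slope)          # close to 2.5
print("fitted prefactor:", np.exp(intercept))
```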
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Tatum, Kenneth E.
1991-01-01
Computational results are presented for three issues pertinent to hypersonic, airbreathing vehicles employing scramjet exhaust flow simulation. The first issue consists of a comparison of schlieren photographs obtained on the aftbody of a cruise missile configuration under powered conditions with two-dimensional computational solutions. The second issue presents the powered aftbody effects of modeling the inlet with a fairing to divert the external flow as compared to an operating flow-through inlet on a generic hypersonic vehicle. Finally, a comparison of solutions examining the potential of testing powered configurations in a wind-off, instead of a wind-on, environment indicates that, depending on the extent of the three-dimensional plume, it may be possible to test aftbody powered hypersonic, airbreathing configurations in a wind-off environment.
Development of stable Grid service at the next generation system of KEKCC
NASA Astrophysics Data System (ADS)
Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.
2017-10-01
Many experiments in the field of accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK) in Japan, using the SuperKEKB and J-PARC accelerators. At KEK, the computing demand from the various experiments for data processing, analysis, and MC simulation is monotonically increasing. This is not only the case for high-energy experiments; the computing requirements from the hadron and neutrino experiments and from some astro-particle physics projects are also rapidly increasing due to very high precision measurements. Under this situation, several projects supported by KEK, namely the Belle II, T2K, ILC and KAGRA experiments, are going to utilize the Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, which are already in production, are being upgraded for further stable operation at the same time as the whole-scale hardware replacement of the KEK Central Computer System (KEKCC). The next generation KEKCC starts operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element and the StoRM storage element, are deployed on a more robust hardware configuration. Since raw data transfer is one of the most important tasks for the KEKCC, two redundant GridFTP servers are adopted for the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, apart from the servers used for data transfer by the other VOs. Additionally, we prepare a redundant configuration for the database-oriented services like LFC and AMGA by using LifeKeeper. The LFC service for the Belle II experiment consists of two read/write servers and two read-only servers, all of which have an individual database for the purpose of load balancing. The FTS3 service is newly deployed as a service for Belle II data distribution. A CVMFS stratum-0 service is started for the Belle II software repository, and a stratum-1 service is prepared for the other VOs. In this way, there are many upgrades to the real production Grid infrastructure services at the KEK Computing Research Center. In this paper, we introduce the detailed configuration of the hardware for the Grid instances and several mechanisms used to construct a robust Grid system in the next generation KEKCC.
Internal aerodynamics of a generic three-dimensional scramjet inlet at Mach 10
NASA Technical Reports Server (NTRS)
Holland, Scott D.
1995-01-01
A combined computational and experimental parametric study of the internal aerodynamics of a generic three-dimensional sidewall compression scramjet inlet configuration at Mach 10 has been performed. The study was designed to demonstrate the utility of computational fluid dynamics as a design tool in hypersonic inlet flow fields, to provide a detailed account of the nature and structure of the internal flow interactions, and to provide a comprehensive surface property and flow field database to determine the effects of contraction ratio, cowl position, and Reynolds number on the performance of a hypersonic scramjet inlet configuration. The work proceeded in several phases: the initial inviscid assessment of the internal shock structure, the preliminary computational parametric study, the coupling of the optimized configuration with the physical limitations of the facility, the wind tunnel blockage assessment, and the computational and experimental parametric study of the final configuration. Good agreement between computation and experimentation was observed in the magnitude and location of the interactions, particularly for weakly interacting flow fields. Large-scale forward separations resulted when the interaction strength was increased by increasing the contraction ratio or decreasing the Reynolds number.
Acid/base equilibria in clusters and their role in proton exchange membranes: Computational insight
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glezakou, Vanda A; Dupuis, Michel; Mundy, Christopher J
2007-10-24
We describe molecular orbital theory and ab initio molecular dynamics studies of acid/base equilibria of clusters AH:(H2O)n ↔ A−:H+(H2O)n in the low hydration regime (n = 1-4), where AH is a model of perfluorinated sulfonic acids, RSO3H (R = CF3CF2), encountered in polymeric electrolyte membranes of fuel cells. Free energy calculations on the neutral and ion pair structures for n = 3 indicate that the two configurations are close in energy and are accessible in the fluctuation dynamics of proton transport. For n = 1,2 the only relevant configuration is the neutral form. This was verified through ab initio metadynamics simulations. These findings suggest that bases are directly involved in the proton transport at low hydration levels. In addition, the gas phase proton affinity of the model sulfonic acid RSO3H was found to be comparable to the proton affinity of water. Thus, protonated acids can also play a role in proton transport under low hydration conditions and under high concentration of protons. This work was supported by the Division of Chemical Science, Office of Basic Energy Sciences, US Department of Energy (DOE) under Contract DE-AC05-76RL01830. Computations were performed on computers of the Molecular Interactions and Transformations (MI&T) group and the MSCF facility of EMSL, sponsored by US DOE and OBER and located at PNNL. This work benefited from resources of the National Energy Research Scientific Computing Centre, supported by the Office of Science of the US DOE under Contract No. DE-AC03-76SF00098.
Transonic Flow Field Analysis for Wing-Fuselage Configurations
NASA Technical Reports Server (NTRS)
Boppe, C. W.
1980-01-01
A computational method for simulating the aerodynamics of wing-fuselage configurations at transonic speeds is developed. The finite difference scheme is characterized by a multiple embedded mesh system coupled with a modified or extended small disturbance flow equation. This approach permits a high degree of computational resolution in addition to coordinate system flexibility for treating complex realistic aircraft shapes. To augment the analysis method and permit applications to a wide range of practical engineering design problems, an arbitrary fuselage geometry modeling system is incorporated as well as methodology for computing wing viscous effects. Configuration drag is broken down into its friction, wave, and lift induced components. Typical computed results for isolated bodies, isolated wings, and wing-body combinations are presented. The results are correlated with experimental data. A computer code which employs this methodology is described.
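For reference, the classical transonic small-disturbance equation that such schemes modify or extend can be written as follows; this is the standard baseline form, not necessarily the exact extended equation used in the method above:

$$\left[\,1 - M_\infty^2 - (\gamma + 1)\,M_\infty^2\,\phi_x\right]\phi_{xx} + \phi_{yy} + \phi_{zz} = 0,$$

where $\phi$ is the perturbation velocity potential, $M_\infty$ the free-stream Mach number, and $\gamma$ the ratio of specific heats.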
NASA Technical Reports Server (NTRS)
Mann, M. J.; Mercer, C. E.
1986-01-01
A transonic computational analysis method and a transonic design procedure have been used to design the wing and the canard of a forward-swept-wing fighter configuration for good transonic maneuver performance. A model of this configuration was tested in the Langley 16-Foot Transonic Tunnel. Oil-flow photographs were obtained to examine the wing flow patterns at Mach numbers from 0.60 to 0.90. The transonic theory gave a reasonably good estimate of the wing pressure distributions at transonic maneuver conditions. Comparison of the forward-swept-wing configuration with an equivalent aft-swept-wing configuration showed that, at a Mach number of 0.90 and a lift coefficient of 0.9, the two configurations have the same trimmed drag. The forward-swept-wing configuration was also found to have trimmed drag levels at transonic maneuver conditions which are comparable to those of the HiMAT (highly maneuverable aircraft technology) configuration and the X-29 forward-swept-wing research configuration. The configuration of this study was also tested with a forebody strake.
Beamforming strategy of ULA and UCA sensor configuration in multistatic passive radar
NASA Astrophysics Data System (ADS)
Hossa, Robert
2009-06-01
A Beamforming Network (BN) concept for Uniform Linear Array (ULA) and Uniform Circular Array (UCA) dipole configurations designed for multistatic passive radar is considered in detail. In the case of the UCA configuration, a computationally efficient procedure of beamspace transformation from the UCA to a virtual ULA configuration with omnidirectional coverage is utilized. In effect, the idea of the proposed solution is equivalent to the techniques of antenna array factor shaping dedicated to the ULA structure. Finally, exemplary results from computer software simulations of the elaborated spatial filtering solutions for the reference and surveillance channels are provided and discussed.
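A brief Python sketch of conventional ULA beamforming, the kind of array-factor shaping referred to above; the element count, spacing, and steering angle are illustrative assumptions.

```python
# Hedged sketch of conventional (delay-and-sum) ULA beamforming: a steering
# vector points the main lobe of an N-element uniform linear array toward a
# chosen direction. Parameters below are illustrative.
import numpy as np

def ula_steering_vector(n_elements, spacing_wl, theta_rad):
    """Steering vector for element spacing given in wavelengths."""
    k = 2.0 * np.pi * spacing_wl * np.arange(n_elements)
    return np.exp(1j * k * np.sin(theta_rad))

def array_factor(weights, spacing_wl, theta_rad):
    v = ula_steering_vector(len(weights), spacing_wl, theta_rad)
    return np.abs(np.vdot(weights, v))

n, d = 8, 0.5                                  # 8 half-wavelength-spaced elements
steer = np.deg2rad(20.0)                       # point the surveillance beam at 20 deg
w = ula_steering_vector(n, d, steer) / n       # conventional beamformer weights

for deg in (0.0, 20.0, 60.0):
    print(deg, round(array_factor(w, d, np.deg2rad(deg)), 3))  # peak near 20 deg
```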
An Embedded Reconfigurable Logic Module
NASA Technical Reports Server (NTRS)
Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)
2002-01-01
A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub credit card size PC in a DIMM form factor with a Xilinx Spartan II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.
NASA Astrophysics Data System (ADS)
Skouteris, D.; Barone, V.
2014-06-01
We report the main features of a new general implementation of the Gaussian Multi-Configuration Time-Dependent Hartree model. The code allows effective computations of time-dependent phenomena, including calculation of vibronic spectra (in one or more electronic states), relative state populations, etc. Moreover, by expressing the Dirac-Frenkel variational principle in terms of an effective Hamiltonian, we are able to provide a new reliable estimate of the representation error. After validating the code on simple one-dimensional systems, we analyze the harmonic and anharmonic vibrational spectra of water and glycine showing that reliable and converged energy levels can be obtained with reasonable computing resources. The data obtained on water and glycine are compared with results of previous calculations using the vibrational second-order perturbation theory method. Additional features and perspectives are also shortly discussed.
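For context, the Dirac-Frenkel variational principle invoked above has the standard form

$$\left\langle\,\delta\Psi\;\middle|\;\hat{H} - i\hbar\,\frac{\partial}{\partial t}\;\middle|\;\Psi\right\rangle = 0,$$

where $\Psi$ is the parameterized (here Gaussian-based) wavepacket ansatz and $\delta\Psi$ denotes its allowed variations; expressing this condition through an effective Hamiltonian is what allows the authors to estimate the representation error.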
CFD analysis of hypersonic, chemically reacting flow fields
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1993-01-01
Design studies are underway for a variety of hypersonic flight vehicles. The National Aero-Space Plane will provide a reusable, single-stage-to-orbit capability for routine access to low earth orbit. Flight-capable satellites will dip into the atmosphere to maneuver to new orbits, while planetary probes will decelerate at their destination by atmospheric aerobraking. To supplement limited experimental capabilities in the hypersonic regime, computational fluid dynamics (CFD) is being used to analyze the flow about these configurations. The governing equations include fluid dynamic as well as chemical species equations, which are being solved with new, robust numerical algorithms. Examples of CFD applications to hypersonic vehicles suggest an important role this technology will play in the development of future aerospace systems. The computational resources needed to obtain solutions are large, but solution adaptive grids, convergence acceleration, and parallel processing may make run times manageable.
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh
2015-01-01
Recently, a novel and computationally efficient method based on a vector covering approach to design optimal control places, and an iteration approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems), have been reported. However, the relationship between the structure of the net and the minimal number of monitors required is unclear. This paper develops a theory to show that the minimal number of monitors required cannot be less than that of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three systems controlled by Chen et al. are of a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals that of basic siphons.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)
1982-01-01
Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.
TSARINA: A computer model for assessing conventional and chemical attacks on air bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emerson, D.E.; Wegner, L.H.
This Note describes the latest version of the TSARINA (TSAR INputs using AIDA) airbase damage assessment computer program that has been developed to estimate the on-base concentration of toxic agents that would be deposited by a chemical attack and to assess losses to various on-base resources from conventional attacks, as well as the physical damage to runways, taxiways, buildings, and other facilities. Although the model may be used as a general-purpose, complex-target damage assessment model, its primary role is intended to be in support of the TSAR (Theater Simulation of Airbase Resources) aircraft sortie generation simulation program. When used with TSAR, multiple trials of a multibase airbase-attack campaign can be assessed with TSARINA, and the impact of those attacks on sortie generation can be derived using the TSAR simulation model. TSARINA, as currently configured, permits damage assessments of attacks on an airbase (or other) complex that is composed of up to 1000 individual targets (buildings, taxiways, etc.) and 2500 packets of resources. TSARINA determines the actual impact points (pattern centroids for CBUs and container burst points for chemical weapons) by Monte Carlo procedures, i.e., by random selections from the appropriate error distributions. Uncertainties in wind velocity and heading are also considered for chemical weapons. Point-impact weapons that impact within a specified distance of each target type are classed as hits, and estimates of the damage to the structures and to the various classes of support resources are assessed using cookie-cutter weapon-effects approximations.
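The Monte Carlo hit-assessment idea can be illustrated with a short Python sketch; all numbers (delivery error, lethal radius, coordinates) are invented, and the cookie-cutter criterion is reduced to a single radius, so this is not TSARINA's actual model.

```python
# Illustrative Monte Carlo sketch: impact points are drawn from an aiming-error
# distribution, and a weapon is classed as a hit on a target if it lands within
# a specified distance. All values are invented for the example.
import random, math

targets = {"runway_01": (0.0, 0.0), "fuel_store": (150.0, 80.0)}
lethal_radius = 30.0        # metres; the "specified distance" for a hit
aim_point = (10.0, 5.0)
sigma = 40.0                # 1-sigma delivery error, metres

def one_trial():
    x = random.gauss(aim_point[0], sigma)
    y = random.gauss(aim_point[1], sigma)
    return {name: math.hypot(x - tx, y - ty) <= lethal_radius
            for name, (tx, ty) in targets.items()}

trials = [one_trial() for _ in range(10000)]
for name in targets:
    p_hit = sum(t[name] for t in trials) / len(trials)
    print(name, round(p_hit, 3))   # estimated per-target hit probability
```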
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
On configurational forces for gradient-enhanced inelasticity
NASA Astrophysics Data System (ADS)
Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth
2018-04-01
In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.
Research on Spectroscopy, Opacity, and Atmospheres
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
2005-01-01
I propose to continue providing observers with basic data for interpreting spectra from stars, novas, supernovas, clusters, and galaxies. These data will include allowed and forbidden line lists, both laboratory and computed, for the first five to ten ions of all atoms and for all relevant diatomic molecules. I will eventually expand to all ions of the first thirty elements to treat far UV and X-ray spectra, and for envelope opacities. I also include triatomic molecules provided by other researchers. I have also made CDs with Partridge and Schwenke's water data for work on UV stars. The line data also serve as input to my model atmosphere and synthesis programs that generate energy distributions, photometry, limb darkening, and spectra that can be used for planning observations and for fitting observed spectra. The spectrum synthesis programs produce detailed plots with the lines identified. Grids of stellar spectra can be used for radial-velocity, rotation, or abundance templates and for population synthesis. I am fitting spectra of bright stars to test the data and to produce atlases to guide observers. For each star the whole spectrum is computed from the UV to the far IR. The line data, opacities, models, spectra, and programs are freely distributed on CDs and on my Web site and represent a unique resource for many NASA programs. I am now in full production of new line lists for atoms. I am computing all ions of all elements from H to Zn and the first 5 ions of all the heavier elements, about 800 ions. For each ion I treat as many as 61 even and 61 odd configurations, computing all energy levels and eigenvectors. The Hamiltonian is determined from a scaled-Hartree-Fock starting guess by least-squares fitting the observed energy levels. The average energy of each configuration is used in computing scaled-Thomas-Fermi-Dirac wavefunctions for each configuration, which in turn are used to compute allowed and forbidden transition integrals. These are multiplied into the LS allowed and forbidden transition arrays. The transition arrays are transformed to the observed coupling to yield the allowed and forbidden line lists. Results are put on the web as they are finished. Provided I get funding, there will be more than 500 million lines. I will then compare, ion by ion, to all the laboratory and computed data in the literature and make up a working line list for spectrum synthesis and opacity calculations with the best available data. As the laboratory spectrum analyses are improved, I will redo the calculations with the new energy levels. My original plan when I started the new calculations was to run through all the atoms using my old Cray programs from the 1980's that were limited to 1100 x 1100 arrays in the Hamiltonian for each J. Then I would go back and rerun the more complicated cases with 3000 x 3000 arrays so that I could include many more configurations and more configuration interactions. At present I am limited to 61 even and 61 odd configurations and I try to include everything up through n = 9. The current program runs on Alpha workstations. I decided to test the big program on Fe I and Fe II to see whether there was any great difference in the low configurations compared to those from the Cray program.
Besides increasing the number of E1 lines by a factor of 6 to 7.7 million, there was an unexpected result: the electric quadrupole transitions were 10 times stronger than before, because the transition integrals are weighted by r^2 (they become very large for high n) and because there are numerous configuration interactions that mix the low and high configurations. As a check I was able to reproduce Garstang's (1962) lower results by running his three configurations with my program. Since my model atom is still only a subset of a real Fe II ion, the true quadrupole A values are probably larger than mine. The magnetic dipole lines are affected by the mixing but the overall scale does not change. Because of this discovery I decided that there was no point in computing the small array cases. I have been running with as many configurations as I can and with thousands of parameters in the Hamiltonian. The computer runs take much longer to set up and produce than I had expected. I have concentrated on redoing the low iron group spectra, especially to get data for supernova modelers. I have done only Ca I -- Zn I, Ca II -- Zn II, Cu I -- Cu XXIX, Zn I -- Zn XXX, for practice at high stages of ionization, C I, C II, S I, and Cl I and Ag I for people who were working on the laboratory spectra. Check my web site kurucz.harvard.edu for current additions. My latest calculations have been for carbon I and sulphur I, and silicon I is under way using the same elaborate approach as for C I, which took many months to do. These line lists greatly increase the number of lines in the ultraviolet, in the visible, and especially in the infrared. They will increase the opacity in A, F, and G stars. They will account for many unidentified lines in the sun.
NASA Astrophysics Data System (ADS)
Sangaline, E.; Lauret, J.
2014-06-01
The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We will present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS were evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability and that it excels in its ease of configuration and deployment in a cloud environment.
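As a rough illustration of the kind of canonical I/O test mentioned above, the sketch below times a write and a read through the standard `hdfs dfs` command-line client. It assumes a configured Hadoop client on the host; the file size and HDFS path are hypothetical, and this is not the benchmark harness used in the study.

```python
# Minimal HDFS write/read timing sketch using the standard `hdfs dfs` CLI.
import os
import subprocess
import time

LOCAL_FILE = "/tmp/hdfs_io_test.bin"
HDFS_PATH = "/user/test/hdfs_io_test.bin"   # hypothetical HDFS destination
SIZE_MB = 256

# Create a local test file of the requested size.
with open(LOCAL_FILE, "wb") as f:
    f.write(os.urandom(SIZE_MB * 1024 * 1024))

def timed(cmd):
    """Run a shell command and return its wall-clock time in seconds."""
    t0 = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - t0

if os.path.exists(LOCAL_FILE + ".out"):
    os.remove(LOCAL_FILE + ".out")

write_s = timed(["hdfs", "dfs", "-put", "-f", LOCAL_FILE, HDFS_PATH])
read_s = timed(["hdfs", "dfs", "-get", HDFS_PATH, LOCAL_FILE + ".out"])
subprocess.run(["hdfs", "dfs", "-rm", "-skipTrash", HDFS_PATH], check=True)

print(f"write: {SIZE_MB / write_s:.1f} MB/s, read: {SIZE_MB / read_s:.1f} MB/s")
```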
Computer program analyzes and designs supersonic wing-body combinations
NASA Technical Reports Server (NTRS)
Woodward, F. A.
1968-01-01
Computer program formulates geometric description of the wing-body configuration, optimizes wing camber shape, determines wing shape for a given pressure distribution, and calculates pressures, forces, and moments on a given configuration. The program consists of geometry definition, transformation, and paneling; aerodynamics; and flow visualization.
The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.;
2014-01-01
A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.
Caruso, Ronald D
2004-01-01
Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
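The DICOM "scrubbing" step mentioned above can be illustrated with the pydicom library; the sketch below blanks a handful of common patient-identifying elements and drops private tags. The tag list is only an example, not a complete de-identification profile, and the file names are hypothetical.

```python
# Illustrative DICOM header scrubbing with pydicom (not a full
# de-identification profile; see DICOM PS3.15 for those).
import pydicom

def scrub(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    # Blank out common patient-identifying elements if present.
    for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName",
                    "InstitutionName", "AccessionNumber"):
        if keyword in ds:
            setattr(ds, keyword, "")
    # Private tags frequently carry identifying information as well.
    ds.remove_private_tags()
    ds.save_as(out_path)

scrub("image.dcm", "image_anon.dcm")
```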
Resource constrained design of artificial neural networks using comparator neural network
NASA Technical Reports Server (NTRS)
Wah, Benjamin W.; Karnik, Tanay S.
1992-01-01
We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples on training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternate configurations under different cost-performance tradeoff is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.
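The quantum-based scheduling loop described above can be sketched as follows. This is only a toy: a fixed predicted-time heuristic stands in for the comparator network, and a simple error-decay function stands in for back-propagation training; all configuration names and constants are hypothetical.

```python
# Sketch of time-quantum scheduling over candidate network configurations.
import math

candidates = [
    {"hidden": h, "error": 1.0, "epochs": 0, "pred_time": float(h)}
    for h in (4, 8, 16, 32)  # hypothetical hidden-layer sizes
]

TOTAL_QUANTA = 20
EPOCHS_PER_QUANTUM = 50

def train_for_quantum(cfg):
    """Toy stand-in for training: larger nets converge further but cost more time."""
    cfg["epochs"] += EPOCHS_PER_QUANTUM
    cfg["error"] = (math.exp(-cfg["epochs"] / (200.0 + 10.0 * cfg["hidden"]))
                    + 0.02 * 8.0 / cfg["hidden"])

for _ in range(TOTAL_QUANTA):
    # Spend the next quantum on the configuration with the best
    # (current error) x (predicted training time) tradeoff.
    cfg = min(candidates, key=lambda c: c["error"] * c["pred_time"])
    train_for_quantum(cfg)

best = min(candidates, key=lambda c: c["error"])
print("best configuration:", best)
```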
Li Manni, Giovanni; Smart, Simon D; Alavi, Ali
2016-03-08
A novel stochastic Complete Active Space Self-Consistent Field (CASSCF) method has been developed and implemented in the Molcas software package. A two-step procedure is used, in which the CAS configuration interaction secular equations are solved stochastically with the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) approach, while orbital rotations are performed using an approximated form of the Super-CI method. This new method does not suffer from the strong combinatorial limitations of standard MCSCF implementations using direct schemes and can handle active spaces well in excess of those accessible to traditional CASSCF approaches. The density matrix formulation of the Super-CI method makes this step independent of the size of the CI expansion, depending exclusively on one- and two-body density matrices with indices restricted to the relatively small number of active orbitals. No sigma vectors need to be stored in memory for the FCIQMC eigensolver--a substantial gain in comparison to implementations using the Davidson method, which require three or more vectors of the size of the CI expansion. Further, no orbital Hessian is computed, circumventing limitations on basis set expansions. Like the parent FCIQMC method, the present technique is scalable on massively parallel architectures. We present in this report the method and its application to the free-base porphyrin, Mg(II) porphyrin, and Fe(II) porphyrin. In the present study, active spaces up to 32 electrons and 29 orbitals in orbital expansions containing up to 916 contracted functions are treated with modest computational resources. Results are quite promising even without accounting for the correlation outside the active space. The systems here presented clearly demonstrate that large CASSCF calculations are possible via FCIQMC-CASSCF without limitations on basis set size.
SPAGHETTILENS: A software stack for modeling gravitational lenses by citizen scientists
NASA Astrophysics Data System (ADS)
Küng, R.
2018-04-01
The 2020s are expected to see tens of thousands of lens discoveries. Mass reconstruction or modeling of these lenses will be needed, but current modeling methods are time intensive for specialists and expert human resources do not scale. SpaghettiLens approaches this challenge with the help of experienced citizen scientist volunteers who have already been involved in finding lenses. A top level description is as follows. Citizen scientists look at data and provide a graphical input based on Fermat's principle which we call a Spaghetti Diagram. This input works as a model configuration. It is followed by the generation of the model, which is a compute intensive task done server side through a task distribution system. Model results are returned in graphical form to the citizen scientist, who examines and then either forwards them for forum discussion or rejects the model and retries. As well as configuring models, citizen scientists can also modify existing model configurations, which results in a version tree of models and makes the modeling process collaborative. SpaghettiLens is designed to be scalable and could be adapted to problems with similar characteristics. It is licensed under the MIT license, released at http://labs.spacewarps.org and the source code is available at https://github.com/RafiKueng/SpaghettiLens.
Budday, Dominik; Leyendecker, Sigrid; van den Bedem, Henry
2015-01-01
Proteins operate and interact with partners by dynamically exchanging between functional substates of a conformational ensemble on a rugged free energy landscape. Understanding how these substates are linked by coordinated, collective motions requires exploring a high-dimensional space, which remains a tremendous challenge. While molecular dynamics simulations can provide atomically detailed insight into the dynamics, computational demands to adequately sample conformational ensembles of large biomolecules and their complexes often require tremendous resources. Kinematic models can provide high-level insights into conformational ensembles and molecular rigidity beyond the reach of molecular dynamics by reducing the dimensionality of the search space. Here, we model a protein as a kinematic linkage and present a new geometric method to characterize molecular rigidity from the constraint manifold Q and its tangent space T_q Q at the current configuration q. In contrast to methods based on combinatorial constraint counting, our method is valid for both generic and non-generic, e.g., singular configurations. Importantly, our geometric approach provides an explicit basis for collective motions along floppy modes, resulting in an efficient procedure to probe conformational space. An atomically detailed structural characterization of coordinated, collective motions would allow us to engineer or allosterically modulate biomolecules by selectively stabilizing conformations that enhance or inhibit function with broad implications for human health. PMID:26213417
NASA Astrophysics Data System (ADS)
Budday, Dominik; Leyendecker, Sigrid; van den Bedem, Henry
2015-10-01
Proteins operate and interact with partners by dynamically exchanging between functional substates of a conformational ensemble on a rugged free energy landscape. Understanding how these substates are linked by coordinated, collective motions requires exploring a high-dimensional space, which remains a tremendous challenge. While molecular dynamics simulations can provide atomically detailed insight into the dynamics, computational demands to adequately sample conformational ensembles of large biomolecules and their complexes often require tremendous resources. Kinematic models can provide high-level insights into conformational ensembles and molecular rigidity beyond the reach of molecular dynamics by reducing the dimensionality of the search space. Here, we model a protein as a kinematic linkage and present a new geometric method to characterize molecular rigidity from the constraint manifold Q and its tangent space Tq Q at the current configuration q. In contrast to methods based on combinatorial constraint counting, our method is valid for both generic and non-generic, e.g., singular configurations. Importantly, our geometric approach provides an explicit basis for collective motions along floppy modes, resulting in an efficient procedure to probe conformational space. An atomically detailed structural characterization of coordinated, collective motions would allow us to engineer or allosterically modulate biomolecules by selectively stabilizing conformations that enhance or inhibit function with broad implications for human health.
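The floppy-mode idea above can be illustrated generically (this is not the authors' implementation): motions that preserve the constraints to first order span the null space of the constraint Jacobian at the current configuration, which can be extracted with an SVD. The Jacobian below is a random placeholder with hypothetical dimensions.

```python
# Generic sketch: floppy modes as the null space of a constraint Jacobian J(q),
# i.e. a basis for the tangent space of the constraint manifold at q.
import numpy as np

def floppy_modes(J, tol=1e-10):
    """Return an orthonormal basis (as columns) of the null space of J."""
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol * s.max()))
    return Vt[rank:].T  # right singular vectors for (near-)zero singular values

n_constraints, n_dof = 8, 12  # hypothetical problem sizes
J = np.random.default_rng(0).normal(size=(n_constraints, n_dof))
modes = floppy_modes(J)
print("degrees of freedom along the constraint manifold:", modes.shape[1])
# Any small motion dq = modes @ a (for coefficients a) satisfies J @ dq = 0.
```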
Advanced Multiple Processor Configuration Study. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
This summary of a study on multiple processor configurations includes the objectives, background, approach, and results of research undertaken to provide the Air Force with a generalized model of computer processor combinations for use in the evaluation of proposed flight training simulator computational designs. An analysis of a real-time flight…
Triple redundant computer system/display and keyboard subsystem interface
NASA Technical Reports Server (NTRS)
Gulde, F. J.
1973-01-01
Interfacing of the redundant display and keyboard subsystem with the triple redundant computer system is defined according to space shuttle design. The study is performed in three phases: (1) TRCS configuration and characteristics identification; (2) display and keyboard subsystem configuration and characteristics identification; and (3) interface approach definition.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, ``IEEE Standard for Software Configuration...
xGDBvm: A Web GUI-Driven Workflow for Annotating Eukaryotic Genomes in the Cloud
Merchant, Nirav
2016-01-01
Genome-wide annotation of gene structure requires the integration of numerous computational steps. Currently, annotation is arguably best accomplished through collaboration of bioinformatics and domain experts, with broad community involvement. However, such a collaborative approach is not scalable at today’s pace of sequence generation. To address this problem, we developed the xGDBvm software, which uses an intuitive graphical user interface to access a number of common genome analysis and gene structure tools, preconfigured in a self-contained virtual machine image. Once their virtual machine instance is deployed through iPlant’s Atmosphere cloud services, users access the xGDBvm workflow via a unified Web interface to manage inputs, set program parameters, configure links to high-performance computing (HPC) resources, view and manage output, apply analysis and editing tools, or access contextual help. The xGDBvm workflow will mask the genome, compute spliced alignments from transcript and/or protein inputs (locally or on a remote HPC cluster), predict gene structures and gene structure quality, and display output in a public or private genome browser complete with accessory tools. Problematic gene predictions are flagged and can be reannotated using the integrated yrGATE annotation tool. xGDBvm can also be configured to append or replace existing data or load precomputed data. Multiple genomes can be annotated and displayed, and outputs can be archived for sharing or backup. xGDBvm can be adapted to a variety of use cases including de novo genome annotation, reannotation, comparison of different annotations, and training or teaching. PMID:27020957
xGDBvm: A Web GUI-Driven Workflow for Annotating Eukaryotic Genomes in the Cloud.
Duvick, Jon; Standage, Daniel S; Merchant, Nirav; Brendel, Volker P
2016-04-01
Genome-wide annotation of gene structure requires the integration of numerous computational steps. Currently, annotation is arguably best accomplished through collaboration of bioinformatics and domain experts, with broad community involvement. However, such a collaborative approach is not scalable at today's pace of sequence generation. To address this problem, we developed the xGDBvm software, which uses an intuitive graphical user interface to access a number of common genome analysis and gene structure tools, preconfigured in a self-contained virtual machine image. Once their virtual machine instance is deployed through iPlant's Atmosphere cloud services, users access the xGDBvm workflow via a unified Web interface to manage inputs, set program parameters, configure links to high-performance computing (HPC) resources, view and manage output, apply analysis and editing tools, or access contextual help. The xGDBvm workflow will mask the genome, compute spliced alignments from transcript and/or protein inputs (locally or on a remote HPC cluster), predict gene structures and gene structure quality, and display output in a public or private genome browser complete with accessory tools. Problematic gene predictions are flagged and can be reannotated using the integrated yrGATE annotation tool. xGDBvm can also be configured to append or replace existing data or load precomputed data. Multiple genomes can be annotated and displayed, and outputs can be archived for sharing or backup. xGDBvm can be adapted to a variety of use cases including de novo genome annotation, reannotation, comparison of different annotations, and training or teaching. © 2016 American Society of Plant Biologists. All rights reserved.
Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.
2010-03-02
Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
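The partition-and-master idea in this record can be illustrated with a small mpi4py sketch; this is not the patented class-routing mechanism, only the general pattern of splitting an operational group into non-overlapping subgroups, each with a designated master that acts as the root of its subgroup's collectives. The number of subgroups is a hypothetical parameter.

```python
# Sketch of partitioning ranks into non-overlapping subgroups with a master
# per subgroup, using mpi4py. Run with e.g.:  mpiexec -n 8 python subgroups.py
from mpi4py import MPI

NUM_SUBGROUPS = 2  # hypothetical number of non-overlapping subgroups

world = MPI.COMM_WORLD
color = world.rank % NUM_SUBGROUPS          # which subgroup this rank joins
subcomm = world.Split(color, world.rank)    # split into disjoint communicators

is_master = (subcomm.rank == 0)             # subgroup root / "master" node

# Collective operation confined to the subgroup: reduce to the master.
total = subcomm.reduce(world.rank, op=MPI.SUM, root=0)
if is_master:
    print(f"subgroup {color}: master is world rank {world.rank}, sum = {total}")
```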
Computer-aided controllability assessment of generic manned Space Station concepts
NASA Technical Reports Server (NTRS)
Ferebee, M. J.; Deryder, L. J.; Heck, M. L.
1984-01-01
NASA's Concept Development Group assessment methodology for the on-orbit rigid body controllability characteristics of each generic configuration proposed for the manned space station is presented; the preliminary results obtained represent the first step in the analysis of these eight configurations. Analytical computer models of each configuration were developed by means of the Interactive Design Evaluation of Advanced Spacecraft CAD system, which created three-dimensional geometry models of each configuration to establish dimensional requirements for module connectivity, payload accommodation, and Space Shuttle berthing; mass, center-of-gravity, inertia, and aerodynamic drag areas were then derived. Attention was also given to the preferred flight attitude of each station concept.
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor); Bearman, Gregory H. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") employing a single lens are provided. The CTISs may be either transmissive or reflective, and the single lens is either configured to transmit and receive uncollimated light (in transmissive systems), or is configured to reflect and receive uncollimated light (in reflective systems). An exemplary transmissive CTIS includes a focal plane array detector, a single lens configured to transmit and receive uncollimated light, a two-dimensional grating, and a field stop aperture. An exemplary reflective CTIS includes a focal plane array detector, a single mirror configured to reflect and receive uncollimated light, a two-dimensional grating, and a field stop aperture.
Buttles, John W [Idaho Falls, ID
2011-12-20
Wireless communication devices include a software-defined radio coupled to processing circuitry. The processing circuitry is configured to execute computer programming code. Storage media is coupled to the processing circuitry and includes computer programming code configured to cause the processing circuitry to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.
Buttles, John W
2013-04-23
Wireless communication devices include a software-defined radio coupled to processing circuitry. The system controller is configured to execute computer programming code. Storage media is coupled to the system controller and includes computer programming code configured to cause the system controller to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.
CFD validation experiments at the Lockheed-Georgia Company
NASA Technical Reports Server (NTRS)
Malone, John B.; Thomas, Andrew S. W.
1987-01-01
Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at the Lockheed-Georgia Company. Topics covered include validation experiments on a generic fighter configuration, a transport configuration, and a generic hypersonic vehicle configuration; computational procedures; surface and pressure measurements on wings; laser velocimeter measurements of a multi-element airfoil system; the flowfield around a stiffened airfoil; laser velocimeter surveys of a circulation control wing; circulation control for high lift; and high angle of attack aerodynamic evaluations.
Cellular computational platform and neurally inspired elements thereof
Okandan, Murat
2016-11-22
A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.
Sensor Data Qualification System (SDQS) Implementation Study
NASA Technical Reports Server (NTRS)
Wong, Edmond; Melcher, Kevin; Fulton, Christopher; Maul, William
2009-01-01
The Sensor Data Qualification System (SDQS) is being developed to provide a sensor fault detection capability for NASA's next-generation launch vehicles. In addition to traditional data qualification techniques (such as limit checks, rate-of-change checks and hardware redundancy checks), SDQS can provide augmented capability through additional techniques that exploit analytical redundancy relationships to enable faster and more sensitive sensor fault detection. This paper documents the results of a study that was conducted to determine the best approach for implementing an SDQS network configuration that spans multiple subsystems, similar to those that may be implemented on future vehicles. The best approach is defined as one that minimizes computational resource requirements without impacting the detection of sensor failures.
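The traditional checks named above are simple to state in code; the sketch below shows a limit check and a rate-of-change check with hypothetical thresholds and sample data. It is only an illustration of the concepts, not the SDQS implementation.

```python
# Simple illustrations of limit and rate-of-change qualification checks.
def limit_check(value, low, high):
    """Pass if a sample falls inside its expected physical range."""
    return low <= value <= high

def rate_check(prev, curr, dt, max_rate):
    """Pass if the change since the last sample is physically plausible."""
    return abs(curr - prev) / dt <= max_rate

samples = [20.1, 20.3, 20.2, 35.0, 20.4]   # hypothetical sensor readings
dt = 0.1                                    # sample period (s)

for i in range(1, len(samples)):
    ok = (limit_check(samples[i], 0.0, 30.0)
          and rate_check(samples[i - 1], samples[i], dt, 10.0))
    if not ok:
        print(f"sample {i} failed qualification: {samples[i]}")
```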
Ground Collision Avoidance System (Igcas)
NASA Technical Reports Server (NTRS)
Prosser, Kevin (Inventor); Hook, Loyd (Inventor); Skoog, Mark A (Inventor)
2017-01-01
The present invention is a system and method for aircraft ground collision avoidance (iGCAS) comprising a modular array of software, including a sense own state module configured to gather data to compute trajectory, a sense terrain module including a digital terrain map (DTM) and map manager routine to store and retrieve terrain elevations, a predict collision threat module configured to generate an elevation profile corresponding to the terrain under the trajectory computed by said sense own state module, a predict avoidance trajectory module configured to simulate avoidance maneuvers ahead of the aircraft, a determine need to avoid module configured to determine which avoidance maneuver should be used, when it should be initiated, and when it should be terminated, a notify module configured to display each maneuver's viability to the pilot by a colored GUI, a pilot controls module configured to turn the system on and off, and an avoid module configured to define how an aircraft will perform avoidance maneuvers through 3-dimensional space.
A blueprint for computational analysis of acoustical scattering from orchestral panel arrays
NASA Astrophysics Data System (ADS)
Burns, Thomas
2005-09-01
Orchestral panel arrays have been a topic of interest to acousticians, and it is reasonable to expect optimal design criteria to result from a combination of musician surveys, on-stage empirical data, and computational modeling of various configurations. Preparing a musician survey to identify specific mechanisms of perception and sound quality is best suited for a clinically experienced hearing scientist. Measuring acoustical scattering from a panel array and discerning the effects from various boundaries is best suited for the experienced researcher in engineering acoustics. Analyzing a numerical model of the panel arrays is best suited for the tools typically used in computational engineering analysis. Toward this end, a streamlined process will be described using Pro/ENGINEER to define a panel array geometry in 3-D, a commercial mesher to numerically discretize this geometry, SYSNOISE to solve the associated boundary element integral equations, and MATLAB to visualize the results. The model was run (background priority) on an SGI Altix (Linux) server with 12 CPUs, 24 Gbytes of RAM, and 1 Tbyte of disk space. These computational resources are available to research teams interested in this topic and willing to write and pursue grants.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Recent Performance Results of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.
2017-10-01
Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes, use of on package high bandwidth memory (HBM) for KNL nodes, ability to configure KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results of work will be presented on performance of VPIC on Haswell and KNL partitions for single node runs and runs at scale. Results include use of burst buffers at scale to optimize I/O, comparison of strategies for using MPI and threads, performance benefits using HBM and effectiveness of using intrinsics for vectorization. Work performed under auspices of U.S. Dept. of Energy by Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by LANL LDRD program.
Stiltner, G.J.
1990-01-01
In 1987, the Water Resources Division of the U.S. Geological Survey undertook three pilot projects to evaluate electronic report processing systems as a means to improve the quality and timeliness of reports pertaining to water resources investigations. The three projects selected for study included the use of the following configurations of software and hardware: Ventura Publisher software on an IBM model AT personal computer, PageMaker software on a Macintosh computer, and FrameMaker software on a Sun Microsystems workstation. The following assessment criteria were to be addressed in the pilot studies: the combined use of text, tables, and graphics; analysis of time; ease of learning; compatibility with the existing minicomputer system; and technical limitations. It was considered essential that the camera-ready copy produced be in a format suitable for publication. Visual improvement alone was not a consideration. This report consolidates and summarizes the findings of the electronic report processing pilot projects. Text and table files originating on the existing minicomputer system were successfully transferred to the electronic report processing systems in American Standard Code for Information Interchange (ASCII) format. Graphics prepared using a proprietary graphics software package were transferred to all the electronic report processing software through the use of Computer Graphic Metafiles. Graphics from other sources were entered into the systems by scanning paper images. Comparative analysis of time needed to process text and tables by the electronic report processing systems and by conventional methods indicated that, although more time is invested in creating the original page composition for an electronically processed report, substantial time is saved in producing subsequent reports because the format can be stored and re-used by electronic means as a template. Because of the more compact page layouts, costs of printing the reports were 15% to 25% less than costs of printing the reports prepared by conventional methods. Because the largest report workload in the offices conducting water resources investigations is preparation of Water-Resources Investigations Reports, Open-File Reports, and annual State Data Reports, the pilot studies only involved these projects. (USGS)
Some system considerations in configuring a digital flight control - navigation system
NASA Technical Reports Server (NTRS)
Boone, J. H.; Flynn, G. R.
1976-01-01
A trade study was conducted with the objective of providing a technical guideline for selection of the most appropriate computer technology for the automatic flight control system of a civil subsonic jet transport. The trade study considers aspects of using either an analog, incremental-type special-purpose computer or a general-purpose computer to perform critical autopilot computation functions. It also considers aspects of integrating noncritical autopilot and autothrottle modes into the computer performing the critical autoland functions, as compared to federating the noncritical modes into either a separate computer or an R-Nav computer. The study is accomplished by establishing the relative advantages and/or risks associated with each of the computer configurations.
Shek, Tina L T; Tse, Leonard W; Nabovati, Aydin; Amon, Cristina H
2012-12-01
The technique of crossing the limbs of bifurcated modular stent grafts for endovascular aneurysm repair (EVAR) is often employed in the face of splayed aortic bifurcations to facilitate cannulation and prevent device kinking. However, little has been reported about the implications of cross-limb EVAR, especially in comparison to conventional EVAR. Previous computational fluid dynamics studies of conventional EVAR grafts have mostly utilized simplified planar stent graft geometries. We herein examined the differences between conventional and cross-limb EVAR by comparing their hemodynamic flow fields (i.e., in the "direct" and "cross" configurations, respectively). We also added a "planar" configuration, which is commonly found in the literature, to identify how well this configuration compares to out-of-plane stent graft configurations from a hemodynamic perspective. A representative patient's cross-limb stent graft geometry was segmented using computed tomography imaging in Mimics software. The cross-limb graft geometry was used to build its direct and planar counterparts in SolidWorks. Physiologic velocity and mass flow boundary conditions and blood properties were implemented for steady-state and pulsatile transient simulations in ANSYS CFX. Displacement forces, wall shear stress (WSS), and oscillatory shear index (OSI) were all comparable between the direct and cross configurations, whereas the planar geometry yielded very different predictions of hemodynamics compared to the out-of-plane stent graft configurations, particularly for displacement forces. This single-patient study suggests that the short-term hemodynamics involved in crossing the limbs is as safe as conventional EVAR. Higher helicity and improved WSS distribution of the cross-limb configuration suggest improved flow-related thrombosis resistance in the short term. However, there may be long-term fatigue implications to stent graft use in the cross configuration when compared to the direct configuration.
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicate that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
NASA Technical Reports Server (NTRS)
Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.
1991-01-01
Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modeled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E field and current mappings are the result of these calculations.
SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model, and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
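For readers unfamiliar with the neuron models such platforms execute, the sketch below is a tiny software leaky integrate-and-fire (LIF) update loop; it is not SNAVA code, and all parameters, weights, and inputs are hypothetical.

```python
# Tiny leaky integrate-and-fire (LIF) network update loop in software.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                 # number of neurons
w = rng.normal(0.0, 0.4, (n, n))        # synaptic weight matrix
v = np.zeros(n)                         # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
dt = 1.0                                # time step (ms)

for step in range(200):
    spikes = v >= v_thresh                  # neurons that fire this step
    v[spikes] = v_reset                     # reset fired neurons
    syn_input = w @ spikes.astype(float)    # weighted spike input
    ext_input = rng.random(n) < 0.05        # sparse external drive
    # Leaky integration of synaptic plus external current.
    v += dt / tau * (-v + syn_input + 0.8 * ext_input)
    if step % 50 == 0:
        print(f"step {step}: {int(spikes.sum())} spikes")
```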
1982-10-01
…spent in preparing this document. EXECUTIVE SUMMARY: The O'Hare Runway Configuration Management System (CMS) is an interactive multi-user computer … MITRE Washington's Computer Center. Currently, CMS is housed in an IBM 4341 computer with the VM/SP operating system. CMS employs IBM's Display… O'Hare, it will operate on a dedicated minicomputer which permits multi-tasking (that is, multiple users…
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.
2014-01-01
The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used for reduced computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
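The nonintrusive polynomial chaos idea can be illustrated in miniature: sample the uncertain input, evaluate the model, fit orthogonal-polynomial coefficients by least squares, and read off statistics from the coefficients. The sketch below uses one standard-normal input and a hypothetical stand-in model; the actual study propagates mixed uncertainties through CFD and a boom-propagation code, which is far beyond this illustration.

```python
# Minimal non-intrusive polynomial chaos expansion (PCE) for one
# standard-normal uncertain input and a toy quantity of interest.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical scalar quantity of interest (e.g., a loudness metric)."""
    return 85.0 + 2.0 * x + 0.5 * x**2

deg = 4
xs = rng.standard_normal(200)        # samples of the uncertain input
ys = model(xs)

# Least-squares fit on a probabilists' Hermite basis He_n, which is
# orthogonal under the standard normal weight (E[He_m He_n] = n! delta_mn).
A = He.hermevander(xs, deg)          # columns He_0 ... He_deg
coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)

mean = coeffs[0]
variance = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, deg + 1))
print(f"PCE mean = {mean:.3f}, std dev = {np.sqrt(variance):.3f}")
# Analytic check for this toy model: mean = 85.5, variance = 4 + 0.5 = 4.5.
```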
Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1993-01-01
Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.
Numerical study of hydrogen-air supersonic combustion by using elliptic and parabolized equations
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1986-01-01
The two-dimensional Navier-Stokes and species continuity equations are used to investigate supersonic chemically reacting flow problems which are related to scramjet-engine configurations. A global two-step finite-rate chemistry model is employed to represent the hydrogen-air combustion in the flow. An algebraic turbulence model is adopted for turbulent flow calculations. The explicit unsplit MacCormack finite-difference algorithm is used to develop a computer program suitable for a vector processing computer. The computer program developed is then used to integrate the system of the governing equations in time until convergence is attained. The chemistry source terms in the species continuity equations are evaluated implicitly to alleviate stiffness associated with fast chemical reactions. The problems solved by the elliptic code are re-investigated by using a set of two-dimensional parabolized Navier-Stokes and species equations. A linearized fully-coupled fully-implicit finite difference algorithm is used to develop a second computer code which solves the governing equations by marching in space rather than time, resulting in a considerable saving in computer resources. Results obtained by using the parabolized formulation are compared with the results obtained by using the fully-elliptic equations. The comparisons indicate fairly good agreement of the results of the two formulations.
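As a pointer to the time-marching scheme named above, the sketch below applies the classic MacCormack predictor-corrector to one-dimensional linear advection. It is only a schematic of the scheme's structure; the code described in the record solves the far more involved two-dimensional Navier-Stokes and species equations with finite-rate chemistry.

```python
# 1-D MacCormack predictor-corrector for linear advection u_t + c u_x = 0.
import numpy as np

nx, c, cfl = 200, 1.0, 0.8
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = cfl * dx / c

u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial Gaussian pulse

for _ in range(100):
    # Predictor: forward difference in space.
    up = u.copy()
    up[:-1] = u[:-1] - c * dt / dx * (u[1:] - u[:-1])
    # Corrector: backward difference applied to the predicted values.
    un = u.copy()
    un[1:] = 0.5 * (u[1:] + up[1:] - c * dt / dx * (up[1:] - up[:-1]))
    un[0] = un[-1] = 0.0                 # simple fixed boundaries
    u = un

print("pulse peak now near x =", x[np.argmax(u)])
```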
Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...
2017-05-17
This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with a hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators in making faster operational decisions by understanding during which time of the day flexibility will be needed, from which specific area, and in what amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation schemes, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is significantly accurate (≈ ±1.5% error) compared to the very high errors associated with forecasts of individual consumer demand.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte
This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with a hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators in making faster operational decisions by understanding during which time of the day flexibility will be needed, from which specific area, and in what amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation schemes, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is significantly accurate (≈ ±1.5% error) compared to the very high errors associated with forecasts of individual consumer demand.
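Only the clustering-and-aggregation step lends itself to a short sketch; the genetic-algorithm siting and Thevenin reduction from the paper are not reproduced here. The example below hierarchically clusters synthetic load locations and sums the demand inside each cluster to obtain a reduced set of aggregated loads; all coordinates, demands, and the number of areas are hypothetical.

```python
# Hierarchical clustering of load locations into aggregation areas.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(500, 2))   # node locations (km)
demand = rng.uniform(1.0, 5.0, size=500)         # node demand (kW)

n_areas = 8
Z = linkage(coords, method="ward")               # hierarchical clustering tree
labels = fcluster(Z, t=n_areas, criterion="maxclust")

for area in range(1, n_areas + 1):
    members = labels == area
    centroid = coords[members].mean(axis=0)
    print(f"area {area}: {members.sum():3d} nodes, "
          f"{demand[members].sum():7.1f} kW at ({centroid[0]:.2f}, {centroid[1]:.2f})")
```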
Ognibene, Ted; Bench, Graham; McCartt, Alan Daniel; Turteltaub, Kenneth; Rella, Chris W.; Tan, Sze; Hoffnagle, John A.; Crosson, Eric
2017-05-09
Optical spectrometer apparatus, systems, and methods for analysis of carbon-14 including a resonant optical cavity configured to accept a sample gas including carbon-14, an optical source configured to deliver optical radiation to the resonant optical cavity, an optical detector configured to detect optical radiation emitted from the resonant cavity and to provide a detector signal; and a processor configured to compute a carbon-14 concentration from the detector signal, wherein computing the carbon-14 concentration from the detector signal includes fitting a spectroscopic model to a measured spectrogram, wherein the spectroscopic model accounts for contributions from one or more interfering species that spectroscopically interfere with carbon-14.
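The fitting step described in this record can be illustrated conceptually: fit a simple two-line absorption model (target line plus one interfering species) to a measured spectrogram by least squares. The line positions, widths, amplitudes, and noise level below are hypothetical, and the patented instrument fits a far more detailed spectroscopic model.

```python
# Conceptual least-squares fit of a two-line spectrum with an interferer.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, center, width, amplitude):
    return amplitude * width**2 / ((nu - center) ** 2 + width**2)

def spectrum(nu, a14, a_int, baseline):
    """Absorption model: target line at 0.00, interfering line at 0.12 (relative units)."""
    return baseline + lorentzian(nu, 0.00, 0.02, a14) + lorentzian(nu, 0.12, 0.03, a_int)

rng = np.random.default_rng(1)
nu = np.linspace(-0.2, 0.3, 400)                       # relative wavenumber axis
truth = spectrum(nu, 1.5e-3, 4.0e-3, 1.0e-4)
measured = truth + rng.normal(0.0, 5.0e-5, nu.size)    # add measurement noise

popt, pcov = curve_fit(spectrum, nu, measured, p0=[1e-3, 1e-3, 0.0])
a14_fit, a_int_fit, base_fit = popt
print(f"fitted target-line amplitude: {a14_fit:.2e} "
      f"(interferer: {a_int_fit:.2e}, baseline: {base_fit:.2e})")
```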
WRF4SG: A Scientific Gateway for climate experiment workflows
NASA Astrophysics Data System (ADS)
Blanco, Carlos; Cofino, Antonio S.; Fernandez-Quiruelas, Valvanuz
2013-04-01
The Weather Research and Forecasting model (WRF) is a community-driven and public domain model widely used by the weather and climate communities. In contrast to other application-oriented models, WRF provides a flexible and computationally efficient framework for solving a variety of problems at different time scales, from weather forecasting to climate change projection. Furthermore, WRF is also widely used as a research tool in modeling physics, dynamics, and data assimilation by the research community. Climate experiment workflows based on WRF are nowadays among the most cutting-edge applications. These workflows are complex because of both the large storage requirements and the huge number of simulations executed. In order to manage this, we have developed a scientific gateway (SG) called WRF for Scientific Gateway (WRF4SG), based on the WS-PGRADE/gUSE and WRF4G frameworks, to help meet WRF users' needs (see [1] and [2]). WRF4SG provides services for different use cases that describe the interactions between WRF users and the WRF4SG interface and show how to run a climate experiment. As WS-PGRADE/gUSE uses portlets (see [1]) to interact with users, its portlets support these use cases. A typical experiment carried out by a WRF user consists of a high-resolution regional re-forecast. Such re-forecasts are common experiments used to provide input data for wind power energy and natural hazard applications (wind and precipitation fields). In the use cases below, the user is able to access different resources, such as the Grid, because WRF needs a large amount of computing resources to generate useful simulations: * Resource configuration and user authentication: the first step is to authenticate against the users' Grid resources via virtual organizations. After login, the user selects which virtual organization is going to be used by the experiment. * Data assimilation: in order to assimilate the data sources, the user selects them by browsing through the LFC Portlet. * Experiment workflow design: in order to configure the experiment, the user defines the type of experiment (e.g. re-forecast) and the attributes to simulate. In this case the main attributes are the field of interest (wind, precipitation, ...), the start and end dates of the simulation, and the requirements of the experiment. * Workflow monitoring: the user receives notification messages based on events, and the gateway displays the progress of the experiment. * Data storage: as in the data assimilation case, the user browses and views the output simulation data using the LFC Portlet. The objectives of WRF4SG can be described by two goals. The first is to show how WRF4SG facilitates executing, monitoring, and managing climate workflows based on the WRF4G framework. The second is to help WRF users execute their experiment workflows concurrently on heterogeneous computing resources such as HPC and Grid. [1] Kacsuk, P.: P-GRADE portal family for grid infrastructures. Concurrency and Computation: Practice and Experience. 23, 235-245 (2011). [2] http://www.meteo.unican.es/software/wrf4g
Educational Technology: Best Practices from America's Schools.
ERIC Educational Resources Information Center
Bozeman, William C.; Baumbach, Donna J.
This book begins with an overview of computer technology concepts, including computer system configurations, computer communications, and software. Instructional computer applications are then discussed; topics include computer-assisted instruction, computer-managed instruction, computer-enhanced instruction, LOGO, authoring programs, presentation…
ASCR/HEP Exascale Requirements Review Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; Gerber, Richard
2016-03-30
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June, 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude -- and in some cases greater -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
Touch-screen tablet user configurations and case-supported tilt affect head and neck flexion angles.
Young, Justin G; Trudeau, Matthieu; Odell, Dan; Marinelli, Kim; Dennerlein, Jack T
2012-01-01
The aim of this study was to determine how head and neck postures vary when using two media tablet (slate) computers in four common user configurations. Fifteen experienced media tablet users completed a set of simulated tasks with two media tablets in four typical user configurations. The four configurations were: on the lap and held with the user's hands, on the lap and in a case, on a table and in a case, and on a table and in a case set at a high angle for watching movies. An infra-red LED marker based motion analysis system measured head/neck postures. Head and neck flexion significantly varied across the four configurations and across the two tablets tested. Head and neck flexion angles during tablet use were greater, in general, than angles previously reported for desktop and notebook computing. Postural differences between tablets were driven by case designs, which provided significantly different tilt angles, while postural differences between configurations were driven by gaze and viewing angles. Head and neck posture during tablet computing can be improved by placing the tablet higher to avoid low gaze angles (i.e. on a table rather than on the lap) and through the use of a case that provides optimal viewing angles.
ERIC Educational Resources Information Center
Towndrow, Phillip A.; Fareed, Wan
2015-01-01
This article illustrates how findings from a study of teachers' and students' uses of laptop computers in a secondary school in Singapore informed the development of an Innovation Configuration (IC) Map--a tool for identifying and describing alternative ways of implementing innovations based on teachers' unique feelings, preoccupations, thoughts…
ERIC Educational Resources Information Center
Conkright, Thomas D.; Joliat, Judy
1996-01-01
Discusses the challenges, solutions, and compromises involved in creating computer-delivered training courseware for Apollo Travel Services, a company whose 50,000 agents must access a mainframe from many different computing configurations. Initial difficulties came in trying to manage random access memory and quicken response time, but the future…
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
NASA Technical Reports Server (NTRS)
Ghaffari, Farhad
1999-01-01
Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration which includes the wing, fuselage, empennage, diverters, and flow through nacelles at M (sub infinity) = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results, derived from the latter assumptions, are thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly, for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available in predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predict radiated sound pressure levels for full scale configurations at relevant frequencies. In addition, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.
Computational analysis of semi-span model test techniques
NASA Technical Reports Server (NTRS)
Milholen, William E., II; Chokani, Ndaona
1996-01-01
A computational investigation was conducted to support the development of a semi-span model test capability in the NASA LaRC's National Transonic Facility. This capability is required for the testing of high-lift systems at flight Reynolds numbers. A three-dimensional Navier-Stokes solver was used to compute the low-speed flow over both a full-span configuration and a semi-span configuration. The computational results were found to be in good agreement with the experimental data. The computational results indicate that the stand-off height has a strong influence on the flow over a semi-span model. The semi-span model adequately replicates the aerodynamic characteristics of the full-span configuration when a small stand-off height, approximately twice the tunnel empty sidewall boundary layer displacement thickness, is used. Several active sidewall boundary layer control techniques were examined including: upstream blowing, local jet blowing, and sidewall suction. Both upstream tangential blowing, and sidewall suction were found to minimize the separation of the sidewall boundary layer ahead of the semi-span model. The required mass flow rates are found to be practicable for testing in the NTF. For the configuration examined, the active sidewall boundary layer control techniques were found to be necessary only near the maximum lift conditions.
Kim, Ki-Wook; Han, Youn-Hee; Min, Sung-Gi
2017-09-21
Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource-constrained IoT devices using IEEE 802.11ah has not yet been proposed. We therefore propose a new AKM mechanism for an IoT access network, which is based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing IoT devices to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces the computation costs, network costs, and memory usage of the resource-constrained IoT device compared to the existing IEEE 802.11 key management with the IEEE 802.1X authentication mechanism.
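To make the delegation idea concrete, here is a purely illustrative Python sketch (message formats, key sizes, and class names are hypothetical and not taken from the proposed AKM mechanism): a constrained device hands the authentication and key-derivation work to an agent, which mocks the IEEE 802.1X/EAP exchange and returns a per-device session key.

```python
# Illustrative sketch only -- not the protocol from the paper. It shows the idea of
# a constrained IoT device delegating key management to a more powerful agent:
# the agent runs a (mocked) IEEE 802.1X/EAP exchange and hands the device a
# session key derived with HMAC-SHA256.
import hmac, hashlib, os

def derive_key(master: bytes, label: bytes) -> bytes:
    """Single-step HMAC-SHA256 key derivation (stand-in for a full KDF)."""
    return hmac.new(master, label, hashlib.sha256).digest()

class Agent:
    def authenticate_device(self, device_id: str) -> bytes:
        # In the real mechanism the agent would complete an EAP method with the
        # authentication server here; we simply mock the resulting master key.
        pmk = os.urandom(32)
        # Bind the session key to the device identity.
        return derive_key(pmk, b"session:" + device_id.encode())

class ConstrainedDevice:
    def __init__(self, device_id: str, agent: Agent):
        self.device_id = device_id
        self.agent = agent
        self.session_key = None

    def join_network(self):
        # The device offloads the expensive authentication to the agent and only
        # stores the resulting session key (assumed to arrive over a secure channel).
        self.session_key = self.agent.authenticate_device(self.device_id)

dev = ConstrainedDevice("sensor-42", Agent())
dev.join_network()
print("session key established:", dev.session_key.hex()[:16], "...")
```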
Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Wu, Di; Kalsi, Karanjit
This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400
Kim, Jeong Chul; Cruz, Dinna; Garzotto, Francesco; Kaushik, Manish; Teixeria, Catarina; Baldwin, Marie; Baldwin, Ian; Nalesso, Federico; Kim, Ji Hyun; Kang, Eungtaek; Kim, Hee Chan; Ronco, Claudio
2013-01-01
Continuous renal replacement therapy (CRRT) is commonly used for critically ill patients with acute kidney injury. During treatment, a slow dialysate flow rate can be applied to enhance diffusive solute removal. However, because there is no established rationale for the dialysate flow configuration (countercurrent or concurrent to blood flow), in clinical practice the connection settings of a hemodiafilter are chosen according to nurse preference or at random. In this study, we investigated the effects of flow configuration in a hemodiafilter during continuous venovenous hemodialysis on solute removal and fluid transport using computational fluid dynamic modeling. We solved the momentum equation coupled with solute transport to predict quantitative diffusion and convection phenomena in a simplified hemodiafilter model. Computational modeling results showed superior solute removal (clearance of urea: 67.8 vs. 45.1 ml/min) and convection (filtration volume: 29.0 vs. 25.7 ml/min) performance for the countercurrent flow configuration. The countercurrent flow configuration enhances convection and diffusion compared to the concurrent flow configuration by increasing the filtration volume and equilibrium concentration in the proximal part of the hemodiafilter and backfiltration of pure dialysate in the distal part. In clinical practice, the countercurrent dialysate flow configuration of a hemodiafilter could increase solute removal in CRRT. Nevertheless, while this configuration may become mandatory for high-efficiency treatments, the impact of the differences in solute removal observed in slow continuous therapies may be less important. Under these circumstances, if continuous therapies are prescribed, some of the advantages of the concurrent configuration in terms of simpler circuit layout and simpler machine design may outweigh the advantages in terms of solute clearance. Different dialysate flow configurations influence solute clearance and change the major solute removal mechanisms in the proximal and distal parts of a hemodiafilter. The advantages of each configuration should be balanced against the overall performance of the treatment and its simplicity in terms of treatment delivery and circuit handling procedures. Copyright © 2013 S. Karger AG, Basel.
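For reference, the urea clearance figures quoted above follow the standard (textbook) definition of diffusive solute clearance from blood-side concentrations; the relation below is not taken from the paper itself:

```latex
% Standard definition of diffusive solute clearance (for reference only).
% Q_b: blood flow rate; C_bi, C_bo: solute concentration at blood inlet/outlet.
\[
  K \;=\; Q_b \,\frac{C_{bi} - C_{bo}}{C_{bi}}
\]
```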
NASA Technical Reports Server (NTRS)
Guruswamy, Guru
2004-01-01
A procedure to accurately generate AIC using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good agreement between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper, the AIC of the full wing-body configuration will be computed, and the scalability of the procedure on a supercomputer will be demonstrated.
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
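The radial equilibrium condition referred to above is, in its simplest form, the familiar balance between the radial pressure gradient and the centripetal acceleration of the swirling flow (the report's formulation may include additional streamline-curvature and blade-force terms):

```latex
% Simple radial equilibrium relation commonly used in axial-flow blade-element methods.
% p: static pressure; rho: density; V_theta: tangential velocity; r: radius.
\[
  \frac{1}{\rho}\,\frac{dp}{dr} \;=\; \frac{V_\theta^{2}}{r}
\]
```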
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph; Kopasakis, George
2016-01-01
An overview of recent applications of the FUN3D CFD code to computational aeroelastic, sonic boom, and aeropropulsoservoelasticity (APSE) analyses of a low-boom supersonic configuration is presented. The overview includes details of the computational models developed, including multiple unstructured CFD grids suitable for aeroelastic and sonic boom analyses. In addition, aeroelastic Reduced-Order Models (ROMs) are generated and used to rapidly compute the aeroelastic response and flutter boundaries at multiple flight conditions.
NASA Technical Reports Server (NTRS)
Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert
2004-01-01
A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.
Controlling user access to electronic resources without password
Smith, Fred Hewitt
2015-06-16
Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. The received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted in response to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
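A minimal, purely hypothetical Python sketch of the comparison logic described in the abstract: environmental information is modeled as a set of nearby network identifiers, and access is granted when the user-proximal set is sufficiently similar to the resource-proximal set and the biometric check passes. Names and thresholds are illustrative only.

```python
# Hypothetical sketch of "environmental similarity" access control; not the
# patented method. Environments are modeled as sets of observed network IDs.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

RESOURCE_ENV = {"lab-wifi-1", "lab-wifi-2", "printer-ap", "bldg-guest"}
THRESHOLD = 0.5   # assumed "sufficiently similar" cut-off

def access_granted(user_env: set, biometric_ok: bool = True) -> bool:
    # Both the environmental comparison and the biometric check must pass.
    return biometric_ok and jaccard(user_env, RESOURCE_ENV) >= THRESHOLD

print(access_granted({"lab-wifi-1", "lab-wifi-2", "printer-ap"}))   # True
print(access_granted({"cafe-wifi", "phone-hotspot"}))               # False
```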
NASA Technical Reports Server (NTRS)
Mcardle, J. G.; Homyak, L.; Moore, A. S.
1979-01-01
The performance of a YF-102 turbofan engine was measured in an outdoor test stand with a bellmouth inlet and seven exhaust-system configurations. The configurations consisted of three separate-flow systems of various fan and core nozzle sizes and four confluent-flow systems of various nozzle sizes and shapes. A computer program provided good estimates of the engine performance and of thrust at maximum rating for each exhaust configuration. The internal performance of two different-shaped core nozzles for confluent-flow configurations was determined to be satisfactory. Pressure and temperature surveys were made with a traversing probe in the exhaust-nozzle flow for some confluent-flow configurations. The survey data at the mixing plane, plus the measured flow rates, were used to calculate the static-pressure variation along the exhaust nozzle length. The computed pressures compared well with experimental wall static-pressure data. External-flow surveys were made, for some confluent-flow configurations, with a large fixed rake at various locations in the exhaust plume.
Laboratory Computing Resource Center
Experimental Study of Hydraulic Systems Transient Response Characteristics
1978-12-01
[Extraction fragment of the report's table of contents and list of figures; page numbers omitted: Effects of Filter; Effects of Quincke-Tube; Error Estimation; Conclusions; System with Quincke-Tube Configuration; Schematic of Pump System; Example of Computer ...; Filter Configuration; Transient Response, Reservoir System, Quincke-Tube (Short) Configuration, 505 PSIA.]
Analysis and Preliminary Design of an Advanced Technology Transport Flight Control System
NASA Technical Reports Server (NTRS)
Frazzini, R.; Vaughn, D.
1975-01-01
The analysis and preliminary design of an advanced technology transport aircraft flight control system using avionics and flight control concepts appropriate to the 1980-1985 time period are discussed. Specifically, the techniques and requirements of the flight control system were established, a number of candidate configurations were defined, and an evaluation of these configurations was performed to establish a recommended approach. Candidate configurations based on redundant integration of various sensor types, computational methods, servo actuator arrangements, and data-transfer techniques were defined down to the functional-module and piece-part level. Life-cycle costs for the flight control configurations, as determined in an operational environment model for 200 aircraft over a 15-year service life, were the basis of the optimum-configuration selection tradeoff. The recommended system concept is a quad digital computer configuration utilizing a small microprocessor for input/output control, a hexad skewed set of conventional sensors for body rate and body acceleration, and triple integrated actuators.
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
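A minimal Python sketch of the finite-horizon dynamic program described above (configurations, workload costs, reconfiguration cost, and position limits are hypothetical; the NASA implementation and the rollouts approximation are not reproduced here):

```python
# Minimal sketch of the sector configuration problem: at each time step choose a
# configuration to minimize workload cost plus reconfiguration cost, subject to a
# limit on control positions. All numbers below are hypothetical.
T = 6                                   # planning horizon (time steps)
configs = ["A", "B", "C"]               # candidate airspace configurations
positions = {"A": 2, "B": 3, "C": 4}    # control positions each config uses
max_positions = [3, 3, 4, 4, 3, 3]      # staffing limit per time step
workload = {                            # workload cost per config per step
    "A": [5, 6, 9, 9, 6, 5],
    "B": [3, 4, 6, 6, 4, 3],
    "C": [2, 2, 3, 3, 2, 2],
}
recfg_cost = 4.0                        # cost of switching configurations

def solve():
    # Backward dynamic programming over (time, previous configuration).
    INF = float("inf")
    V = {c: 0.0 for c in configs}       # value-to-go after the final step
    policy = [dict() for _ in range(T)]
    for t in reversed(range(T)):
        newV = {}
        for prev in configs + [None]:   # None = no configuration chosen yet
            best, best_c = INF, None
            for c in configs:
                if positions[c] > max_positions[t]:
                    continue            # infeasible under the staffing limit
                step = workload[c][t] + (recfg_cost if prev not in (None, c) else 0.0)
                if step + V[c] < best:
                    best, best_c = step + V[c], c
            newV[prev], policy[t][prev] = best, best_c
        V = {c: newV[c] for c in configs}
    return policy, newV[None]

policy, total = solve()
plan, prev = [], None
for t in range(T):
    prev = policy[t][prev]              # follow the stored optimal decisions forward
    plan.append(prev)
print("configuration plan:", plan, "total cost:", total)
```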
Numerical Simulation of a High-Lift Configuration with Embedded Fluidic Actuators
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Casalino, Damiano; Lin, John C.; Appelbaum, Jason
2014-01-01
Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as the PowerFLOW(Registered TradeMark) code is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no-control case. Effects of varying yaw and rudder deflection angles are also presented. In addition, computations have been performed at a higher Reynolds number to assess the performance of fluidic actuators at flight conditions.
Waste receiving and processing plant control system; system design description
DOE Office of Scientific and Technical Information (OSTI.GOV)
LANE, M.P.
1999-02-24
The Plant Control System (PCS) is a heterogeneous computer system composed of numerous sub-systems. The PCS represents every major computer system that is used to support operation of the Waste Receiving and Processing (WRAP) facility. This document, the System Design Description (PCS SDD), includes several chapters and appendices. Each chapter is devoted to a separate PCS sub-system. Typically, each chapter includes an overview description of the system, a list of associated documents related to operation of that system, and a detailed description of relevant system features. Each appendix provides configuration information for selected PCS sub-systems. The appendices are designed as separate sections to assist in maintaining this document, given the frequent changes in system configurations. This document is intended to serve as the primary reference for configuration of PCS computer systems. The use of this document is further described in the WRAP System Configuration Management Plan, WMH-350, Section 4.1.
Using NetMeeting for remote configuration of the Otto Bock C-Leg: technical considerations.
Lemaire, E D; Fawcett, J A
2002-08-01
Telehealth has the potential to be a valuable tool for technical and clinical support of computer controlled prosthetic devices. This pilot study examined the use of Internet-based, desktop video conferencing for remote configuration of the Otto Bock C-Leg. Laboratory tests involved connecting two computers running Microsoft NetMeeting over a local area network (IP protocol). Over 56 Kbs(-1), DSL/Cable, and 10 Mbs(-1) LAN speeds, a prosthetist remotely configured a user's C-Leg by using Application Sharing, Live Video, and Live Audio. A similar test between sites in Ottawa and Toronto, Canada was limited by the notebook computer's 28 Kbs(-1) modem. At the 28 Kbs(-1) Internet-connection speed, NetMeeting's application sharing feature was not able to update the remote Sliders window fast enough to display peak toe loads and peak knee angles. These results support the use of NetMeeting as an accessible and cost-effective tool for remote C-Leg configuration, provided that sufficient Internet data transfer speed is available.
Considerations for Software Defined Networking (SDN): Approaches and use cases
NASA Astrophysics Data System (ADS)
Bakshi, K.
Software Defined Networking (SDN) is an evolutionary approach to network design and functionality based on the ability to programmatically modify the behavior of network devices. SDN uses user-customizable and configurable software, independent of hardware, to enable networked systems to expand data flow control. SDN is in large part about understanding and managing a network as a unified abstraction. It makes networks more flexible, dynamic, and cost-efficient, while greatly simplifying operational complexity, and it provides several benefits including network and service customizability, configurability, improved operations, and increased performance. There are several approaches to SDN and its practical implementation. Among them, two have risen to prominence, with differences in pedigree and implementation. This paper's main focus is to define, review, and evaluate salient approaches and use cases of the OpenFlow and Virtual Network Overlay approaches to SDN. OpenFlow is a communication protocol that gives access to the forwarding plane of a network's switches and routers. The Virtual Network Overlay relies on a completely virtualized network infrastructure and services to abstract the underlying physical network, which allows the overlay to be mobile to other physical networks. This is an important requirement for cloud computing, where applications and associated network services are migrated to cloud service providers and remote data centers on the fly as resource demands dictate. The paper discusses how and where SDN can be applied and implemented, including research and academia, virtual multi-tenant data center, and cloud computing applications. Specific attention is given to the cloud computing use case, where automated provisioning and a programmable overlay for scalable multi-tenancy are leveraged via the SDN approach.
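The forwarding-plane programmability that OpenFlow exposes can be pictured as a prioritized match-action table. The sketch below is a conceptual Python model of that abstraction, not tied to any real controller API; field names and actions are illustrative.

```python
# Conceptual model of an OpenFlow-style match-action table; purely illustrative.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict            # e.g. {"in_port": 1, "ip_dst": "10.0.0.2"}
    actions: list          # e.g. ["output:2"] or ["drop"]
    priority: int = 0

@dataclass
class Switch:
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule):
        # A controller programs the forwarding plane by pushing rules like this.
        self.table.append(rule)
        self.table.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict) -> list:
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]     # table miss: ask the controller

sw = Switch()
sw.install(FlowRule(match={"ip_dst": "10.0.0.2"}, actions=["output:2"], priority=10))
print(sw.forward({"in_port": 1, "ip_dst": "10.0.0.2"}))   # ['output:2']
print(sw.forward({"in_port": 1, "ip_dst": "10.0.0.9"}))   # ['send_to_controller']
```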
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Cliff, Susan E.; Wilcox, Floyd; Nemec, Marian; Bangert, Linda; Aftosmis, Michael J.; Parlette, Edward
2011-01-01
Accurate analysis of sonic boom pressure signatures using computational fluid dynamics techniques remains quite challenging. Although CFD shows accurate predictions of flow around complex configurations, generating grids that can resolve the sonic boom signature far away from the body is a challenge. The test case chosen for this study corresponds to an experimental wind-tunnel test that was conducted to measure the sonic boom pressure signature of a low boom configuration designed by Gulfstream Aerospace Corporation. Two widely used NASA codes, USM3D and AERO, are examined for their ability to accurately capture sonic boom signature. Numerical simulations are conducted for a free-stream Mach number of 1.6, angle of attack of 0.3 and Reynolds number of 3.85x10(exp 6) based on model reference length. Flow around the low boom configuration in free air and inside the Langley Unitary plan wind tunnel are computed. Results from the numerical simulations are compared with wind tunnel data. The effects of viscous and turbulence modeling along with tunnel walls on the computed sonic boom signature are presented and discussed.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Radenski, Atanas; Follen, Gregory J. (Technical Monitor)
2001-01-01
The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever changing pool of lower-end internet nodes.
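The divide-and-conquer pattern the project targets can be sketched in a few lines of Python, with a local process pool standing in for the master task-pool server and the satellite computational servers (the problem, threshold, and pool choice are illustrative assumptions):

```python
# Minimal sketch of generic divide-and-conquer over a pool of workers; a local
# process pool stands in for the internet-wide task pool described above.
from concurrent.futures import ProcessPoolExecutor

THRESHOLD = 10_000

def conquer(lo: int, hi: int) -> int:
    # Base case, solved directly on one worker: sum of squares over [lo, hi).
    return sum(i * i for i in range(lo, hi))

def divide(lo: int, hi: int):
    # Recursively split until subproblems are small enough to farm out.
    if hi - lo <= THRESHOLD:
        yield (lo, hi)
    else:
        mid = (lo + hi) // 2
        yield from divide(lo, mid)
        yield from divide(mid, hi)

if __name__ == "__main__":
    tasks = list(divide(0, 1_000_000))
    los, his = zip(*tasks)
    with ProcessPoolExecutor() as pool:    # stands in for the pool of volunteer nodes
        partials = pool.map(conquer, los, his)
    print("combined result:", sum(partials))
```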
Computational fluid dynamics - The coming revolution
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1982-01-01
The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.
NASA Technical Reports Server (NTRS)
Marconi, F.; Salas, M.; Yaeger, L.
1976-01-01
A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
Baker, Nancy A; Moehling, Krissy
2013-01-01
Awkward postures during computer use are assumed to be related to the fit between the worker and the workstation configuration, with greater mismatches leading to higher levels of musculoskeletal symptoms (MSS). The objective of this study was to examine whether chronic MSS of the neck/shoulder, back, and wrist/hands were associated with 1) discrepancies between workstation setups and worker anthropometrics and 2) workers' postures. A secondary analysis was performed on data collected from a randomized controlled cross-over design trial (N=74). Subjects' workstation configurations, baseline levels of MSS, working postures, and anthropometrics were measured. Correlations were computed to determine the association between postures and discrepancies between worker anthropometrics and workstation configuration. Associations were examined between postures, workstation discrepancies, and worker MSS. There were only 3 significant associations between worker posture and MSS, and 3 significant associations between discrepancies in worker/workstation set-up and MSS. The relationship between chronic MSS and a worker's computer workstation configuration is multifactorial. While postures and the fit between the worker and workstation may be associated with MSS, other variables need to be explored to better understand the phenomenon.
Review of Enabling Technologies to Facilitate Secure Compute Customization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. The OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provides the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs in the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance.
In Section 4.1 we describe our tests with LXC as a non-root user and leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file-systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations that were performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we performed tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the Native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time & MFlops when running in hypervisor-based environments (VMs) as compared to near native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The Native and Docker based tests achieved >= ~9Gbits/sec, while the KVM configuration only achieved 2.5Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: - Section 1 introduces the report and clarifies the scope of the proj...
CFD Predictions for Transonic Performance of the ERA Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Luckring, James M.; McMillin, S. Naomi; Flamm, Jeffrey D.; Roman, Dino
2016-01-01
A computational study was performed for a Hybrid Wing Body configuration that was focused at transonic cruise performance conditions. In the absence of experimental data, two fully independent computational fluid dynamics analyses were conducted to add confidence to the estimated transonic performance predictions. The primary analysis was performed by Boeing with the structured overset-mesh code OVERFLOW. The secondary analysis was performed by NASA Langley Research Center with the unstructured-mesh code USM3D. Both analyses were performed at full-scale flight conditions and included three configurations customary to drag buildup and interference analysis: a powered complete configuration, the configuration with the nacelle/pylon removed, and the powered nacelle in isolation. The results in this paper are focused primarily on transonic performance up to cruise and through drag rise. Comparisons between the CFD results were very good despite some minor geometric differences in the two analyses.
Space shuttle configuration accounting functional design specification
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and proved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.
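As a purely hypothetical illustration of the kind of data element and status tracking such a configuration accounting system requires (field names and statuses are not taken from the specification):

```python
# Hypothetical sketch of a tracked engineering-change record; illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    EVALUATED = "evaluated"
    APPROVED = "approved"
    DISPOSITIONED = "dispositioned"

@dataclass
class EngineeringChange:
    change_id: str                       # e.g. "ECP-0001" (hypothetical)
    program_level: str                   # "Level II" or "Level III"
    affected_baseline_docs: list[str]    # baseline documents touched by this change
    status: Status = Status.PROPOSED
    history: list[str] = field(default_factory=list)

    def advance(self, new_status: Status):
        # Keep an auditable trail of status transitions for reporting.
        self.history.append(f"{self.status.value} -> {new_status.value}")
        self.status = new_status

ec = EngineeringChange("ECP-0001", "Level II", ["BASELINE-DOC-001"])
ec.advance(Status.EVALUATED)
ec.advance(Status.APPROVED)
print(ec.status, ec.history)
```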
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of the different experiments. However, this method cannot adapt well to volatile computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. The system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds together with a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization of the computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
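A minimal sketch of the dual-threshold idea is given below; the threshold values, quota handling, and scaling step are illustrative assumptions and do not reflect the actual IHEPCloud/HTCondor implementation.

```python
# Minimal sketch of dual-threshold elastic scaling for one job queue.
# Thresholds, quota, and the scaling step are illustrative assumptions only.

def scaling_decision(idle_jobs: int, idle_nodes: int, active_nodes: int,
                     quota: int, expand_threshold: int = 10,
                     shrink_threshold: int = 2, step: int = 5) -> int:
    """Return the number of virtual nodes to add (positive) or remove (negative)."""
    if idle_jobs >= expand_threshold and active_nodes < quota:
        return min(step, quota - active_nodes)   # expand, but stay within the quota
    if idle_jobs <= shrink_threshold and idle_nodes > 0:
        return -min(step, idle_nodes)            # release idle virtual machines
    return 0

print(scaling_decision(idle_jobs=25, idle_nodes=0, active_nodes=40, quota=60))  # -> 5
print(scaling_decision(idle_jobs=0, idle_nodes=8, active_nodes=40, quota=60))   # -> -5
```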
FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment
NASA Astrophysics Data System (ADS)
Loewe, P.; Klump, J.; Thaler, J.
2012-12-01
High performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to the requirements needed for access to "top shelf" national cluster facilities. Until recently, therefore, this kind of geocomputation research was effectively barred by a lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free and Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. In practice, however, applications are limited to the resources assigned to their respective queue. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing, and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU-days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.
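As an illustration of how a scripted geocomputation task might be handed to an LSF processing queue, the sketch below wraps a bsub submission in Python; the queue name, core count, and script path are hypothetical and do not describe the GFZ setup.

```python
# Minimal sketch: dispatching a scripted GRASS GIS task to an LSF queue via bsub.
# Queue name, core count, and script path are illustrative assumptions.
import subprocess

def submit_grass_job(script: str, queue: str = "geocomp", cores: int = 4,
                     logfile: str = "grass_job.%J.log") -> None:
    """Submit a GRASS batch script to LSF with standard bsub options."""
    cmd = ["bsub", "-q", queue, "-n", str(cores), "-o", logfile, script]
    subprocess.run(cmd, check=True)

# Example (hypothetical script path):
# submit_grass_job("/home/user/jobs/run_flow_accumulation.sh")
```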
Active Flow Control in an Aggressive Transonic Diffuser
NASA Astrophysics Data System (ADS)
Skinner, Ryan W.; Jansen, Kenneth E.
2017-11-01
A diffuser exchanges upstream kinetic energy for higher downstream static pressure by increasing duct cross-sectional area. The resulting stream-wise and span-wise pressure gradients promote extensive separation in many diffuser configurations. The present computational work evaluates active flow control strategies for separation control in an asymmetric, aggressive diffuser of rectangular cross-section at inlet Mach 0.7 and Re 2.19M. Corner suction is used to suppress secondary flows, and steady/unsteady tangential blowing controls separation on both the single ramped face and the opposite flat face. We explore results from both Spalart-Allmaras RANS and DDES turbulence modeling frameworks; the former is found to miss key physics of the flow control mechanisms. Simulated baseline, steady, and unsteady blowing performance is validated against experimental data. Funding was provided by Northrop Grumman Corporation, and this research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, including recursive least squares (RLS) and reinforcement learning (RLBMI) based algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis
Frazier, Zachary; Xu, Min; Alber, Frank
2017-01-01
Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in close to native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample, due to variations in the structure and composition of a complex in its in situ form or because the particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Linczuk, Maciej
2016-09-01
The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython based high-level synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software because they omit the fetch-decode-execute operations of general purpose processors and allow more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms computed with FPGAs in pure HDL is difficult and time consuming, and implementation time can be greatly reduced with a high-level synthesis compiler. This article describes the design methodologies and tools, the implementation, and the first results of the VHDL backend created for the RPython compiler.
On the Circulation Manifold for Two Adjacent Lifting Sections
NASA Technical Reports Server (NTRS)
Zannetti, Luca; Iollo, Angelo
1998-01-01
The circulation functional relative to two adjacent lifting sections is studied for two cases. In the first case we consider two adjacent circles. The circulation is computed as a function of the displacement of the secondary circle along the axis joining the two centers and of the angle of attack of the secondary circle. The gradient of this functional is computed by differentiating a set of elliptic functions with respect both to their argument and to their period. In the second case we consider a wing-flap configuration. The circulation is computed via implicit mappings, whose differentials with respect to variations of the geometrical configuration in the physical space are found by divided differences. Configurations giving rise to local maxima and minima in the circulation manifold are presented.
Navier-Stokes Analysis of a High Wing Transport High-Lift Configuration with Externally Blown Flaps
NASA Technical Reports Server (NTRS)
Slotnick, Jeffrey P.; An, Michael Y.; Mysko, Stephen J.; Yeh, David T.; Rogers, Stuart E.; Roth, Karlin; Baker, M. David; Nash, S.
2000-01-01
Insights and lessons learned from the aerodynamic analysis of the High Wing Transport (HWT) high-lift configuration are presented. Three-dimensional Navier-Stokes CFD simulations using the OVERFLOW flow solver are compared with high Reynolds number test data obtained in the NASA Ames 12-Foot Pressure Wind Tunnel (PWT) facility. Computational analysis of the baseline HWT high-lift configuration with and without Externally Blown Flap (EBF) jet effects is highlighted. Several additional aerodynamic investigations, such as nacelle strake effectiveness and wake vortex studies, are presented. Technical capabilities and shortcomings of the computational method are discussed and summarized.
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines put in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).
Granovsky, Alexander A
2015-12-21
We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
NASA Technical Reports Server (NTRS)
Magnus, A. E.; Epton, M. A.
1981-01-01
Panel aerodynamics (PAN AIR) is a system of computer programs designed to analyze subsonic and supersonic inviscid flows about arbitrary configurations. A panel method is a program which solves a linear partial differential equation by approximating the configuration surface by a set of panels. An overview of the theory of potential flow in general and PAN AIR in particular is given along with detailed mathematical formulations. Fluid dynamics, the Navier-Stokes equations, and the theory of panel methods are also discussed.
NASA Technical Reports Server (NTRS)
Jutte, Christine; Stanford, Bret K.
2014-01-01
This paper provides a brief overview of the state-of-the-art for aeroelastic tailoring of subsonic transport aircraft and offers additional resources on related research efforts. Emphasis is placed on aircraft having straight or aft swept wings. The literature covers computational synthesis tools developed for aeroelastic tailoring and numerous design studies focused on discovering new methods for passive aeroelastic control. Several new structural and material technologies are presented as potential enablers of aeroelastic tailoring, including selectively reinforced materials, functionally graded materials, fiber tow steered composite laminates, and various nonconventional structural designs. In addition, smart materials and structures whose properties or configurations change in response to external stimuli are presented as potential active approaches to aeroelastic tailoring.
Development of a 32-bit UNIX-based ELAS workstation
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Pearson, Ronnie W.; Cheng, Thomas D.
1987-01-01
A mini/microcomputer UNIX-based image analysis workstation has been designed and is being implemented to use the Earth Resources Laboratory Applications Software (ELAS). The hardware system includes a MASSCOMP 5600 computer, which is a 32-bit UNIX-based system (compatible with the AT&T System V and Berkeley 4.2 BSD operating systems), a floating point accelerator, a 474-megabyte fixed disk, a tri-density magnetic tape drive, and an 1152 by 910 by 12-plane color graphics/image interface. The software conversion includes reconfiguring the ELAS driver Master Task, then recompiling and testing the converted application modules. This hardware and software configuration is a self-sufficient image analysis workstation which can be used as a stand-alone system or networked with other compatible workstations.
NASA Astrophysics Data System (ADS)
Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan
2016-02-01
With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are of great significance for increasing the efficiency of the network. In this paper, an adaptive traffic scheduling policy based on priority and time windows is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm that we propose. A fairness index is introduced to improve the capability of the spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and gives them real-time and efficient responses. The node spectrum configuration scheme improves frequency resource utilization and brings the efficiency of the network into full play.
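For reference, the routing step mentioned above relies on the classic Floyd(-Warshall) all-pairs shortest-path computation; a minimal sketch follows, with a small hypothetical topology in place of the paper's network.

```python
# Minimal sketch of the Floyd(-Warshall) all-pairs shortest-path step used for
# routing. The 4-node weighted topology below is an illustrative assumption.
INF = float("inf")

def floyd_shortest_paths(dist):
    """Return the all-pairs shortest-path distance matrix for a weight matrix."""
    n = len(dist)
    d = [row[:] for row in dist]            # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

topology = [[0, 2, INF, 7],
            [2, 0, 3, INF],
            [INF, 3, 0, 1],
            [7, INF, 1, 0]]
print(floyd_shortest_paths(topology))        # shortest distance between every node pair
```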
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
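The scaling statements above can be summarized with the usual speedup and parallel-efficiency formulas; the sketch below uses placeholder wall-clock times rather than the measured CESM-on-EC2 numbers.

```python
# Minimal sketch: parallel speedup and efficiency between two core counts.
# The wall-clock times are placeholders, not the measured CESM-on-EC2 values.

def speedup_and_efficiency(t_base: float, t_n: float, cores_base: int, cores_n: int):
    speedup = t_base / t_n
    ideal = cores_n / cores_base
    return speedup, speedup / ideal          # efficiency relative to linear scaling

t16, t64 = 10.0, 4.5                         # hypothetical wall-clock hours at 16 and 64 cores
s, e = speedup_and_efficiency(t16, t64, 16, 64)
print(f"speedup {s:.2f}x, parallel efficiency {e:.0%}")
```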
Integrating Xgrid into the HENP distributed computing model
NASA Astrophysics Data System (ADS)
Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.
2008-07-01
Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), putting task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruwart, T M; Eldel, A
2000-01-01
The primary objectives of this project were to evaluate the performance of the SGI CXFS file system in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the local XFS configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-port Brocade Silkworm 2400 Fibre Channel switch; and four Ciprico RF7000 RAID disk arrays populated with Seagate Barracuda 50GB disk drives. The operating system on each of the Onyx 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the raw Qlogic controller/Ciprico disk subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed on it. The second hardware configuration introduced the Brocade Fibre Channel switch. Two FC ports from each of the Onyx 2 computer systems were attached to four ports of the switch, and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a file system performance baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.
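The acceptance criterion above amounts to a simple ratio test; a minimal sketch follows, using placeholder throughput numbers rather than the project's measurements.

```python
# Minimal sketch of the acceptance check: SAN/CXFS throughput must reach at
# least 85% of the locally attached XFS baseline. The numbers are placeholders.

def meets_criterion(xfs_mb_s: float, cxfs_mb_s: float, threshold: float = 0.85) -> bool:
    return cxfs_mb_s >= threshold * xfs_mb_s

baseline_xfs = 520.0    # hypothetical local XFS throughput, MB/s
san_cxfs = 455.0        # hypothetical SAN/CXFS throughput, MB/s
print(meets_criterion(baseline_xfs, san_cxfs))   # True: 455 >= 0.85 * 520 = 442
```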
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
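One of the simplest allocation strategies such tools can compare is first-fit assignment of a session's processing demand to a cluster with spare capacity; the sketch below is illustrative only, with made-up cluster capacities and demands.

```python
# Minimal sketch of a first-fit session-to-cluster allocation strategy.
# Cluster capacities and session demands are illustrative assumptions.

def first_fit(clusters, demand):
    """Return the index of the first cluster able to host 'demand', or None."""
    for i, free in enumerate(clusters):
        if free >= demand:
            clusters[i] -= demand            # reserve capacity on that cluster
            return i
    return None                              # session request is blocked

free_capacity = [120.0, 80.0, 200.0]         # spare capacity per cluster (arbitrary units)
for session_demand in [90.0, 50.0, 150.0]:
    print(first_fit(free_capacity, session_demand))   # -> 0, then 1, then 2
```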
Detached-Eddy Simulations of Separated Flow Around Wings With Ice Accretions: Year One Report
NASA Technical Reports Server (NTRS)
Choo, Yung K. (Technical Monitor); Thompson, David; Mogili, Prasad
2004-01-01
A computational investigation was performed to assess the effectiveness of Detached-Eddy Simulation (DES) as a tool for predicting icing effects. The AVUS code, a public domain flow solver, was employed to compute solutions for an iced wing configuration using DES and steady Reynolds Averaged Navier-Stokes (RANS) equation methodologies. The configuration was an extruded GLC305/944-ice shape section with a rectangular planform. The model was mounted between two walls so no tip effects were considered. The numerical results were validated by comparison with experimental data for the same configuration. The time-averaged DES computations showed some improvement in lift and drag results near stall when compared to steady RANS results. However, comparisons of the flow field details did not show the level of agreement suggested by the integrated quantities. Based on our results, we believe that DES may prove useful in a limited sense to provide analysis of iced wing configurations when there is significant flow separation, e.g., near stall, where steady RANS computations are demonstrably ineffective. However, more validation is needed to determine what role DES can play as part of an overall icing effects prediction strategy. We conclude the report with an assessment of existing computational tools for application to the iced wing problem and a discussion of issues that merit further study.
A grid-embedding transonic flow analysis computer program for wing/nacelle configurations
NASA Technical Reports Server (NTRS)
Atta, E. H.; Vadyak, J.
1983-01-01
An efficient grid-interfacing zonal algorithm was developed for computing the three-dimensional transonic flow field about wing/nacelle configurations. The algorithm uses the full-potential formulation and the AF2 approximate factorization scheme. The flow field solution is computed using a component-adaptive grid approach in which separate grids are employed for the individual components of the multi-component configuration, where each component grid is optimized for a particular geometry such as the wing or nacelle. The wing and nacelle component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region using trivariate interpolation. This report presents a discussion of the computational methods used to generate both the wing and nacelle component grids, the technique used to interface the component grids, and the method used to obtain the inviscid flow solution. Computed results and correlations with experiment are presented. Also presented are discussions of the organization of the wing grid generation (GRGEN3) and nacelle grid generation (NGRIDA) computer programs, the grid interface (LK) computer program, and the wing/nacelle flow solution (TWN) computer program. Descriptions of the respective subroutines, definitions of the required input parameters, a discussion of the interpretation of the output, and sample cases illustrating application of the analysis are provided for each of the four computer programs.
Euler and Potential Experiment/CFD Correlations for a Transport and Two Delta-Wing Configurations
NASA Technical Reports Server (NTRS)
Hicks, R. M.; Cliff, S. E.; Melton, J. E.; Langhi, R. G.; Goodsell, A. M.; Robertson, D. D.; Moyer, S. A.
1990-01-01
A selection of successes and failures of Computational Fluid Dynamics (CFD) is discussed. Experiment/CFD correlations involving full potential and Euler computations of the aerodynamic characteristics of four commercial transport wings and two low aspect ratio, delta wing configurations are shown. The examples consist of experiment/CFD comparisons for aerodynamic forces, moments, and pressures. Navier-Stokes equations are not considered.
NASA Technical Reports Server (NTRS)
Keltner, D. J.
1975-01-01
The stowage list and hardware tracking system, a computer-based information management system used in support of the space shuttle orbiter stowage configuration and Johnson Space Center hardware tracking, is described. The input, processing, and output requirements that serve as a baseline for system development are defined.
Computing Trimmed, Mean-Camber Surfaces At Minimum Drag
NASA Technical Reports Server (NTRS)
Lamar, John E.; Hodges, William T.
1995-01-01
VLMD computer program determines subsonic mean-camber surfaces of trimmed noncoplanar planforms with minimum vortex drag at specified lift coefficient. Up to two planforms designed together. Method used is that of the subsonic vortex lattice; method of chord loading specification, ranging from rectangular to triangular, left to be specified by user. Program versatile and applied to isolated wings, wing/canard configurations, tandem wings, and wing/winglet configurations. Written in FORTRAN.
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with the linear basis functions, but using larger-size and much fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry is coming from medical image of a human aorta.
Light aircraft lift, drag, and moment prediction: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Smith, N. S.; Carden, R. K.
1975-01-01
The historical development of analytical methods for predicting the lift, drag, and pitching moment of complete light aircraft configurations in cruising flight is reviewed. Theoretical methods, based in part on techniques described in the literature and in part on original work, are developed. These methods form the basis for understanding the computer programs given to: (1) compute the lift, drag, and moment of conventional airfoils, (2) extend these two-dimensional characteristics to three dimensions for moderate-to-high aspect ratio unswept wings, (3) plot complete configurations, (4) convert the fuselage geometric data to the correct input format, (5) compute the fuselage lift and drag, (6) compute the lift and moment of symmetrical airfoils to M = 1.0 by a simplified semi-empirical procedure, and (7) compute, in closed form, the pressure distribution over a prolate spheroid at alpha = 0. Comparisons of the predictions with experiment indicate excellent lift and drag agreement for conventional airfoils and wings. Limited comparisons of body-alone drag characteristics yield reasonable agreement. Also included are discussions for interference effects and techniques for summing the results above to obtain predictions for complete configurations.
Wolf, Thomas Gerhard; Paqué, Frank; Zeller, Maximilian; Willershausen, Brita; Briseño-Marroquín, Benjamín
2016-04-01
The aim of this study was to investigate the root canal system morphology of the mandibular first molar by means of micro-computed tomography. The root canal configuration, foramina, and accessory canals frequency of 118 mandibular first molars were investigated by means of micro-computed tomography and 3-dimensional software imaging. A 4-digit system describes the root canal configuration from the coronal to apical thirds and the main foramina number. The most frequent root canal configurations in mesial root were 2-2-2/2 (31.4%), 2-2-1/1 (15.3%), and 2-2-2/3 (11.9%); another 24 different root canal configurations were observed in this root. A 1-1-1/1 (58.5%), 1-1-1/2 (10.2%), and 16 other root canal configurations were observed in the distal root. The mesiobuccal root canal showed 1-4 foramina in 24.6%, and the mesiolingual showed 1-3 foramina in 28.0%. One connecting canal between the mesial root canals was observed in 30.5% and 2 in 3.4%. The distolingual root canal showed 1-4 foramina in 23.7%, whereas a foramen in the distobuccal root canal was rarely detected (3.4%). The mesiobuccal, mesiolingual, and distolingual root canals showed at least 1 accessory canal (14.3, 10.2, and 4.2%, respectively), but the distobuccal had none. The root canal configuration of mandibular first molars varies strongly. According to our expectations, both the mesial and distal roots showed a high number of morphologic diversifications. The root canal system of the mesial root showed more root canal configuration variations, connecting and accessory canals than the distal root. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
MX Systems Environmental Programs Scoping Summary.
1980-04-14
Environmental scoping topics identified include: water resource conflicts; local growth impacts; preservation of archaeological and cultural resources; public health and safety; archaeological and historical resources; energy and nonrenewable resources; terrestrial and aquatic biology; air quality; noise; security configuration; and permitting and compliance with state and local regulations.
High order discretization techniques for real-space ab initio simulations
NASA Astrophysics Data System (ADS)
Anderson, Christopher R.
2018-03-01
In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.
Simulation and evaluation of latent heat thermal energy storage
NASA Technical Reports Server (NTRS)
Sigmon, T. W.
1980-01-01
The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) was derived as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structure. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data for heat pump components, available building thermal load computational procedures, and generalized TES subsystem designs. Life cycle costs computed for each site, configuration, and rate structure are discussed.
A study of computer graphics technology in application of communication resource management
NASA Astrophysics Data System (ADS)
Li, Jing; Zhou, Liang; Yang, Fei
2017-08-01
With the development of computer technology, computer graphics technology has come into wide use; in particular, the success of object-oriented and multimedia technologies has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in computing, and graphics technology is applied in an ever wider range of fields. In recent years, with the development of the social economy and especially the rapid development of information technology, traditional communication resource management can no longer meet resource management needs effectively. Communication resource management still relies on the original tools and methods for managing and maintaining resources and equipment, which has brought many problems: it is very difficult for non-professionals to understand the equipment and the situation in communication resource management, resource utilization is relatively low, and managers cannot quickly and accurately understand resource conditions. Aimed at the above problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, in both sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. Finally, the scalability of the CPU time and memory performance of the multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
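The two multi-threaded metrics mentioned above, event throughput and memory gain, reduce to simple ratios; the sketch below uses placeholder numbers rather than Geant4 benchmarking results.

```python
# Minimal sketch: event throughput and memory gain versus thread count.
# All numbers are placeholders, not Geant4 benchmarking results.

def throughput(events: int, wall_seconds: float) -> float:
    return events / wall_seconds

def memory_gain(mem_n_processes: float, mem_multithreaded: float) -> float:
    """Ratio of memory needed by N independent processes to one N-thread process."""
    return mem_n_processes / mem_multithreaded

runs = {1: 1000.0, 4: 260.0, 8: 140.0}       # threads -> wall-clock seconds (hypothetical)
for n, t in runs.items():
    print(n, "threads:", f"{throughput(10_000, t):.1f} events/s")
print(f"memory gain at 8 threads: {memory_gain(8 * 1.2, 2.0):.1f}x")
```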
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters built from PCs are currently replacing expensive supercomputers for carrying out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf cluster for HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, in which we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in high energy physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
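The cluster applications described above were written in C with LAM-MPI; purely as a compact illustration of the same message-passing pattern, the sketch below uses Python and mpi4py (an assumption, not part of the Sphinx setup) to distribute a Monte Carlo estimate across nodes.

```python
# Minimal sketch of a message-passing Monte Carlo job, written with mpi4py for
# brevity (the actual cluster codes were C/LAM-MPI). Run with, e.g.,
# "mpirun -np 8 python pi_mc.py" (hypothetical file name).
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

samples = 100_000                                  # Monte Carlo samples per process
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
           for _ in range(samples))                # points inside the unit quarter circle
total = comm.reduce(hits, op=MPI.SUM, root=0)      # gather partial counts on rank 0

if rank == 0:
    print("pi ~", 4.0 * total / (samples * size))
```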
Sousa, Thiago Oliveira; Haiter-Neto, Francisco; Nascimento, Eduarda Helena Leandro; Peroni, Leonardo Vieira; Freitas, Deborah Queiroz; Hassan, Bassam
2017-07-01
The aim of this study was to assess the diagnostic accuracy of periapical radiography (PR) and cone-beam computed tomographic (CBCT) imaging in the detection of the root canal configuration (RCC) of human premolars. PR and CBCT imaging of 114 extracted human premolars were evaluated by 2 oral radiologists. RCC was recorded according to Vertucci's classification. Micro-computed tomographic imaging served as the gold standard to determine RCC. Accuracy, sensitivity, specificity, and predictive values were calculated. The Friedman test compared both PR and CBCT imaging with the gold standard. CBCT imaging showed higher values for all diagnostic tests compared with PR. Accuracy was 0.55 and 0.89 for PR and CBCT imaging, respectively. There was no difference between CBCT imaging and the gold standard, whereas PR differed from both CBCT and micro-computed tomographic imaging (P < .0001). CBCT imaging was more accurate than PR for evaluating different types of RCC individually. Canal configuration types III, VII, and "other" were poorly identified on CBCT imaging with a detection accuracy of 50%, 0%, and 43%, respectively. With PR, all canal configurations except type I were poorly visible. PR presented low performance in the detection of RCC in premolars, whereas CBCT imaging showed no difference compared with the gold standard. Canals with complex configurations were less identifiable using both imaging methods, especially PR. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
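For reference, the reported accuracy, sensitivity, and specificity follow directly from a 2x2 confusion matrix; the sketch below uses placeholder counts rather than the study's data.

```python
# Minimal sketch: diagnostic metrics from a 2x2 confusion matrix.
# The counts are placeholders, not data from the study above.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)     # true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=48, fp=6, tn=50, fn=10)
print(f"accuracy {acc:.2f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```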
Programming the Navier-Stokes computer: An abstract machine model and a visual editor
NASA Technical Reports Server (NTRS)
Middleton, David; Crockett, Tom; Tomboulian, Sherry
1988-01-01
The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of the grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed for modeling control surface deflections and material mapping.
Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel
2013-01-01
This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU, but even with adaptation the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaptation causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful for lowering overall execution times of many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.
Chen, Tiffany J; Kotecha, Nikesh
2014-01-01
Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers, who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies as well as the computational complexity of analytical tools require the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.
The direction of cloud computing for Malaysian education sector in 21st century
NASA Astrophysics Data System (ADS)
Jaafar, Jazurainifariza; Rahman, M. Nordin A.; Kadir, M. Fadzil A.; Shamsudin, Syadiah Nor; Saany, Syarilla Iryani A.
2017-08-01
In the 21st century, technology has turned the learning environment into a new form of education, making learning systems more effective and systematic. Nowadays, education institutions face many challenges in ensuring that the teaching and learning process runs smoothly and remains manageable. Some of the challenges in current education management are the lack of integrated systems, the high cost of maintenance, the difficulty of configuration and deployment, and the complexity of storage provision. Digital learning is an instructional practice that uses technology to make the learning experience more effective and the education process more systematic and attractive. Digital learning can be considered one of the prominent applications implemented in a cloud computing environment. Cloud computing is a type of networked resource that provides on-demand services, where users can access applications from any location and without time restrictions. It also promises to minimize maintenance costs and provides flexible data storage capacity. The aim of this article is to review the definition and types of cloud computing for improving digital learning management as required in 21st century education. The analysis of the digital learning context focuses on primary schools in Malaysia. Types of cloud applications and services in the education sector are also discussed in the article. Finally, a gap analysis and directions for cloud computing in the education sector to face the challenges of the 21st century are suggested.
Recent applications of the transonic wing analysis computer code, TWING
NASA Technical Reports Server (NTRS)
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of the code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.
Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines
Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah
2014-01-01
Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind and two-bladed upwind and downwind configurations, which operate at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, the blade hub bending moment, and the base bending moment of the tower, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.
A resource management architecture based on complex network theory in cloud computing federation
NASA Astrophysics Data System (ADS)
Zhang, Zehua; Zhang, Xuejie
2011-10-01
Cloud Computing Federation is a main trend of Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.
On the Number of Non-equivalent Ancestral Configurations for Matching Gene Trees and Species Trees.
Disanto, Filippo; Rosenberg, Noah A
2017-09-14
An ancestral configuration is one of the combinatorially distinct sets of gene lineages that, for a given gene tree, can reach a given node of a specified species tree. Ancestral configurations have appeared in recursive algebraic computations of the conditional probability that a gene tree topology is produced under the multispecies coalescent model for a given species tree. For matching gene trees and species trees, we study the number of ancestral configurations, considered up to an equivalence relation introduced by Wu (Evolution 66:763-775, 2012) to reduce the complexity of the recursive probability computation. We examine the largest number of non-equivalent ancestral configurations possible for a given tree size n. Whereas the smallest number of non-equivalent ancestral configurations increases polynomially with n, we show that the largest number increases with [Formula: see text], where k is a constant that satisfies [Formula: see text]. Under a uniform distribution on the set of binary labeled trees with a given size n, the mean number of non-equivalent ancestral configurations grows exponentially with n. The results refine an earlier analysis of the number of ancestral configurations considered without applying the equivalence relation, showing that use of the equivalence relation does not alter the exponential nature of the increase with tree size.
NASA Technical Reports Server (NTRS)
Marconi, F.; Yaeger, L.
1976-01-01
A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures
NASA Astrophysics Data System (ADS)
Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.
2014-12-01
In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 petabytes, while the upcoming generation of NASA decadal Earth Observing instruments is expected to collect tens of gigabytes per day. In radio astronomy, the Square Kilometre Array (SKA) will collect data in the exabytes-per-day range, of which (after reduction and processing) around 1.5 exabytes per year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that allows system architects to model their expected data processing workflow and determine the network, computational, and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrarily complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for four different use cases from distinct science disciplines: climate science, astronomy, hydrology, and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data-intensive science.
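The abstract above describes the kind of elapsed-time, transfer, and compute tradeoff DAWN is meant to estimate. As a rough illustration only (the task names, data volumes, link rates, and the serial-pipeline assumption below are hypothetical and not part of DAWN's model), a minimal estimator might look like this:

```python
# Illustrative sketch only: a toy estimator for the kind of workflow tradeoff
# DAWN evaluates (elapsed time split into transfer and compute). The task names,
# rates, and the serial pipeline assumption are hypothetical, not DAWN's model.

def estimate_workflow(tasks):
    """Each task: (name, data_volume_TB, network_Gbps, compute_rate_TB_per_hr)."""
    total_transfer_hr = 0.0
    total_compute_hr = 0.0
    for name, volume_tb, net_gbps, rate_tb_hr in tasks:
        # Transfer time: TB -> Tb (x8), divided by the link rate converted to Tb/hr.
        transfer_hr = (volume_tb * 8) / (net_gbps * 3600 / 1000)
        compute_hr = volume_tb / rate_tb_hr
        total_transfer_hr += transfer_hr
        total_compute_hr += compute_hr
        print(f"{name:12s} transfer={transfer_hr:6.2f} h  compute={compute_hr:6.2f} h")
    return total_transfer_hr, total_compute_hr

# Hypothetical reduction step followed by an analysis step.
transfer, compute = estimate_workflow([
    ("reduce",  500.0, 10.0, 40.0),   # 500 TB over a 10 Gbps link, 40 TB/h processing
    ("analyze",  50.0, 10.0,  5.0),
])
print(f"elapsed (serial) ~ {transfer + compute:.1f} h")
```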
Automated Tetrahedral Mesh Generation for CFD Analysis of Aircraft in Conceptual Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu; Campbell, Richard L.
2014-01-01
The paper introduces an automation process of generating a tetrahedral mesh for computational fluid dynamics (CFD) analysis of aircraft configurations in early conceptual design. The method was developed for CFD-based sonic boom analysis of supersonic configurations, but can be applied to aerodynamic analysis of aircraft configurations in any flight regime.
Miniature, mobile X-ray computed radiography system
Watson, Scott A; Rose, Evan A
2017-03-07
A miniature, portable x-ray system may be configured to scan images stored on a phosphor. A flash circuit may be configured to project red light onto a phosphor and receive blue light from the phosphor. A digital monochrome camera may be configured to receive the blue light to capture an article near the phosphor.
Windows VPN Set Up | High-Performance Computing | NREL
Save the hpcvpn-win.conf file in your My Documents folder and configure the client software using that conf file. Configure the Client Software: start the Endian Connect App, configure the connection using the hpcvpn-win.conf file, uncheck the "save password" option, and add your UserID.
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
2004-01-01
The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows complex flow domains to be divided for optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based using an OS/2 operating system. These codes were designed to treat film seals in which a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have proven to be a valuable resource for seal development for future air-breathing and space propulsion engines.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
Sensitivity analysis for the control of supersonic impinging jet noise
NASA Astrophysics Data System (ADS)
Nichols, Joseph W.; Hildebrand, Nathaniel
2016-11-01
The dynamics of a supersonic jet that impinges perpendicularly on a flat plate depend on complex interactions between fluid turbulence, shock waves, and acoustics. Strongly organized oscillations emerge, however, and they induce loud, often damaging, tones. We investigate this phenomenon using unstructured, high-fidelity Large Eddy Simulation (LES) and global stability analysis. Our flow configurations precisely match laboratory experiments with nozzle-to-wall distances of 4 and 4.5 jet diameters. We use multi-block shift-and-invert Arnoldi iteration to extract both direct and adjoint global modes that extend upstream into the nozzle. The frequency of the most unstable global mode agrees well with that of the emergent oscillations in the LES. We compute the "wavemaker" associated with this mode by multiplying it by its corresponding adjoint mode. The wavemaker shows that this instability is most sensitive to changes in the base flow slightly downstream of the nozzle exit. By modifying the base flow in this region, we then demonstrate that the flow can indeed be stabilized. This explains the success of microjets as an effective noise control measure when they are positioned around the nozzle lip. Computational resources were provided by the Argonne Leadership Computing Facility.
NASA Technical Reports Server (NTRS)
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The performance of personnel in the augmentation and improvement of the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.
Hybrid annealing: Coupling a quantum simulator to a classical computer
NASA Astrophysics Data System (ADS)
Graß, Tobias; Lewenstein, Maciej
2017-05-01
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
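A minimal classical mock-up of the hybrid loop described above might look like the sketch below, with a random multi-bit flip standing in for the quantum simulator's fluctuation-and-measurement step and a classical check that only accepts configurations of lower energy; the random-energy-model instance, system size, and step count are illustrative assumptions, not the authors' implementation:

```python
import random

# Minimal sketch of the hybrid annealing loop described above, with a purely
# classical random proposal standing in for the quantum simulator's
# fluctuation-and-measurement step. Problem instance and parameters are illustrative.

random.seed(1)
N = 12
# Random-energy-model-like instance: each of the 2^N configurations gets a random energy.
energies = [random.gauss(0.0, 1.0) for _ in range(2 ** N)]

def propose(config, n_flips=3):
    """Stand-in for the quantum step: flip a few random bits and 'measure'."""
    new = list(config)
    for i in random.sample(range(N), n_flips):
        new[i] ^= 1
    return new

def energy(config):
    index = int("".join(map(str, config)), 2)
    return energies[index]

config = [random.randint(0, 1) for _ in range(N)]
best_e = energy(config)
for step in range(5000):
    candidate = propose(config)
    e = energy(candidate)
    if e < best_e:            # classical post-processing: only accept lower energies
        config, best_e = candidate, e

print("lowest energy found:", round(best_e, 3), "true minimum:", round(min(energies), 3))
```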
A Web-Based Development Environment for Collaborative Data Analysis
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.
2014-06-01
Visual Physics Analysis (VISPA) is a web-based development environment addressing high energy and astroparticle physics. It covers the entire analysis spectrum from the design and validation phase to the execution of analyses and the visualization of results. VISPA provides a graphical steering of the analysis flow, which consists of self-written, re-usable Python and C++ modules for more demanding tasks. All common operating systems are supported since a standard internet browser is the only software requirement for users. Even access via mobile and touch-compatible devices is possible. In this contribution, we present the most recent developments of our web application concerning technical, state-of-the-art approaches as well as practical experiences. One of the key features is the use of workspaces, i.e. user-configurable connections to remote machines supplying resources and local file access. Thereby, workspaces enable the management of data, computing resources (e.g. remote clusters or computing grids), and additional software either centralized or individually. We further report on the results of an application with more than 100 third-year students using VISPA for their regular particle physics exercises during the winter term 2012/13. Besides the ambition to support and simplify the development cycle of physics analyses, new use cases such as fast, location-independent status queries, the validation of results, and the ability to share analyses within worldwide collaborations with a single click become conceivable.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.
1994-01-01
Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
Mixed-Fidelity Approach for Design of Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Li, Wu; Shields, Elwood; Geiselhart, Karl
2011-01-01
This paper documents a mixed-fidelity approach for the design of low-boom supersonic aircraft with a focus on fuselage shaping. A low-boom configuration that is based on low-fidelity analysis is used as the baseline. The fuselage shape is modified iteratively to obtain a configuration with an equivalent-area distribution derived from computational fluid dynamics analysis that attempts to match a predetermined low-boom target area distribution and also yields a low-boom ground signature. The ground signature of the final configuration is calculated by using a state-of-the-art computational-fluid-dynamics-based boom analysis method that generates accurate midfield pressure distributions for propagation to the ground with ray tracing. The ground signature that is propagated from a midfield pressure distribution has a shaped ramp front, which is similar to the ground signature that is propagated from the computational fluid dynamics equivalent-area distribution. This result supports the validity of low-boom supersonic configuration design by matching a low-boom equivalent-area target, which is easier to accomplish than matching a low-boom midfield pressure target.
The Cloud Area Padovana: from pilot to production
NASA Astrophysics Data System (ADS)
Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.
2017-10-01
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been tested and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We discuss the chosen strategy for upgrades, which balances the need to promptly integrate new OpenStack developments, the demand to reduce downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this cloud infrastructure is being used. In particular we focus on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on CernVM, has been configured: it allows virtual machines to be created and deleted automatically according to user needs. SPES, using a client-server system called TraceWin, exploits INFN’s virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.
A resource-sharing model based on a repeated game in fog computing.
Sun, Yan; Zhang, Nan
2017-03-01
With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
NASA Astrophysics Data System (ADS)
Falkner, Katrina; Vivian, Rebecca
2015-10-01
To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention of engaging children and increasing interest, rather than formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.
Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W
2008-05-28
The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
Design and optimization of organic rankine cycle for low temperature geothermal power plant
NASA Astrophysics Data System (ADS)
Barse, Kirtipal A.
Rising oil prices and environmental concerns have increased attention to renewable energy. Geothermal energy is a very attractive source of renewable energy. Although low temperature resources (90°C to 150°C) are the most common and most abundant source of geothermal energy, they have not been considered economical or technologically feasible for commercial power generation. Organic Rankine Cycle (ORC) technology makes it feasible to use low temperature resources to generate power by using organic working fluids with low boiling temperatures. The first hypothesis of this research is that using an ORC is technologically and economically feasible for generating electricity from low temperature geothermal resources. The second hypothesis is that redesigning the ORC system for the given resource conditions will improve efficiency along with economics. An ORC model was developed using a process simulator and validated with data obtained from Chena Hot Springs, Alaska. A correlation was observed between the critical temperature of the working fluid and the efficiency of the cycle. Exergy analysis of the cycle revealed that the highest exergy destruction occurs in the evaporator, followed by the condenser, turbine and working fluid pump for the base case scenarios. The performance of the ORC was studied using twelve working fluids in base, internal heat exchanger (IHX) and turbine bleeding configurations, each in constrained and non-constrained forms. R601a, R245ca and R600 showed the highest first and second law efficiencies in the non-constrained IHX configuration. The highest net power was observed for the R245ca, R601a and R601 working fluids in the non-constrained base configuration. The combined heat exchanger area and the size parameter of the turbine showed an increasing trend as the critical temperature of the working fluid decreased. The lowest levelized cost of electricity (LCOE) was observed for R245ca, followed by R601a and R236ea, in the non-constrained base configuration. The next best candidates in terms of LCOE were R601a, R245ca and R600 in the non-constrained IHX configuration. LCOE depends on net power, and higher net power helps lower the cost of electricity. Overall, R245ca, R601, R601a, R600 and R236ea show better performance among the fluids studied, and the non-constrained configurations perform better than the constrained ones. The non-constrained base configuration offered the highest net power and the lowest LCOE.
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of conjugate gradient posed particular challenges due to implicit non-local communication posing a hindrance to standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
Environment overwhelms both nature and nurture in a model spin glass
NASA Astrophysics Data System (ADS)
Middleton, A. Alan; Yang, Jie
We are interested in exploring what information determines the particular history of the glassy long-term dynamics in a disordered material. We study the effect of initial configurations and the realization of stochastic dynamics on the long time evolution of configurations in a two-dimensional Ising spin glass model. The evolution of nearest neighbor correlations is computed using patchwork dynamics, a coarse-grained numerical heuristic for temporal evolution. The dependence of the nearest neighbor spin correlations at long time on both initial spin configurations and noise histories is studied through cross-correlations of long-time configurations, and the spin correlations are found to be independent of both. We investigate how effectively rigid bond clusters coarsen. Scaling laws are used to study the convergence of configurations and the distribution of sizes of nearly rigid clusters. The implications of the computational results for simulations and phenomenological models of spin glasses are discussed. We acknowledge NSF support under DMR-1410937 (CMMT program).
Sonic boom prediction for the Langley Mach 2 low-boom configuration
NASA Technical Reports Server (NTRS)
Madson, Michael D.
1992-01-01
Sonic boom pressure signatures and aerodynamic force data for the Langley Mach 2 low sonic boom configuration were computed using the TranAir full-potential code. A solution-adaptive Cartesian grid scheme is utilized to compute off-body flow field data. Computations were performed with and without nacelles at several angles of attack. Force and moment data were computed to measure nacelle effects on the aerodynamic characteristics and sonic boom footprints of the model. Pressure signatures were computed both on and off ground-track. Near-field pressure signature computations on ground-track were in good agreement with experimental data. Computed off ground-track signatures showed that maximum pressure peaks were located off ground-track and were significantly higher than the signatures on ground-track. Bow shocks from the nacelle inlets increased lift and drag, and also increased the magnitude of the maximum pressure both on and off ground-track.
NASA Technical Reports Server (NTRS)
Sforzini, R. H.
1972-01-01
An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time, so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN 4 and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.
The revised solar array synthesis computer program
NASA Technical Reports Server (NTRS)
1970-01-01
The Revised Solar Array Synthesis Computer Program is described. It is a general-purpose program which computes solar array output characteristics while accounting for the effects of temperature, incidence angle, charged-particle irradiation, and other degradation effects on various solar array configurations in either circular or elliptical orbits. Array configurations may consist of up to 75 solar cell panels arranged in any series-parallel combination not exceeding three series-connected panels in a parallel string and no more than 25 parallel strings in an array. Up to 100 separate solar array current-voltage characteristics, corresponding to 100 equal-time increments during the sunlight illuminated portion of an orbit or any 100 user-specified combinations of incidence angle and temperature, can be computed and printed out during one complete computer execution. Individual panel incidence angles may be computed and printed out at the user's option.
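As a rough illustration of the series-parallel bookkeeping described above (series panels share current and split voltage, parallel strings add currents), the toy sketch below combines panel I-V curves into an array curve; the linear panel model and all numbers are invented for illustration and are not the program's cell model:

```python
# Toy illustration of series-parallel combination of panel I-V curves, the kind of
# bookkeeping the synthesis program performs. The linear panel model and all numbers
# here are invented for illustration; the real program models temperature, incidence
# angle, and radiation degradation in detail.

def panel_current(voltage, isc=2.0, voc=30.0):
    """Very crude panel model: current falls linearly from Isc to zero at Voc."""
    return max(0.0, isc * (1.0 - voltage / voc))

def array_current(array_voltage, panels_per_string=3, strings=25):
    # Series panels share current and split voltage; parallel strings add currents.
    panel_voltage = array_voltage / panels_per_string
    return strings * panel_current(panel_voltage)

for v in (0.0, 45.0, 90.0):
    print(f"array V={v:5.1f} V  I={array_current(v):5.2f} A")
```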
SNAP-8 power conversion system design review
NASA Technical Reports Server (NTRS)
Lopez, L. P.
1970-01-01
The conceptual designs of the SNAP-8 electrical generating system configurations are reviewed, including the evolution of the PCS configuration and the current concepts. The reliabilities of two alternative PCS-G heat rejection loop configurations with two radiator design concepts are also reviewed. A computer program for calculating system pressure loss using multiple-loop flow analysis is included.
Cloud Migration Experiment Configuration and Results
2017-12-01
ARL-TR-8248, December 2017, US Army Research Laboratory. By Michael De Lucia (Computational and Information Sciences Directorate, ARL), Justin Wray, and Steven S Collmann (ICF International).
1990-04-23
Target configuration: CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments. Subject terms: Ada programming language, Ada... Operating System: CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments; Memory Size: 4 MB. Test Method: testing of the MC Ada V1.2.beta / Concurrent Computer Corporation compiler and the CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments.
NASA Technical Reports Server (NTRS)
Rhodes, J. A.; Tiwari, S. N.; Vonlavante, E.
1988-01-01
A comparison of flow separation in transonic flows is made using various computational schemes which solve the Euler and the Navier-Stokes equations of fluid mechanics. The flows examined are computed using several simple two-dimensional configurations, including a backward facing step and a bump in a channel. A comparison of the results obtained using shock fitting and flux vector splitting methods is presented, and the results obtained using the Euler codes are compared to results on the same configurations obtained using a code which solves the Navier-Stokes equations.
CFD Prediction for Spin Rate of Fixed Canards on a Spinning Projectile
NASA Astrophysics Data System (ADS)
Ji, X. L.; Jia, Ch. Y.; Jiang, T. Y.
2011-09-01
A computational study performed for the spin rate of fixed canards on a spinning projectile is presented in this paper. The canard configurations pose challenges in terms of the determination of the aerodynamic forces and moments and the flow field changes, which could have a significant effect on the stability, performance, and corrected round accuracy. Advanced time-accurate Navier-Stokes computations have been performed to compute the spin rate associated with the spinning motion of the canard configurations at supersonic speed. The results show that the roll-damping moment of the canards varies linearly with the spin rate at supersonic velocity.
NASA Technical Reports Server (NTRS)
Holland, Scott Douglas
1991-01-01
A combined computational and experimental parametric study of the internal aerodynamics of a generic three dimensional sidewall compression scramjet inlet configuration was performed. The study was designed to demonstrate the utility of computational fluid dynamics as a design tool in hypersonic inlet flow fields, to provide a detailed account of the nature and structure of the internal flow interactions, and to provide a comprehensive surface property and flow field database to determine the effects of contraction ratio, cowl position, and Reynolds number on the performance of a hypersonic scramjet inlet configuration.
Provider-Independent Use of the Cloud
NASA Astrophysics Data System (ADS)
Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron
Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been significant growth in the number of cloud computing resource providers, and each has a different resource usage model, application process and application programming interface (API), so developing generic multi-provider applications is difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers, which enables cloud-provider-neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider-neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
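A provider-neutral layer of the kind described above might expose an interface along the lines of the sketch below; the class and method names are assumptions for illustration, not the authors' actual API:

```python
from abc import ABC, abstractmethod

# Sketch of a provider-neutral provisioning interface in the spirit of the
# abstraction layer described above. All class and method names here are
# assumptions for illustration, not the authors' actual API.

class ComputeProvider(ABC):
    @abstractmethod
    def provision(self, cpu_cores: int, memory_gb: int) -> str:
        """Acquire a resource and return an opaque resource identifier."""

    @abstractmethod
    def release(self, resource_id: str) -> None:
        """Give the resource back; billing stops at this point."""

class FakeProvider(ComputeProvider):
    """In-memory stand-in used to show that callers never see provider details."""
    def __init__(self):
        self._next_id = 0
    def provision(self, cpu_cores, memory_gb):
        self._next_id += 1
        return f"fake-{self._next_id}"
    def release(self, resource_id):
        pass

def run_job(provider: ComputeProvider):
    rid = provider.provision(cpu_cores=4, memory_gb=8)
    try:
        print("running on", rid)
    finally:
        provider.release(rid)

run_job(FakeProvider())
```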
A computer program for fitting smooth surfaces to three-dimensional aircraft configurations
NASA Technical Reports Server (NTRS)
Craidon, C. B.; Smith, R. E., Jr.
1975-01-01
A computer program developed to fit smooth surfaces to the component parts of three-dimensional aircraft configurations was described. The resulting equation definition of an aircraft numerical model is useful in obtaining continuous two-dimensional cross section plots in arbitrarily defined planes, local tangents, enriched surface plots and other pertinent geometric information; the geometry organization used as input to the program has become known as the Harris Wave Drag Geometry.
Modeling Pilot Behavior for Assessing Integrated Alert and Notification Systems on Flight Decks
NASA Technical Reports Server (NTRS)
Cover, Mathew; Schnell, Thomas
2010-01-01
Numerous new flight deck configurations for caution, warning, and alerts can be conceived; yet testing them with human-in-the-loop experiments to evaluate each one would not be practical. New sensors, instruments, and displays are being put into cockpits every day, and this is particularly true as we enter the dawn of the Next Generation Air Transportation System (NextGen). By modeling pilot behavior in a computer simulation, an unlimited number of unique caution, warning, and alert configurations can be evaluated 24/7 by a computer. These computer simulations can then identify the most promising candidate formats to further evaluate in higher fidelity, but more costly, human-in-the-loop (HITL) simulations. Evaluations using batch simulations with human performance models save time and money, and enable a broader consideration of possible caution, warning, and alerting configurations for future flight decks.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well- suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
NASA Technical Reports Server (NTRS)
Pepe, J. T.
1972-01-01
A functional design of the software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation with the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at the NASA Manned Spacecraft Center's information systems division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.
Conformational Analysis on structural perturbations of the zinc finger NEMO
NASA Astrophysics Data System (ADS)
Godwin, Ryan; Salsbury, Freddie; Salsbury Group Team
2014-03-01
The NEMO (NF-kB Essential Modulator) Zinc Finger protein (2jvx) is a functional Ubiquitin-binding domain, and plays a role in signaling pathways for immune/inflammatory responses, apoptosis, and oncogenesis [Cordier et al., 2008]. Characterized by 3 cysteines and 1 histidine residue at the active site, the biologically occurring, bound zinc configuration is a stable structural motif. Perturbations of the zinc binding residues suggest conformational changes in the 423-atom protein characterized via analysis of all-atom molecular dynamics simulations. Structural perturbations include simulations with and without a zinc ion and with and without de-protonated cysteines, resulting in four distinct configurations. Simulations of various time scales show consistent results, yet the longest, GPU driven, microsecond runs show more drastic structural and dynamic fluctuations when compared to shorter duration time-scales. The last cysteine residue (26 of 28) and the helix on which it resides exhibit a secondary, locally unfolded conformation in addition to its normal bound conformation. Combined analytics elucidate how the presence of zinc and/or protonated cysteines impact the dynamics and energetic fluctuations of NEMO. Comprehensive Cancer Center of Wake Forest University Computational Biosciences shared resource supported by NCI CCSG P30CA012197.
Novel Duplicate Address Detection with Hash Function
Song, GuangJia; Ji, ZhenZhou
2016-01-01
Duplicate address detection (DAD) is an important component of the address resolution protocol (ARP) and the neighbor discovery protocol (NDP). DAD determines whether an IP address is in conflict with other nodes. In traditional DAD, the target address to be detected is broadcast through the network, which makes it convenient for malicious nodes to attack. A malicious node can send a spoofed reply to prevent the address configuration of a normal node, and thus a denial-of-service attack is launched. This study proposes a hash method to hide the target address in DAD, which prevents an attacking node from launching destination attacks. If the address of a normal node is identical to the detection address, then its hash value should be the same as the “Hash_64” field in the neighbor solicitation message. Consequently, DAD can be successfully completed. This process is called DAD-h. Simulation results indicate that address configuration using DAD-h has a considerably higher success rate under attack than traditional DAD. Comparative analysis shows that DAD-h does not require third-party devices or considerable computing resources; it also provides a lightweight security solution. PMID:26991901
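The core comparison in DAD-h can be sketched as follows; using SHA-256 truncated to 64 bits is an assumption made here to illustrate the “Hash_64” field, not necessarily the hash construction used in the paper:

```python
import hashlib

# Sketch of the comparison at the heart of DAD-h: the solicitation carries a hash of
# the target address rather than the address itself. SHA-256 truncated to 64 bits is
# an assumed stand-in for the "Hash_64" field; the paper's exact hash may differ.

def hash_64(ipv6_address: str) -> int:
    digest = hashlib.sha256(ipv6_address.encode("ascii")).digest()
    return int.from_bytes(digest[:8], "big")   # first 64 bits of the digest

def node_should_reply(own_address: str, hash_64_field: int) -> bool:
    # A node answers the duplicate-address probe only if the hash of its own
    # address matches the Hash_64 field, so the probed address is never exposed.
    return hash_64(own_address) == hash_64_field

probe = hash_64("fe80::1")
print(node_should_reply("fe80::1", probe))   # True: genuine conflict
print(node_should_reply("fe80::2", probe))   # False: an attacker learns nothing useful
```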
Miura, Yohei; Ichikawa, Katsuhiro; Fujimura, Ichiro; Hara, Takanori; Hoshino, Takashi; Niwa, Shinji; Funahashi, Masao
2018-03-01
The 320-detector row computed tomography (CT) system, i.e., the area detector CT (ADCT), can perform helical scanning with detector configurations of 4, 16, 32, 64, 80, 100, and 160 detector rows for routine CT examinations. This phantom study aimed to compare the quality of images obtained using the helical scan mode with different detector configurations. Image quality was measured using the modulation transfer function (MTF) and the noise power spectrum (NPS). The system performance function (SP), based on the pre-whitening theorem, was calculated as MTF²/NPS and compared between configurations. Five detector configurations, i.e., 0.5 × 16 mm (16 row), 0.5 × 64 mm (64 row), 0.5 × 80 mm (80 row), 0.5 × 100 mm (100 row), and 0.5 × 160 mm (160 row), were compared using a constant volume CT dose index (CTDIvol) of 25 mGy, simulating the scan of an adult abdomen, and with a constant effective mAs value. The MTF was measured using the wire method, and the NPS was measured from images of a 20-cm diameter phantom with uniform content. For the constant CTDIvol, the SP of the 80-row configuration was the best, followed by the 64-, 160-, 16-, and 100-row configurations; the SP of the 100- and 160-row configurations was approximately 30% lower than that of the 80-row configuration. For the constant effective mAs, the SPs of the 100-row and 160-row configurations were significantly lower compared with the other three detector configurations. The 80- and 64-row configurations were adequate in cases that required dose efficiency rather than scan speed.
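For reference, the quoted system performance function can be evaluated numerically as in the sketch below; the Gaussian MTF and the noise spectrum are placeholder shapes, not measured data from this study:

```python
import numpy as np

# Numeric illustration of the system performance function quoted above,
# SP(f) = MTF(f)^2 / NPS(f). The Gaussian MTF and the noise spectrum used here
# are placeholder shapes, not measured data from the cited study.

f = np.linspace(0.05, 1.0, 20)            # spatial frequency, cycles/mm
mtf = np.exp(-(f / 0.6) ** 2)             # placeholder MTF
nps = 5e-5 * (0.3 + f)                    # placeholder NPS

sp = mtf ** 2 / nps                       # system performance function
print("frequency of peak SP: %.2f cycles/mm" % f[np.argmax(sp)])
```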
dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
This report introduces publications that report the results of a project that aimed to design a computational framework that enables computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.
Future V/STOL Aircraft For The Pacific Basin
NASA Technical Reports Server (NTRS)
Albers, James A.; Zuk, John
1992-01-01
This report describes the geography and transportation needs of the Asian Pacific region, describes aircraft configurations suitable for the region, and compares their performance. It examines applications of high-speed rotorcraft, vertical/short-takeoff-and-landing (V/STOL) aircraft, and short-takeoff-and-landing (STOL) aircraft. The configurations benefit commerce, tourism, and the development of resources.
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
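Under the hood, tools such as CondorPy hand work to HTCondor; the sketch below illustrates that underlying mechanism directly (writing a submit description and calling condor_submit) rather than the CondorPy API itself, and the executable name and file paths are hypothetical:

```python
import subprocess
import tempfile
import textwrap

# Sketch of the underlying HTCondor mechanism that tools like CondorPy wrap:
# write a submit description and hand it to condor_submit. This deliberately
# bypasses the CondorPy API (whose exact calls are not shown in the abstract);
# the executable name and file paths below are hypothetical.

submit_description = textwrap.dedent("""\
    executable = run_model.sh
    arguments  = --scenario baseline
    output     = job.out
    error      = job.err
    log        = job.log
    queue 1
""")

with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
    f.write(submit_description)
    submit_file = f.name

# Requires an HTCondor scheduler to be reachable from this machine.
subprocess.run(["condor_submit", submit_file], check=True)
```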
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Analytical and numerical methods evaluating the stress-intensity factors for three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids, and the approximate methods, including the finite-element, the boundary-integral equation, the line-spring models, and the mixed methods are discussed. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, the discretization-error, the alternating, and the finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method in evaluating the stress-intensity factor is limited only to the availability of resources and computer programs.
A High-Throughput Processor for Flight Control Research Using Small UAVs
NASA Technical Reports Server (NTRS)
Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.
2006-01-01
There are numerous autopilot systems that are commercially available for small (<100 lbs) UAVs. However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high-throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.
NASA Astrophysics Data System (ADS)
Betz, Jessie M. Bethly
1993-12-01
The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.
Notebook computer use on a desk, lap and lap support: effects on posture, performance and comfort.
Asundi, Krishna; Odell, Dan; Luce, Adam; Dennerlein, Jack T
2010-01-01
This study quantified postures of users working on a notebook computer situated in their lap and tested the effect of using a device designed to increase the height of the notebook when placed on the lap. A motion analysis system measured head, neck and upper extremity postures of 15 adults as they worked on a notebook computer placed on a desk (DESK), the lap (LAP) and a commercially available lapdesk (LAPDESK). Compared with the DESK, the LAP increased downwards head tilt 6 degrees and wrist extension 8 degrees . Shoulder flexion and ulnar deviation decreased 13 degrees and 9 degrees , respectively. Compared with the LAP, the LAPDESK decreased downwards head tilt 4 degrees , neck flexion 2 degrees , and wrist extension 9 degrees. Users reported less discomfort and difficulty in the DESK configuration. Use of the lapdesk improved postures compared with the lap; however, all configurations resulted in high values of wrist extension, wrist deviation and downwards head tilt. STATEMENT OF RELEVANCE: This study quantifies postures of users working with a notebook computer in typical portable configurations. A better understanding of the postures assumed during notebook computer use can improve usage guidelines to reduce the risk of musculoskeletal injuries.
Computer software configuration description, 241-AY and 241 AZ tank farm MICON automation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkelman, W.D.
This document describes the configuration process, choices and conventions used during the Micon DCS configuration activities, and issues involved in making changes to the configuration. Includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 3 provides additional information on the software used to provide communications with the W-320 project and incorporates minor changes to ensure the document alarm setpoint priorities correctly match operational expectations.
Solving systems of linear equations by GPU-based matrix factorization in a Science Ground Segment
NASA Astrophysics Data System (ADS)
Legendre, Maxime; Schmidt, Albrecht; Moussaoui, Saïd; Lammers, Uwe
2013-11-01
Recently, graphics cards have been used to offload scientific computations from traditional CPUs for greater efficiency. This paper investigates the adaptation of a real-world linear system solver, which plays a central role in the data processing of the Science Ground Segment of ESA's astrometric Gaia mission. The paper quantifies the resource trade-offs between traditional CPU implementations and modern CUDA-based GPU implementations. It also analyses the impact on the pipeline architecture and system development. The investigation starts from both a selected baseline algorithm with a reference implementation and a traditional linear system solver, and then explores various modifications to control flow and data layout to achieve higher resource efficiency. It turns out that, with the current state of the art, the modifications impact non-technical system attributes. For example, the control flow of the original modified Cholesky transform is restructured in a way that degrades code locality and verifiability. The maintainability of the system is affected as well. On the system level, users will have to deal with more complex configuration control and testing procedures.
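As a rough, present-day illustration of the CPU-versus-GPU trade-off discussed above, the sketch below solves a symmetric positive-definite system by Cholesky factorization on both devices. It uses CuPy purely for convenience; it is not the Gaia pipeline's actual CUDA implementation, and the matrix size and libraries are illustrative assumptions.

```python
# Illustrative only: a dense Cholesky-based solve on CPU (SciPy) and GPU (CuPy),
# standing in for the kind of linear-system kernel discussed above. The Gaia
# pipeline uses its own CUDA implementation; sizes here are arbitrary.
import numpy as np
import scipy.linalg as la
import cupy as cp

n = 4000
A = np.random.rand(n, n)
A = A @ A.T + n * np.eye(n)            # symmetric positive-definite test matrix
b = np.random.rand(n)

# CPU reference: factorize once, then solve
c, low = la.cho_factor(A)
x_cpu = la.cho_solve((c, low), b)

# GPU version: same algorithm, with data moved to device memory
A_gpu, b_gpu = cp.asarray(A), cp.asarray(b)
L = cp.linalg.cholesky(A_gpu)          # lower-triangular Cholesky factor
y = cp.linalg.solve(L, b_gpu)          # forward substitution (via generic solve)
x_gpu = cp.linalg.solve(L.T, y)        # back substitution (via generic solve)

print(float(cp.abs(cp.asarray(x_cpu) - x_gpu).max()))   # agreement check
```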
DIRAC distributed secure framework
NASA Astrophysics Data System (ADS)
Casajus, A.; Graciani, R.; LHCb DIRAC Team
2010-04-01
DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources for a large number of users. In DIRAC, users are organized in groups with different needs and permissions. To ensure that only authorized users can access the resources and that there are no abuses, security is mandatory. All DIRAC services and clients use secure connections that are authenticated using certificates and grid proxies. Once a client has been authenticated, authorization rules are applied to the requested action based on the presented credentials. These authorization rules and the list of users and groups are centrally managed in the DIRAC Configuration Service. Users submit jobs to DIRAC using their local credentials. From then on, DIRAC has to interact with different Grid services on behalf of the user. DIRAC has a proxy management service where users upload short-lived proxies to be used when DIRAC needs to act on their behalf. Long-duration proxies are uploaded by users to a MyProxy service, and DIRAC retrieves new short delegated proxies when necessary. This contribution discusses the details of the implementation of this security infrastructure in DIRAC.
Sanad, Mohamed; Hassan, Noha
2014-01-01
A dual resonant antenna configuration is developed for multistandard multifunction mobile handsets and portable computers. Only two wideband resonant antennas can cover most of the LTE spectrum in portable communication equipment. The bandwidth that can be covered by each antenna exceeds 70% without using any matching or tuning circuits, with efficiencies that reach 80%. Thus, a dual configuration of them is capable of covering up to 39 LTE (4G) bands besides the existing 2G and 3G bands. 2×2 MIMO configurations have also been developed for the two wideband antennas, with maximum isolation and a minimum correlation coefficient between the primary and the diversity antennas.
Fractional Factorial Experiment Designs to Minimize Configuration Changes in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Cler, Daniel L.; Graham, Albert B.
2002-01-01
This paper serves as a tutorial to introduce the wind tunnel research community to configuration experiment designs that can satisfy resource constraints in a configuration study involving several variables, without arbitrarily eliminating any of them from the experiment initially. The special case of a configuration study featuring variables at two levels is examined in detail. This is the type of study in which each configuration variable has two natural states - 'on or off', 'deployed or not deployed', 'low or high', and so forth. The basic principles are illustrated by results obtained in configuration studies conducted in the Langley National Transonic Facility and in the ViGYAN Low Speed Tunnel in Hampton, Virginia. The crucial role of interactions among configuration variables is highlighted with an illustration of difficulties that can be encountered when they are not properly taken into account.
Acquisition of ICU data: concepts and demands.
Imhoff, M
1992-12-01
As data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears feasible only with the help of bedside computers. The data originate from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands on the interfaces between the source of data and the patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks, and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface, which must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources will be necessary in the future for additional interfacing devices such as speech recognition and 3D GUIs. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development, leaving no room for solutions based on traditional concepts of personal computers.
LANDSAT D instrument module study
NASA Technical Reports Server (NTRS)
1976-01-01
Spacecraft instrument module configurations which support an earth resource data gathering mission using a thematic mapper sensor were examined. The differences in size of the two experiments necessitated the development of two different spacecraft configurations. Following the selection of the best-suited configurations, a validation phase of design, analysis and modelling was conducted to verify feasibility. The chosen designs were then used to formulate definitions of system weight, a cost range for fabrication, and interface requirements for the thematic mapper (TM).
Lift Optimization Study of a Multi-Element Three-Segment Variable Camber Airfoil
NASA Technical Reports Server (NTRS)
Kaul, Upender K.; Nguyen, Nhan T.
2016-01-01
This paper reports a detailed computational high-lift study of the Variable Camber Continuous Trailing Edge Flap (VCCTEF) system carried out to explore the best VCCTEF designs, in conjunction with a leading edge flap called the Variable Camber Krueger (VCK), for take-off and landing. For this purpose, a three-segment variable camber airfoil employed as a performance adaptive aeroelastic wing shaping control effector for a NASA Generic Transport Model (GTM) in landing and take-off configurations is considered. The objective of the study is to define optimal high-lift VCCTEF settings and VCK settings/configurations. A total of 224 combinations of VCK settings/configurations and VCCTEF settings are considered for the inboard GTM wing, where the VCCTEFs are configured as a Fowler flap that forms a slot between the VCCTEF and the main wing. For the VCK deflection angles of 55 deg, 60 deg and 65 deg, 18, 19 and 19 VCK configurations, respectively, were considered for each of the 4 different VCCTEF deflection settings. Different VCK configurations were defined by varying the horizontal and vertical distance of the VCK from the main wing. A computational investigation using a Reynolds-Averaged Navier-Stokes (RANS) solver was carried out to complement a wind-tunnel experimental study covering three of these configurations, with the goal of identifying the most optimal high-lift configurations. Four optimal high-lift configurations, corresponding to each of the VCK deflection settings, have been identified out of all the different configurations considered in this study, yielding the highest lift performance.
Information management advanced development. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Gerber, C. R.
1972-01-01
The information management systems designed for the modular space station are discussed. Subjects presented are: (1) communications terminal breadboard configuration, (2) digital data bus breadboard configuration, (3) data processing assembly definition, and (4) computer program (software) assembly definition.
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele; Levshina, Tanya; Rynge, Mats; Sehgal, Chander; Slyz, Marko
2012-12-01
The Open Science Grid (OSG) supports a diverse community of new and existing users in adopting and making effective use of the Distributed High Throughput Computing (DHTC) model. The LHC user community has deep local support within the experiments. For other smaller communities and individual users the OSG provides consulting and technical services through the User Support area. We describe these sometimes successful and sometimes not so successful experiences and analyze lessons learned that are helping us improve our services. The services offered include forums to enable shared learning and mutual support, tutorials and documentation for new technology, and troubleshooting of problematic or systemic failure modes. For new communities and users, we bootstrap their use of the distributed high throughput computing technologies and resources available on the OSG by following a phased approach. We first adapt the application and run a small production campaign on a subset of “friendly” sites. Only then do we move the user to run full production campaigns across the many remote sites on the OSG, adding to the community resources up to hundreds of thousands of CPU hours per day. This scaling up generates new challenges - like no determinism in the time to job completion, and diverse errors due to the heterogeneity of the configurations and environments - so some attention is needed to get good results. We cover recent experiences with image simulation for the Large Synoptic Survey Telescope (LSST), small-file large volume data movement for the Dark Energy Survey (DES), civil engineering simulation with the Network for Earthquake Engineering Simulation (NEES), and accelerator modeling with the Electron Ion Collider group at BNL. We will categorize and analyze the use cases and describe how our processes are evolving based on lessons learned.
Martins, Jorge N R
2014-06-01
The most common configuration of the maxillary first molar is the presence of three roots and four root canals, although the presence of several other configurations has already been reported. The objective of this work is to present a rare anatomic configuration with seven root canals diagnosed during an endodontic therapy. Endodontic treatment was performed using a dental operating microscope. Exploring the grooves surrounding the main canals with ultrasonic troughing was able to expose unexpected root canals. Instrumentation with files of smaller sizes and tapers was performed to prevent physical weakening of the root. The anatomic configuration was confirmed with a Cone Beam Computed Tomography image analysis, which was able to clearly show the presence of seven root canals. An electronic database search was conducted to identify all the published similar cases, and the best techniques to approach them are discussed.
NASA Technical Reports Server (NTRS)
Bauer, Brent
1993-01-01
This paper discusses the development of a FORTRAN computer code to perform agility analysis on aircraft configurations. This code is to be part of the NASA-Ames ACSYNT (AirCraft SYNThesis) design code. This paper begins with a discussion of contemporary agility research in the aircraft industry and a survey of a few agility metrics. The methodology, techniques and models developed for the code are then presented. Finally, example trade studies using the agility module along with ACSYNT are illustrated. These trade studies were conducted using a Northrop F-20 Tigershark aircraft model. The studies show that the agility module is effective in analyzing the influence of common parameters such as thrust-to-weight ratio and wing loading on agility criteria. The module can compare the agility potential between different configurations. In addition, one study illustrates the module's ability to optimize a configuration's agility performance.
Parametric Study of Pulse-Combustor-Driven Ejectors at High-Pressure
NASA Technical Reports Server (NTRS)
Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.
2015-01-01
Pulse-combustor configurations developed in recent studies have demonstrated performance levels at high-pressure operating conditions comparable to those observed at atmospheric conditions. However, problems related to the way fuel was being distributed within the pulse combustor were still limiting performance. In the first part of this study, new configurations are investigated computationally aimed at improving the fuel distribution and performance of the pulse-combustor. Subsequent sections investigate the performance of various pulse-combustor-driven ejector configurations operating at high-pressure conditions, focusing on the effects of fuel equivalence ratio and ejector throat area. The goal is to design pulse-combustor-ejector configurations that maximize pressure gain while achieving a thermal environment acceptable to a turbine, and at the same time maintain acceptable levels of NOx emissions and flow non-uniformities. The computations presented here have demonstrated pressure gains of up to 2.8%.
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reductions in resources lead to limitations compared with floating-point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
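To make the notion of weight discretization concrete, the toy sketch below rounds continuous synaptic weights onto a 4-bit grid. The linear grid and weight range are generic assumptions, not the actual FACETS hardware mapping.

```python
# Minimal illustration of discretizing synaptic weights to 4-bit resolution.
# The linear grid and clipping range are generic assumptions, not the actual
# FACETS mixed-signal hardware mapping.
import numpy as np

def discretize(weights, bits=4, w_max=1.0):
    levels = 2**bits - 1                    # 15 nonzero levels for 4 bits
    step = w_max / levels
    return np.clip(np.round(weights / step), 0, levels) * step

rng = np.random.default_rng(1)
w_continuous = rng.uniform(0.0, 1.0, 8)     # hypothetical synaptic weights
print(w_continuous)
print(discretize(w_continuous, bits=4))
```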
NASA Astrophysics Data System (ADS)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.
2018-01-01
We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of the special structure of the nuclear configuration interaction problem, which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos-based algorithm for problems of moderate size on a Cray XC30 system.
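As a rough illustration of the preconditioned block iterative idea (not the authors' actual solver), the sketch below uses SciPy's LOBPCG with a simple diagonal preconditioner and a random block of starting vectors on a synthetic sparse symmetric matrix; the matrix, block size, and preconditioner are illustrative assumptions.

```python
# Illustrative sketch of a preconditioned block eigensolver (SciPy's LOBPCG)
# computing the lowest eigenpairs of a large sparse symmetric matrix, standing
# in for the nuclear CI Hamiltonian. Not the paper's actual implementation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, k = 5000, 8                                  # hypothetical size and block size
rng = np.random.default_rng(0)

diag = np.sort(rng.uniform(1.0, 100.0, n))      # synthetic "Hamiltonian" diagonal
H = sp.diags(diag) + sp.random(n, n, density=1e-3, random_state=0)
H = (H + H.T) * 0.5                             # symmetrize

M = sp.diags(1.0 / diag)                        # simple diagonal preconditioner
X0 = rng.standard_normal((n, k))                # the paper uses physically informed guesses

eigvals, eigvecs = lobpcg(H, X0, M=M, tol=1e-6, maxiter=500, largest=False)
print("lowest eigenvalues:", np.sort(eigvals))
```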
ERIC Educational Resources Information Center
Munoz-Organero, M.; Munoz-Merino, P. J.; Kloos, C. D.
2012-01-01
Teaching electrical and computer software engineers how to configure network services normally requires the detailed presentation of many configuration commands and their numerous parameters. Students tend to find it difficult to maintain acceptable levels of motivation. In many cases, this results in their not attending classes and not dedicating…
ERIC Educational Resources Information Center
Kay, Jack G.; And Others
1988-01-01
Describes two applications of the microcomputer for laboratory exercises. Explores radioactive decay using the Batemen equations on a Macintosh computer. Provides examples and screen dumps of data. Investigates polymer configurations using a Monte Carlo simulation on an IBM personal computer. (MVL)
Otani, Tomohiro; Ii, Satoshi; Shigematsu, Tomoyoshi; Fujinaka, Toshiyuki; Hirata, Masayuki; Ozaki, Tomohiko; Wada, Shigeo
2017-05-01
Coil embolization of cerebral aneurysms with inhomogeneous coil distribution leads to an incomplete occlusion of the aneurysm. However, the effects of this factor on the blood flow characteristics are still not fully understood. This study investigates the effects of coil configuration on the blood flow characteristics in a coil-embolized aneurysm using computational fluid dynamics (CFD) simulation. The blood flow analysis in the aneurysm with coil embolization was performed using a coil deployment (CD) model, in which the coil configuration was constructed using a physics-based simulation of the CD. In the CFD results, total flow momentum and kinetic energy in the aneurysm gradually decayed with increasing coil packing density (PD), regardless of the coil configuration attributed to deployment conditions. However, the total shear rate in the aneurysm was relatively high and the strength of the local shear flow varied with differences in coil configuration, even at the adequate PDs used in clinical practice (20-25%). Because sufficient shear rate reduction is a well-known factor in the formation of blood clots that occlude the aneurysm, the present study gives useful insight into the effects of coil configuration on the treatment efficiency of coil embolization.
Duct flow nonuniformities study for space shuttle main engine
NASA Technical Reports Server (NTRS)
Thoenes, J.
1985-01-01
To improve the Space Shuttle Main Engine (SSME) design, and for future use in the development of next-generation rocket engines, a combined experimental/analytical study was undertaken with the goals of, first, establishing an experimental data base for the flow conditions in the SSME high pressure fuel turbopump (HPFTP) hot gas manifold (HGM) and, second, setting up a computer model of the SSME HGM flow field. Using the test data to verify the computer model, it should be possible in the future to computationally scan contemplated advanced design configurations and limit costly testing to the most promising designs. The effort of establishing and using the computer model is detailed. The comparison of computational results with the experimental data clearly demonstrates that computational fluid dynamics (CFD) techniques can be used successfully to predict the gross features of three dimensional fluid flow through configurations as intricate as the SSME turbopump hot gas manifold.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag are evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method requires only up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution, and less time for a larger number of bodies.
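The additivity property described above can be written compactly; the notation below is ours, not the report's:

```latex
% Total zero-lift wave drag of an N-component configuration, expressed as the sum
% of the isolated-component drags plus the pairwise interference drags (our notation).
\[
  D_{w,0}^{\text{total}} \;=\; \sum_{i=1}^{N} D_{w,0}^{(i)} \;+\; \sum_{i<j} \Delta D_{w,0}^{(i,j)}
\]
```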
Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources
NASA Astrophysics Data System (ADS)
Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.
2011-12-01
Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
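As a present-day illustration of the bursting pattern described above (the 2011 study used different tooling), the sketch below requests a batch of EC2 worker nodes with boto3 once a pending-job threshold is crossed; the AMI ID, instance type, region, tag values, and thresholds are hypothetical placeholders.

```python
# Illustrative only: bursting extra worker nodes on EC2 with boto3 when the local
# queue backs up. AMI ID, instance type, region, tags and thresholds are hypothetical.
import boto3

QUEUE_DEPTH_THRESHOLD = 5000      # hypothetical pending-job threshold
BURST_COUNT = 50                  # hypothetical number of extra worker nodes

def burst_workers(pending_jobs: int) -> list[str]:
    if pending_jobs < QUEUE_DEPTH_THRESHOLD:
        return []                 # base-line capacity on dedicated resources suffices
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",          # hypothetical worker-node image
        InstanceType="c5.2xlarge",                # hypothetical instance type
        MinCount=BURST_COUNT,
        MaxCount=BURST_COUNT,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "cms-worker"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]
```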
Statistics Online Computational Resource for Education
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas
2009-01-01
The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)
Elastic stability of DNA configurations. II. Supercoiled plasmids with self-contact
NASA Astrophysics Data System (ADS)
Coleman, Bernard D.; Swigon, David; Tobias, Irwin
2000-01-01
Configurations of protein-free DNA miniplasmids are calculated with the effects of impenetrability and self-contact forces taken into account by using exact solutions of Kirchhoff's equations of equilibrium for elastic rods of circular cross section. Bifurcation diagrams are presented as graphs of excess link, ΔL, versus writhe, W, and the stability criteria derived in paper I of this series are employed in a search for regions of such diagrams that correspond to configurations that are stable, in the sense that they give local minima to elastic energy. Primary bifurcation branches that originate at circular configurations are composed of configurations with Dm symmetry (m=2,3,...). Among the results obtained are the following. (i) There are configurations with C2 symmetry forming secondary bifurcation branches which emerge from the primary branch with m=3, and bifurcation of such secondary branches gives rise to tertiary branches of configurations without symmetry. (ii) Whether or not self-contact occurs, a noncircular configuration in the primary branch with m=2, called branch α, is stable when for it the derivative dΔL/dW, computed along that branch, is strictly positive. (iii) For configurations not in α, the condition dΔL/dW>0 is not sufficient for stability; in fact, each nonplanar contact-free configuration that is in a branch other than α is unstable. A rule relating the number of points of self-contact and the occurrence of intervals of such contact to the magnitude of ΔL, which in paper I was found to hold for segments of DNA subject to strong anchoring end conditions, is here observed to hold for computed configurations of protein-free miniplasmids.
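For readers less familiar with the variables on the bifurcation diagrams, the excess link plotted against writhe is constrained by the standard decomposition of the linking number (White's formula); the notation below follows common usage and assumes the circular reference configuration has zero writhe.

```latex
% White's formula for a closed elastic rod, and the excess-link form used when
% plotting bifurcation diagrams of Delta L versus the writhe W (common notation).
\[
  Lk \;=\; Tw + Wr,
  \qquad
  \Delta L \;\equiv\; Lk - Lk_0 \;=\; \Delta Tw + Wr
\]
```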
An Architecture for Cross-Cloud System Management
NASA Astrophysics Data System (ADS)
Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad
The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources, utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
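A minimal sketch of the kind of homogenising abstraction such an architecture implies, a provider-neutral interface with per-provider adapters, is shown below; the class and method names are our own illustration, not the paper's actual design.

```python
# Sketch of a provider-agnostic compute-resource interface with per-provider
# adapters, in the spirit of the cross-cloud management architecture described
# above. Names and methods are illustrative, not the paper's actual design.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class NodeSpec:
    cpus: int
    memory_gb: int
    image: str          # provider-specific image identifier

class ComputeProvider(ABC):
    """Homogeneous management interface exposed to the rest of the system."""

    @abstractmethod
    def provision(self, spec: NodeSpec, count: int) -> list[str]:
        """Start `count` nodes and return provider-neutral node IDs."""

    @abstractmethod
    def terminate(self, node_ids: list[str]) -> None:
        """Release the given nodes."""

class EC2Provider(ComputeProvider):
    def provision(self, spec: NodeSpec, count: int) -> list[str]:
        # would call the EC2 API (e.g. via boto3) and map results to neutral IDs
        raise NotImplementedError

    def terminate(self, node_ids: list[str]) -> None:
        raise NotImplementedError

# A scheduler or SLA manager can then treat heterogeneous providers uniformly:
def scale_out(provider: ComputeProvider, spec: NodeSpec, n: int) -> list[str]:
    return provider.provision(spec, n)
```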
An Integrated Method for Airfoil Optimization
NASA Astrophysics Data System (ADS)
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts, since by focusing on only one airfoil family they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global, and not local, maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration meeting multiple objectives could be identified for a given set of nominal operational conditions from a broad design space with the use of minimal computational resources, on both an absolute and a relative scale compared to traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialists, and anyone else in charge of aircraft configuration, design studies, and program-level decisions might find the evaluation and optimization method proposed of interest.
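As a concrete example of one of the parametric families named above, the NACA 4-series can be generated directly from its textbook camber and thickness equations; the sketch below is a generic implementation, not code from the thesis.

```python
# Standard NACA 4-digit airfoil coordinates (textbook equations), shown as a
# concrete example of one of the parametric families in the design space.
# Illustrative only; not code from the thesis.
import numpy as np

def naca4(m: float, p: float, t: float, n: int = 101):
    """m: max camber (fraction of chord), p: its chordwise location, t: thickness ratio."""
    x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))   # cosine spacing
    # Thickness distribution (closed trailing edge, last coefficient -0.1036)
    yt = 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1036 * x**4)
    if m > 0:
        # Mean camber line and its slope (piecewise about x = p)
        yc = np.where(x < p, m / p**2 * (2 * p * x - x**2),
                      m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2))
        dyc = np.where(x < p, 2 * m / p**2 * (p - x),
                       2 * m / (1 - p)**2 * (p - x))
    else:
        yc, dyc = np.zeros_like(x), np.zeros_like(x)
    theta = np.arctan(dyc)
    xu, yu = x - yt * np.sin(theta), yc + yt * np.cos(theta)   # upper surface
    xl, yl = x + yt * np.sin(theta), yc - yt * np.cos(theta)   # lower surface
    return (xu, yu), (xl, yl)

upper, lower = naca4(m=0.02, p=0.4, t=0.12)   # NACA 2412
```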
Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious
ERIC Educational Resources Information Center
Cirasella, Jill
2009-01-01
This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…
Rotation Reveals the Importance of Configural Cues in Handwritten Word Perception
Barnhart, Anthony S.; Goldinger, Stephen D.
2013-01-01
A dramatic perceptual asymmetry occurs when handwritten words are rotated 90° in either direction. Those rotated in a direction consistent with their natural tilt (typically clockwise) become much more difficult to recognize, relative to those rotated in the opposite direction. In Experiment 1, we compared computer-printed and handwritten words, all equated for degrees of leftward and rightward tilt, and verified the phenomenon: The effect of rotation was far larger for cursive words, especially when rotated in a tilt-consistent direction. In Experiment 2, we replicated this pattern with all items presented in visual noise. In both experiments, word frequency effects were larger for computer-printed words and did not interact with rotation. The results suggest that handwritten word perception requires greater configural processing, relative to computer print, because handwritten letters are variable and ambiguous. When words are rotated, configural processing suffers, particularly when rotation exaggerates natural tilt. Our account is similar to theories of the “Thatcher Illusion,” wherein face inversion disrupts holistic processing. Together, the findings suggest that configural, word-level processing automatically increases when people read handwriting, as letter-level processing becomes less reliable. PMID:23589201
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.
1988-01-01
A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of high speed flight typical of recent propfan designs. A propeller lifting line, wake program was combined with a compressible, viscous center body interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade number and rotational speeds on the propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.
Support System Effects on the DLR-F6 Transport Configuration in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Rivers, Melissa B.; Hunter, Craig A.; Gatlin, Gregory M.
2009-01-01
An experimental investigation of the DLR-F6 generic transport configuration was conducted in the NASA NTF for use in the Drag Prediction Workshop. As data from this experimental investigation was collected, a large difference in drag values was seen between the NTF test and an ONERA test that was conducted several years ago. After much investigation, it was determined that this difference was likely due to a sting effect correction applied to the ONERA data which NTF does not use. This insight led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the DLR-F6 transport configuration. The configurations computed during this investigation were the isolated wing-body, the wing-body with the full support system (blade and sting), the wing-body with just the blade, and the wing-body with just the sting. The results from this investigation show the same trends that ONERA saw when they conducted a similar experimental investigation in the S2MA tunnel. Computational results suggest that the blade contributed an interference type of effect, the sting contributed a general blockage effect, and the full support system combined these effects.
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing the planet in the 21st century. Scientists build many models to simulate the past and predict climate change for the next decades or century. Most of the models are at a low resolution, with some targeting high resolution in support of practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage the potential spike access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations and to collect simulation results and contributing statistics; 3) a portal serves as the entry point for the project to provide management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can access Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences. It will share how the challenges in computation and software integration were solved.
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructure for the science data system. Typically there is little software reuse, and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across instruments. A principal characteristic is an agile infrastructure architected to allow a variety of configurations, from locally installed compute and storage services to services provisioned via the "cloud" from vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on Apache's Object Oriented Data Technology (OODT) suite of components, which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support its data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument, which will produce over 700,000 soundings over the life of the three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from these data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has proven important in reinforcing the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost-effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
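The economy of the control-theory (adjoint) approach mentioned above comes from the fact that one adjoint solve yields the sensitivity with respect to every design variable; a generic statement of this property, in our notation rather than necessarily the paper's exact formulation, is:

```latex
% Generic adjoint gradient: with flow residual R(w,\alpha)=0 and cost I(w,\alpha),
% a single adjoint solve for \psi gives the sensitivity to all design variables \alpha.
\[
  \left(\frac{\partial R}{\partial w}\right)^{\!T}\!\psi = -\left(\frac{\partial I}{\partial w}\right)^{\!T},
  \qquad
  \frac{dI}{d\alpha} = \frac{\partial I}{\partial \alpha} + \psi^{T}\frac{\partial R}{\partial \alpha}
\]
```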
Opal web services for biomedical applications.
Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W
2010-07-01
Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
Cloud-Based Tools to Support High-Resolution Modeling (Invited)
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Swain, N.; Christensen, S.
2013-12-01
The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include massive amounts of spatial data that must be processed for each new scenario and a lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52°North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.
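A minimal sketch of how a workflow layer like the one described above might hand a single model scenario to HTCondor is shown below; the executable, file names, and resource requests are hypothetical placeholders, and the submit description is kept to basic, standard keywords.

```python
# Minimal sketch of dispatching one model-scenario run through HTCondor, as the
# workflow layer described above might do. Executable, file names and resource
# requests are hypothetical placeholders.
import pathlib
import subprocess

SUBMIT_TEMPLATE = """\
executable   = run_model.sh
arguments    = {scenario}
output       = logs/{scenario}.out
error        = logs/{scenario}.err
log          = logs/{scenario}.log
request_cpus = 4
queue
"""

def submit_scenario(scenario: str) -> None:
    pathlib.Path("logs").mkdir(exist_ok=True)
    submit_file = pathlib.Path(f"{scenario}.sub")
    submit_file.write_text(SUBMIT_TEMPLATE.format(scenario=scenario))
    subprocess.run(["condor_submit", str(submit_file)], check=True)

submit_scenario("snowmelt_scenario_001")
```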
Computational Fluid Dynamics Analysis of Thoracic Aortic Dissection
NASA Astrophysics Data System (ADS)
Tang, Yik; Fan, Yi; Cheng, Stephen; Chow, Kwok
2011-11-01
Thoracic Aortic Dissection (TAD) is a cardiovascular disease with high mortality. An aortic dissection is formed when blood infiltrates the layers of the vascular wall and a new artificial channel, the false lumen, is created. The expansion of the blood vessel due to the weakened wall increases the risk of rupture. Computational fluid dynamics analysis is performed to study the hemodynamics of this pathological condition. Both idealized geometry and realistic patient configurations from computed tomography (CT) images are investigated. Physiological boundary conditions from in vivo measurements are employed. Flow configuration and biomechanical forces are studied. Quantitative analysis allows clinicians to assess the risk of rupture in making decisions regarding surgical intervention.
Configural learning in contextual cuing of visual search.
Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R
2016-08-01
Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
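The elemental-versus-configural contrast at the heart of the simulations can be sketched with a generic delta-rule learner, with and without an added unit coding each whole context; this toy example is ours, not the specific models fitted in the paper.

```python
# Toy contrast between elemental and configural associative learning, in the
# spirit of the simulations described above (generic Rescorla-Wagner-style
# delta-rule updates; not the specific models used in the paper).
import numpy as np

def train(contexts, targets, n_cues, alpha=0.1, configural=False, epochs=50):
    """contexts: list of cue-index tuples; targets: associated outcomes (0..1)."""
    n_units = n_cues + (len(contexts) if configural else 0)
    w = np.zeros(n_units)
    for _ in range(epochs):
        for k, (cues, y) in enumerate(zip(contexts, targets)):
            active = list(cues)
            if configural:
                active.append(n_cues + k)        # one extra unit per whole configuration
            pred = w[active].sum()
            w[active] += alpha * (y - pred)      # delta-rule update on active units
    return w

# Two contexts that share elements but predict different outcomes:
contexts = [(0, 1, 2), (1, 2, 3)]
targets = [1.0, 0.0]
print(train(contexts, targets, n_cues=4, configural=False))   # elemental weights
print(train(contexts, targets, n_cues=4, configural=True))    # with configural units
```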
NASA Technical Reports Server (NTRS)
Korkan, Kenneth D.; Eagleson, Lisa A.; Griffiths, Robert C.
1991-01-01
Current research in the area of advanced propeller configurations for performance and acoustics is briefly reviewed. Particular attention is given to the techniques of Lock and Theodorsen modified for use in the design of counterrotating propeller configurations; a numerical method known as SSTAGE, which is an Euler solver for the unducted fan concept; the NASPROP-E numerical analysis, also based on an Euler solver and used to study the near acoustic fields for the SR series propfan configurations; and a counterrotating propeller test rig designed to obtain an experimental performance/acoustic data base for various propeller configurations.
CernVM WebAPI - Controlling Virtual Machines from the Web
NASA Astrophysics Data System (ADS)
Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.
2015-12-01
Lately, there is a trend in scientific projects to look for computing resources in the volunteering community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance in the user's computer, while at the same time relieving the user of all the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prohibited by offering a per-domain PKI validation mechanism. In this contribution we will overview this new technology, discuss its security features and examine some test cases where it is already in use.
NASA Astrophysics Data System (ADS)
Stirling, Shannon; Kim, Hye-Young
Alpha-tocopherol-ascorbic acid surfactant (EC) is a novel amphiphilic molecule with antioxidant properties, in which a hydrophobic vitamin E and a hydrophilic vitamin C are chemically linked. We have developed atomistic force fields (g54a7) for a protonated (neutral) EC molecule. Our goal is to carry out molecular dynamics (MD) simulations of protonated EC molecules using the newly developed force fields and study the molecular properties. First we ran energy minimization (EM) with one molecule in a vacuum to obtain the low-energy molecular configuration with emtol = 10. We then used Packmol to insert 125 EC molecules in a 3 nm cube and performed MD simulations of this bulk system, from which we measured the bulk density and the evaporation energy of the molecular system. GROMACS 2016 is used for the EM and MD simulation studies. We will present the results of this ongoing research. This work was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number P20GM103424 (Kim). Computational resources were provided by the Louisiana Optical Network Initiative.
Scalable and cost-effective NGS genotyping in the cloud.
Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P
2015-10-15
While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole-genome sequencing data can be accurately rendered into medically actionable reports within a time window of hours and at a cost in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole-genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for achieving clinical turnaround of whole-genome analysis, including optimization of workflow management, strategic batching of individual genomes, and efficient cluster resource configuration.
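The "strategic batching" consideration can be illustrated with a short Python sketch. This is not the GenomeKey/COSMOS API; the ClusterConfig fields and the per-genome core footprint are hypothetical, and the point is only that batch size is derived from the cluster configuration so each batch keeps the nodes saturated.

from dataclasses import dataclass
from itertools import islice

@dataclass
class ClusterConfig:
    nodes: int
    cores_per_node: int
    cores_per_genome: int = 8     # assumed per-sample footprint

def batches(genomes, cfg):
    """Yield lists of genomes sized to saturate the cluster in each batch."""
    per_batch = max(1, (cfg.nodes * cfg.cores_per_node) // cfg.cores_per_genome)
    it = iter(genomes)
    while chunk := list(islice(it, per_batch)):
        yield chunk

if __name__ == "__main__":
    cfg = ClusterConfig(nodes=4, cores_per_node=32)
    samples = [f"sample_{i:03d}" for i in range(25)]
    for i, batch in enumerate(batches(samples, cfg)):
        print(f"batch {i}: {len(batch)} genomes")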
Simulation and stability analysis of supersonic impinging jet noise with microjet control
NASA Astrophysics Data System (ADS)
Hildebrand, Nathaniel; Nichols, Joseph W.
2014-11-01
A model of an ideally expanded Mach 1.5 turbulent jet impinging on a flat plate, using unstructured high-fidelity large eddy simulations (LES) and hydrodynamic stability analysis, is presented. The LES configuration conforms exactly to experiments performed at the STOVL supersonic jet facility of the Florida Center for Advanced Aero-Propulsion, allowing validation against experimental measurements. The LES are repeated for different nozzle-wall separation distances, as well as with and without the addition of sixteen microjets positioned uniformly around the nozzle lip. For some nozzle-wall distances, but not all, the microjets result in substantial noise reduction, and this reduction is associated with a relative absence of large-scale coherent vortices in the jet shear layer. To better understand and predict the effectiveness of microjet noise control, global stability analysis about the LES mean fields is used to extract axisymmetric and helical instability modes connected to the complex interplay between coherent vortices, shocks, and acoustic feedback. We gratefully acknowledge computational resources provided by the Argonne Leadership Computing Facility.
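As an illustrative sketch only: "global stability analysis about LES mean fields" amounts to linearising the governing equations about a mean field and computing the least-damped eigenmodes of the resulting operator. The Python example below uses a 1-D advection-diffusion model problem as a stand-in for the paper's linearised compressible operator; the mean-flow profile, grid, and viscosity are assumptions.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

n, L, nu = 400, 10.0, 0.05
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
U = 1.0 + 0.5 * np.tanh(x - L / 2)          # assumed "mean flow" profile

# Linear operator A = -U d/dx + nu d^2/dx^2 (central differences, Dirichlet BCs).
D1 = diags([-1.0, 0.0, 1.0], [-1, 0, 1], shape=(n, n)) / (2 * dx)
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
A = diags(-U) @ D1 + nu * D2

# Leading (least-damped) global modes: eigenvalues with largest real part.
vals, vecs = eigs(A.tocsc(), k=6, which="LR")
print("leading eigenvalues:", np.sort_complex(vals)[::-1])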
NASA Technical Reports Server (NTRS)
VanDalsem, William R.; Livingston, Mary E.; Melton, John E.; Torres, Francisco J.; Stremel, Paul M.
1999-01-01
Continuous improvement of aerospace product development processes is a driving requirement across much of the aerospace community. As up to 90% of the cost of an aerospace product is committed during the first 10% of the development cycle, there is a strong emphasis on capturing, creating, and communicating better information (both requirements and performance) early in the product development process. The community has responded by pursuing the development of computer-based systems designed to enhance the decision-making capabilities of product development individuals and teams. Recently, the historical foci on sharing geometrical representations and on configuration management have been augmented by: physics-based analysis tools for filling the design-space database; distributed computational resources to reduce response time and cost; web-based technologies to relieve machine dependence; and artificial intelligence technologies to accelerate processes and reduce process variability. Activities such as the Advanced Design Technologies Testbed (ADTT) project at NASA Ames Research Center study the strengths and weaknesses of the technologies supporting each of these trends, as well as the overall impact of their combination on a product development event. Lessons learned and recommendations for future activities will be reported.
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2007-01-01
IPG Execution Service is a framework that reliably executes complex jobs on a computational grid; it is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications or through queries. Each job consists of a set of tasks that perform actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression over the states of other tasks. This formulation allows tasks to be executed in parallel, and it also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, while avoiding overloading local and remote resources.
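A minimal Python sketch of the starting-condition mechanism described above follows. It is not the actual IPG implementation: the State enum, Task class, and scheduling loop are hypothetical, and it only illustrates how a task can be gated on an arbitrary boolean expression over the states of other tasks, including running when another task fails.

from enum import Enum

class State(Enum):
    WAITING = "waiting"
    SUCCEEDED = "succeeded"
    FAILED = "failed"
    CANCELED = "canceled"

class Task:
    def __init__(self, name, action, condition=lambda states: True):
        self.name, self.action, self.condition = name, action, condition
        self.state = State.WAITING

def run_ready(tasks):
    """Execute every waiting task whose starting condition is satisfied."""
    states = {t.name: t.state for t in tasks}
    progressed = True
    while progressed:
        progressed = False
        for t in tasks:
            if t.state is State.WAITING and t.condition(states):
                try:
                    t.action()
                    t.state = State.SUCCEEDED
                except Exception:
                    t.state = State.FAILED
                states[t.name] = t.state
                progressed = True

# Example: "cleanup" runs whether "compute" succeeds or fails.
tasks = [
    Task("stage_in", lambda: print("staging data")),
    Task("compute", lambda: print("running application"),
         lambda s: s["stage_in"] is State.SUCCEEDED),
    Task("cleanup", lambda: print("cleaning up"),
         lambda s: s["compute"] in (State.SUCCEEDED, State.FAILED)),
]
run_ready(tasks)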
Performance of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Li, H.; Nam, H. A.; Pang, X.; Rust, W. N., III; Wohlbier, J.; Yin, L.; Albright, B. J.
2016-10-01
Trinity is a major new DOE computing resource that is going through final acceptance testing at Los Alamos National Laboratory. Trinity has several new and unique architectural features, including two compute partitions, one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes. Additional unique features include the use of on-package high-bandwidth memory (HBM) on the KNL nodes, the ability to configure the KNL nodes with respect to HBM mode and on-die network topology in a variety of operational modes at run time, and the use of solid-state storage via burst-buffer technology to reduce the time required to perform I/O. An effort is in progress to port and optimize VPIC for Trinity and evaluate its performance. Because VPIC was recently released as open source, it is being used as part of the acceptance testing for Trinity and is participating in the Trinity Open Science Program, which has resulted in excellent collaboration with both Cray and Intel. Results will be presented on the performance of VPIC on both the Haswell and KNL partitions, for both single-node runs and runs at scale. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory, under contract DE-AC52-06NA25396, and supported by the LANL LDRD program.