Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules: input and output devices and applications on remote hosts. From a system point of view, the personal computers are divided into three servers according to their functions: Render Server, Device Server and Control Server. The Device Server hosts external modules that require event-based communication, while the Control Server hosts external modules that require synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has recently imposed itself as a technology that can improve machine usage in data centers and decrease the cost and energy of running large numbers of servers. Like virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines, real or virtual, to use at a given time, and to add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, so is the way the autonomic system itself is built: a robust and open framework is needed. One such management framework is Web Service Distributed Management (WSDM), an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks, deploys stateless nodes via network booting, and uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment, and they control when to shut down their clusters.
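The per-experiment VLAN isolation described above implies some simple bookkeeping. The following is a minimal illustrative sketch in Python, not DAVC code; the class name and the VLAN ID range are assumptions.

```python
# Hypothetical sketch: assigning 802.1Q VLAN IDs to private experiment
# clusters so their traffic cannot cross-talk. Names and the VLAN ID
# range are illustrative, not taken from DAVC itself.

class VlanPool:
    def __init__(self, first=100, last=199):
        # Pool of usable 802.1Q VLAN IDs (the valid range is 1-4094).
        self.free = set(range(first, last + 1))
        self.by_experiment = {}

    def allocate(self, experiment):
        if experiment in self.by_experiment:
            return self.by_experiment[experiment]
        if not self.free:
            raise RuntimeError("no VLANs left; wait for a cluster to shut down")
        vlan = min(self.free)
        self.free.remove(vlan)
        self.by_experiment[experiment] = vlan
        return vlan

    def release(self, experiment):
        # Called when the user shuts down their cluster.
        vlan = self.by_experiment.pop(experiment)
        self.free.add(vlan)

pool = VlanPool()
print(pool.allocate("tactical-net-17"))   # -> 100
```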
2000-04-01
be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario, a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at the INFN-Naples ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
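The management scripts mentioned above include adaptive scheduling of virtual machines on hypervisor hosts and automated restart after hypervisor failures. A minimal Python sketch of that placement logic follows, under assumed data structures; this is not the INFN-Naples code.

```python
# Illustrative only: choose the least-loaded hypervisor for a new VM and
# restart VMs from a failed host. Host names and the load metric are
# assumptions for the sketch.

def least_loaded(hypervisors):
    # hypervisors: {"host": {"load": float, "alive": bool}}
    alive = {h: s for h, s in hypervisors.items() if s["alive"]}
    return min(alive, key=lambda h: alive[h]["load"])

def place_vm(vm, hypervisors, placements):
    host = least_loaded(hypervisors)
    placements[vm] = host
    return host

def recover(failed_host, hypervisors, placements):
    # Automated restart of VMs whose hypervisor has failed.
    hypervisors[failed_host]["alive"] = False
    for vm, host in list(placements.items()):
        if host == failed_host:
            place_vm(vm, hypervisors, placements)
```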
Lawrence, Daphne
2009-03-01
Blade servers and virtualization can reduce infrastructure, maintenance, heating, electric, cooling and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization--especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.
Free Factories: Unified Infrastructure for Data Intensive Web Services
Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.
2010-01-01
We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
Evolution of the architecture of the ATLAS Metadata Interface (AMI)
NASA Astrophysics Data System (ADS)
Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of functions provided have increased dramatically. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to its current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle database at CERN, and an AMI backup server.
Designing communication and remote controlling of virtual instrument network system
NASA Astrophysics Data System (ADS)
Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian
2005-01-01
In this paper, a virtual instrument network over a LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. The paper describes the structure of the LabWindows-based virtual instrument network in detail and introduces the essential techniques: the design of the network communication application, the client/server programming model, the realization of communication between remote PCs and the server, the transfer of workstation control, and the server program. The virtual instrument network can also be connected to the Internet. The technology has been verified through its application in an electronic-measurement virtual instrument network that is already in operation; experiments and applications validate the practical value of this design.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
NASA Astrophysics Data System (ADS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd
2016-10-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for seamless integration with the jobs sent by other user groups and honors the fair-share policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept applicable for other cluster operators as well. This contribution reports on the concept and implementation of this OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, is described.
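On the OpenStack side, requesting such a specialized virtual machine can be sketched with the standard openstacksdk API. This is a hedged sketch only: the cloud name, image and flavor are placeholders, and the Moab coupling that actually schedules the placement is not reproduced here.

```python
# Sketch of the OpenStack side only, using the openstacksdk library.
# "hpc-cloud", the image and the flavor are assumed names, not the
# paper's configuration.
import openstack

conn = openstack.connect(cloud="hpc-cloud")   # assumed clouds.yaml entry

def boot_hep_worker(name):
    image = conn.compute.find_image("sl6-hep-worker")   # hypothetical image
    flavor = conn.compute.find_flavor("m1.large")       # hypothetical flavor
    server = conn.compute.create_server(
        name=name, image_id=image.id, flavor_id=flavor.id)
    # Block until the VM is ACTIVE; in the paper's setup its lifetime
    # would be tied to the Moab job that requested it.
    return conn.compute.wait_for_server(server)

worker = boot_hep_worker("hep-vm-001")
```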
The effective use of virtualization for selection of data centers in a cloud computing environment
NASA Astrophysics Data System (ADS)
Kumar, B. Santhosh; Parthiban, Latha
2018-04-01
Data centers are places which consist of networks of remote servers to store, access and process data. Cloud computing is a technology where users worldwide submit their tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of a data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
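The selection rule reduces to summing per-VM energies into server energies, server energies into a data-center total, and routing to the minimum. A minimal sketch with made-up numbers:

```python
# Minimal sketch of the selection rule described above. The structure
# (lists of per-VM energy values) and the numbers are illustrative.

def datacenter_energy(dc):
    # dc: list of servers; each server is a list of per-VM energy values
    return sum(sum(server) for server in dc)

def select_datacenter(datacenters):
    return min(datacenters, key=lambda name: datacenter_energy(datacenters[name]))

dcs = {
    "dc-east": [[3.1, 2.2], [4.0]],      # two servers, three VMs
    "dc-west": [[1.5], [2.0, 1.1]],
}
print(select_datacenter(dcs))            # -> "dc-west"
```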
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a virtual-reality 360 video player using standard OpenGL ES rendering methods. It provides network methods to connect to a web server as an application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process that renders additional content based on the video timestamp and the virtual-reality head point of view.
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
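A toy illustration of reliable multicast with sequence numbers and a sliding window of unacknowledged updates, in the general spirit of the approach above; this is not the paper's protocol, and the group address, port and window size are made up.

```python
# Toy multicast sender with per-update sequence numbers and a window of
# packets awaiting acknowledgment. Retransmission and the distributed
# ACK scheme from the paper are omitted.
import socket

GROUP, PORT, WINDOW = "239.1.2.3", 5007, 32   # assumed values

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

seq = 0
pending = {}                      # seq -> payload, awaiting cluster ACKs

def send_update(payload: bytes):
    global seq
    if len(pending) >= WINDOW:
        raise RuntimeError("window full: wait for acknowledgments")
    packet = seq.to_bytes(4, "big") + payload
    sock.sendto(packet, (GROUP, PORT))
    pending[seq] = payload        # would be retransmitted if no ACK arrives
    seq += 1

def on_ack(acked_seq: int):
    pending.pop(acked_seq, None)  # slide the window forward
```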
Load Balancing in Distributed Web Caching: A Novel Clustering Approach
NASA Astrophysics Data System (ADS)
Tiwari, R.; Kumar, K.; Khan, G.
2010-11-01
The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to a half of requests; the remaining requests are still sent to the remote origin servers. In this paper we develop an algorithm for a distributed web cache that incorporates cooperation among the proxy servers of one cluster. The algorithm combines distributed web-cache concepts with a static hierarchy of geographically based clusters of level-one proxy servers, together with a dynamic mechanism for engaging additional proxy servers when a cluster becomes congested. Clustering addresses the congestion and scalability problems, resulting in a higher cache hit ratio and lower latency for requested pages. The algorithm also guarantees data consistency between the origin server objects and the proxy cache objects.
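As a generic illustration of cluster-based proxy selection (not necessarily the paper's exact scheme), a request can be hashed to a proxy within the client's geographical cluster, with overflow to a neighbouring cluster during congestion:

```python
# Illustrative routing only; cluster names and membership are made up.
import hashlib

clusters = {
    "eu": ["proxy-eu-1", "proxy-eu-2"],
    "us": ["proxy-us-1", "proxy-us-2", "proxy-us-3"],
}

def pick_proxy(region, url, congested=frozenset()):
    if region in congested:
        # Dynamic overflow to a non-congested peer cluster, if any exists.
        region = next((r for r in clusters if r not in congested), region)
    proxies = clusters[region]
    h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return proxies[h % len(proxies)]   # same URL -> same cache, within a cluster

print(pick_proxy("eu", "http://example.org/page"))
```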
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
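A minimal sketch of the two ideas in this abstract, location-based distribution plus load balancing driven by actively reported server loads, under assumed data structures:

```python
# Illustrative only: server names, regions and load values are made up,
# and the paper's actual feedback and redirection algorithms are not
# reproduced.

servers = {
    "cn-north-1": {"region": "cn-north", "load": 0.62},
    "cn-north-2": {"region": "cn-north", "load": 0.35},
    "cn-south-1": {"region": "cn-south", "load": 0.48},
}

def report_load(name, load):
    # Called with each server's active feedback message.
    servers[name]["load"] = load

def redirect(user_region):
    # Prefer servers in the user's region; fall back to the whole pool.
    local = [s for s, v in servers.items() if v["region"] == user_region]
    pool = local or list(servers)
    return min(pool, key=lambda s: servers[s]["load"])

print(redirect("cn-north"))   # -> "cn-north-2"
```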
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
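The blocking model borrowed from telephony is the classic Erlang-B formula: with offered load E erlangs and m stream slots, the probability that a new stream request is rejected is B(E, m), computed with the standard recurrence. A short calculation with illustrative numbers shows the economies of scale and the cost of partitioning the paper quantifies:

```python
# Erlang-B via the standard recurrence B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)),
# with B(E, 0) = 1. The stream counts below are illustrative.

def erlang_b(erlangs: float, servers: int) -> float:
    b = 1.0                                  # B(E, 0) = 1
    for m in range(1, servers + 1):
        b = erlangs * b / (m + erlangs * b)
    return b

# A monolithic server with 1000 stream slots at 90% utilization...
print(erlang_b(900, 1000))   # low blocking probability
# ...versus one of ten partitions of 100 slots at the same utilization.
print(erlang_b(90, 100))     # noticeably higher blocking per partition
```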
A study of factors affecting the adoption of server virtualization technology
NASA Astrophysics Data System (ADS)
Lu, Hsin-Ke; Lin, Peng-Chun; Chiang, Chang-Heng; Cho, Chien-An
2018-04-01
It has become a trend for enterprises and organizations worldwide to apply new technologies to improve their operations. Because traditional servers are costly to build and manage and offer little flexibility, server virtualization technology has become the mainstream. However, organizations will not necessarily obtain the expected benefits from a new technology, because each has its own level of organizational complexity and ability to accept change. The researcher investigated key factors affecting the adoption of virtualization technology in two phases. In phase I, the researcher reviewed the literature and applied the dimensions of the Information Systems Success Model (ISSM) to generalize the factors affecting the adoption of virtualization technology into a preliminary theoretical framework and to develop a questionnaire. In phase II, a three-round Delphi method was used to integrate and gradually converge the opinions of experts from related fields in order to obtain a stable and objective questionnaire of key factors. The results are expected to serve as references for organizations adopting server virtualization technology and for future studies.
Pyglidein - A Simple HTCondor Glidein Service
NASA Astrophysics Data System (ADS)
Schultz, D.; Riedel, B.; Merino, G.
2017-10-01
A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
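The client side of this scheme amounts to polling the advertised demand and submitting only as many glideins as are needed. A hedged sketch follows; the URL and JSON layout are assumptions, not pyglidein's actual interface.

```python
# Illustrative client logic: query a status endpoint that advertises idle
# jobs and their resource requests, then compute how many glideins to
# submit. Endpoint and schema are hypothetical.
import json
from urllib.request import urlopen

def idle_demand(url="http://pool.example.org/status"):
    with urlopen(url) as resp:
        # e.g. [{"count": 40, "gpus": 1, "memory": 8000}]
        return json.load(resp)

def glideins_to_submit(demand, already_queued):
    # Submit only what is needed; submit nothing when there is no demand.
    wanted = sum(entry["count"] for entry in demand)
    return max(0, wanted - already_queued)
```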
Software for Building Models of 3D Objects via the Internet
NASA Technical Reports Server (NTRS)
Schramer, Tim; Jensen, Jeff
2003-01-01
The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.
Perspectives of IT Professionals on Employing Server Virtualization Technologies
ERIC Educational Resources Information Center
Sligh, Darla
2010-01-01
Server virtualization enables a physical computer to support multiple applications logically by decoupling the application from the hardware layer, thereby reducing operational costs and keeping IT organizations competitive in delivering IT services to their enterprise organizations. IT organizations continually examine the efficiency of their internal IT systems and…
Kim, Dong Seong; Park, Jong Sou
2014-01-01
It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependencies between subcomponents (e.g., between physical host failure and the VMM) in a virtualized server system. We also show numerical analysis of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. This model provides a new finding on how to increase system availability by combining software rejuvenation at the VM and VMM levels in a wise manner. PMID:25165732
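For orientation only, a back-of-the-envelope availability calculation for one VM stacked on a VMM, assuming independent failure and repair processes; the paper's SRN model is far richer (multiple VMs, Mandelbugs, aging, dependencies) precisely because simple closed forms like this cannot capture those effects.

```python
# Two-level availability under an independence assumption; all numbers
# are made up for illustration.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

a_vmm = availability(mttf_hours=2000, mttr_hours=2)
a_vm = availability(mttf_hours=500, mttr_hours=0.5)
a_system = a_vmm * a_vm          # the VM is usable only while the VMM is up

downtime_hours_per_year = (1 - a_system) * 8760
print(round(downtime_hours_per_year, 1))
```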
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are connected by a VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network is a way to use a public network infrastructure to connect long-distance servers within a single network. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike existing models, the availability of the auxiliary machines changes at each activation in this enhanced model. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
ERIC Educational Resources Information Center
Sukwong, Orathai
2013-01-01
Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…
KNBD: A Remote Kernel Block Server for Linux
NASA Technical Reports Server (NTRS)
Becker, Jeff
1999-01-01
I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high-performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
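A toy TCP block server conveys the general idea (this is not the Linux nbd wire protocol, nor KNBD): the client sends a fixed-size header carrying an offset and a length, and the server replies with the requested bytes from a backing file.

```python
# Toy read-only remote block server; header format and port are made up.
import socket
import struct

HDR = struct.Struct("!QI")            # offset (u64), length (u32)

def serve(backing_path, port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    with open(backing_path, "rb") as disk:
        conn, _ = srv.accept()
        while True:
            hdr = conn.recv(HDR.size)   # toy: assumes the header arrives whole
            if len(hdr) < HDR.size:
                break                   # client closed the connection
            offset, length = HDR.unpack(hdr)
            disk.seek(offset)
            conn.sendall(disk.read(length))
    srv.close()
```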
NASA Technical Reports Server (NTRS)
Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.
2012-01-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.
Using Mosix for Wide-Area Computational Resources
Maddox, Brian G.
2004-01-01
One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources, usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less-used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.
Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele
2012-12-01
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
Virtualization for the LHCb Online system
NASA Astrophysics Data System (ADS)
Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko
2011-12-01
Virtualization has long been advertised by the IT industry as a way to cut costs, optimise resource usage and manage the complexity of large data centers. The great number and the huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and the manageability of the whole system. We have evaluated the available hypervisors and virtualization solutions and find that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests and describe in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to smoothly migrate services on physical machines to the virtualized infrastructure. The migration procedures are also described. In the final part of the document we describe our recent R&D activities aiming to replace the SAN backend for virtualization with a cheaper iSCSI solution; this will allow all servers and related services to be moved to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low-latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
New virtual laboratories presenting advanced motion control concepts
NASA Astrophysics Data System (ADS)
Goubej, Martin; Krejčí, Alois; Reitinger, Jan
2015-11-01
The paper deals with the development of a software framework for rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution that can run on different target platforms including laptops, tablets and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphic editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.
Virtual network computing: cross-platform remote display and collaboration software.
Konerding, D E
1999-04-01
VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
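The on-the-fly encryption layer described above can be sketched with AES in CTR mode via the Python `cryptography` package; key management and message framing are omitted, and this is not the authors' implementation (which operates at the processor register level).

```python
# Hedged sketch of symmetric on-the-fly encryption of patient data in
# transit, using AES-256 in CTR mode. A fresh nonce is generated per
# message and prepended to the ciphertext.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                 # assumed pre-shared session key

def encrypt(data: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(data) + enc.finalize()

def decrypt(blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

assert decrypt(encrypt(b"ct-volume-bytes")) == b"ct-volume-bytes"
```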
Intelligent Virtual Station (IVS)
NASA Technical Reports Server (NTRS)
2002-01-01
The Intelligent Virtual Station (IVS) is enabling the integration of design, training, and operations capabilities into an intelligent virtual station for the International Space Station (ISS). A viewgraph of the IVS Remote Server is presented.
NASA Astrophysics Data System (ADS)
Shen, Tzu-Chiang; Ovando, Nicolás.; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén.; Ibsen, Jorge; Saldias, Christian
2012-09-01
ALMA is the first astronomical project being constructed and operated with an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modifications to the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution that can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution that divides the main operation array into subarrays using the same concept, which will introduce great flexibility and efficiency into ALMA operations and may eventually simplify the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.
Visualization of N-body Simulations in Virtual Worlds
NASA Astrophysics Data System (ADS)
Knop, Robert A.; Ames, J.; Djorgovski, G.; Farr, W.; Hut, P.; Johnson, A.; McMillan, S.; Nakasone, A.; Vesperini, E.
2010-01-01
We report on work to use virtual worlds for visualizing the results of N-body calculations, on three levels. First, we have written a demonstration 3-body solver entirely in the scripting language of the popularly used virtual world Second Life. Second, we have written a physics module for the open source virtual world OpenSim that performs N-body calculations as the physics engine for the server, allowing natural 3-d visualization of the solution as the solution is being performed. Finally, we give an initial report on the potential use of virtual worlds to visualize calculations which have previously been performed, or which are being performed in other processes and reported to the virtual world server. This work has been performed as part of the Meta-Institute of Computational Astrophysics (MICA). http://www.mica-vw.org
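A minimal leapfrog N-body step of the kind such a physics module would run each frame is sketched below (G = 1 units); this is illustrative Python, unrelated to the Second Life scripting implementation mentioned above.

```python
# Velocity-Verlet (leapfrog) step for a small N-body system, G = 1.
import numpy as np

def accelerations(pos, mass):
    # Pairwise Newtonian gravity; fine for N = 3, O(N^2) in general.
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new
```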
NASA Astrophysics Data System (ADS)
Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.
2009-12-01
Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell® PowerEdge M520 blade server [8] running Ubuntu Linux 14.04... To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server... The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on
RSAT 2015: Regulatory Sequence Analysis Tools
Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques
2015-01-01
RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632
A virtual university Web system for a medical school.
Séka, L P; Duvauferrier, R; Fresnel, A; Le Beux, P
1998-01-01
This paper describes a Virtual Medical University Web server. The project started in 1994 with the development of the French Radiology Server. The main objective of our Virtual Medical University is to offer not only initial training (for students) but also continuing professional education (for practitioners). Our system is based on electronic textbooks, clinical cases (around 4000) and a medical knowledge base called A.D.M. ("Aide au Diagnostic Medical"). We have indexed all electronic textbooks and clinical cases according to the ADM base in order to facilitate navigation on the system. The system base is supported by a relational database management system. The Virtual Medical University, available on the Web, is presently undergoing external evaluation.
Utilization of Virtual Server Technology in Mission Operations
NASA Technical Reports Server (NTRS)
Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.
2010-01-01
Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the advantages and disadvantages of virtualization across the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients with full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
Usage of Thin-Client/Server Architecture in Computer Aided Education
ERIC Educational Resources Information Center
Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit
2014-01-01
With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.
2011-12-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.
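As a flavor of the iRODS-based collection building described above, the following Python sketch uses the python-irodsclient package to put a file into a collection and attach queryable metadata. The host, zone, paths, credentials, and metadata keys are hypothetical placeholders, not details of the NCCS deployment.

# Minimal sketch of registering a data product in an iRODS-based vCDS,
# using the python-irodsclient package. All names below are placeholders.
from irods.session import iRODSSession

with iRODSSession(host="vcds.example.nasa.gov", port=1247,
                  user="rods", password="secret", zone="nccsZone") as session:
    coll = "/nccsZone/home/rods/modis_atmosphere"
    session.collections.create(coll)  # create the collection (assumed absent)

    logical = f"{coll}/MOD08_D3.A2011001.hdf"
    session.data_objects.put("/data/MOD08_D3.A2011001.hdf", logical)

    obj = session.data_objects.get(logical)
    obj.metadata.add("product", "MOD08_D3")  # AVU metadata, queryable later
    obj.metadata.add("year", "2011")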
High-Performance Tiled WMS and KML Web Server
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
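To hit such a server's tile cache, a client's GetMap bounding box must align exactly with the configured request grid. Below is a hedged Python sketch of a client that snaps a point of interest to its tile and fetches it; the endpoint URL, layer name, and tile size are invented for illustration.

# Sketch of a client request against a tiled WMS endpoint: the bounding box
# is snapped to the tile grid so the server can answer from its tile set.
import urllib.parse
import urllib.request

BASE = "https://example.org/wms?"   # hypothetical endpoint
TILE_DEG = 0.5859375                # assumed tile size in degrees

def tile_bbox(lon: float, lat: float) -> tuple:
    """Snap a point to the lower-left corner of its tile."""
    west = (lon // TILE_DEG) * TILE_DEG
    south = (lat // TILE_DEG) * TILE_DEG
    return (west, south, west + TILE_DEG, south + TILE_DEG)

params = {
    "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.1.1",
    "LAYERS": "global_mosaic", "SRS": "EPSG:4326",
    "BBOX": ",".join(map(str, tile_bbox(-118.2, 34.1))),
    "WIDTH": "512", "HEIGHT": "512", "FORMAT": "image/jpeg",
}
with urllib.request.urlopen(BASE + urllib.parse.urlencode(params)) as resp:
    open("tile.jpg", "wb").write(resp.read())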
[The virtual university in medicine. Context, concepts, specifications, users' manual].
Duvauferrier, R; Séka, L P; Rolland, Y; Rambeau, M; Le Beux, P; Morcet, N
1998-09-01
The widespread use of Web servers, with the emergence of interactive functions and the possibility of credit card payment via the Internet, together with the requirement for continuing education and the subsequent need for a computer to link into the health care network, have prompted the development of a virtual university scheme on the Internet. The Virtual University of Radiology is not only a computer-assisted teaching tool with a set of attractive features, but also a powerful engine allowing the organization, distribution and control of medical knowledge available on the Web server. The scheme provides access to general information, a secretary's office for enrollment and the Virtual University itself, with its library, image database, a forum for subspecialties and clinical case reports, an evaluation module and various guides and help tools for diagnosis, prescription and indexing. Currently the Virtual University of Radiology offers diagnostic imaging, but it can also be used by other specialties and for general practice.
Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Pretto, N.; Poiesi, F.
2017-11-01
We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
ICCE/ICCAI 2000 Full & Short Papers (Virtual Lab/Classroom/School).
ERIC Educational Resources Information Center
2000
This document contains the following full and short papers on virtual laboratories, classrooms, and schools from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Collaborative Learning Support System Based on Virtual Environment Server for Multiple…
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
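A minimal sketch of the hand-off such a web interface performs, assuming OpenPBS's standard qsub command: wrap a command-line bioinformatics program in a job script and submit it to a queue. The queue name, resource request, and wrapped command are placeholders, not ClusterControl's actual internals.

# Sketch: submit a shell command to an OpenPBS queue and return the job id.
import subprocess
import tempfile

def submit(command: str, queue: str = "bioinfo") -> str:
    """Write a PBS job script for `command` and submit it with qsub."""
    script = (
        "#!/bin/sh\n"
        f"#PBS -q {queue}\n"
        "#PBS -l nodes=1\n"
        f"{command}\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    out = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return out.stdout.strip()  # e.g. "1234.pbs-server"

job_id = submit("blastall -p blastp -d nr -i query.fasta -o query.out")
print("submitted", job_id)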
Designing a Virtual-Reality-Based, Gamelike Math Learning Environment
ERIC Educational Resources Information Center
Xu, Xinhao; Ke, Fengfeng
2016-01-01
This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…
ERIC Educational Resources Information Center
Bolch, Matt
2009-01-01
School districts across the country have always had to do more with less. Funding goes only so far, leaving administrators and IT staff to find innovative ways to save money while maintaining a high level of academic quality. Creating virtual servers accomplishes both tasks, district technology personnel say. Virtual environments not only allow…
Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories
ERIC Educational Resources Information Center
Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher
2009-01-01
Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…
ERIC Educational Resources Information Center
Schaffhauser, Dian
2012-01-01
Half of servers in higher ed are virtualized. But that number's not high enough for Link Alander, interim vice chancellor and CIO at the Lone Star College System (Texas). He aspires to see 100 percent of the system's infrastructure requirements delivered as IT services from its own virtualized data centers or other cloud-based operators. Back in…
SciServer Compute brings Analysis to Big Data in the Cloud
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara
2016-06-01
SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally, but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.
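The sort of server-side notebook cell this enables might look like the following Python sketch, which assumes the SciServer client library and its CasJobs query helper; the login credentials, SQL, and data context are placeholders.

# Sketch of a SciServer Compute notebook cell: query an SDSS context through
# CasJobs and analyze the result next to the data, without downloading the
# catalog. Credentials and context below are placeholders.
from SciServer import Authentication, CasJobs

Authentication.login("my_user", "my_password")

sql = """
SELECT TOP 1000 p.ra, p.dec, p.r
FROM PhotoObj AS p
WHERE p.r BETWEEN 15 AND 17
"""
stars = CasJobs.executeQuery(sql, context="DR12")  # returns a pandas DataFrame

print(stars.describe())  # summary statistics computed server-side, near the data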
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which by April 2015 will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware is a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk storage is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
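As a flavor of the checksumming workload mentioned above, here is a hedged per-node Python sketch that hashes files in parallel across local cores; a real JASMIN campaign would shard the file list across batch jobs, and the archive path shown is invented.

# Per-node worker for a bulk checksumming task: hash files in parallel.
import hashlib
from multiprocessing import Pool
from pathlib import Path

def sha256_of(path: Path) -> tuple:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # 1 MiB blocks
            h.update(block)
    return (str(path), h.hexdigest())

if __name__ == "__main__":
    files = [p for p in Path("/archive/shard-0001").rglob("*") if p.is_file()]
    with Pool() as pool:  # one worker per core
        for name, digest in pool.imap_unordered(sha256_of, files, chunksize=64):
            print(digest, name)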
Things That Go "Bump": in the Virtual Night.
ERIC Educational Resources Information Center
Fore, Julie A.
1997-01-01
Introduces concepts of server security and includes articles and sidebars of firsthand accounts of consequences of not devoting enough time to security measures. Outlines the following factors to consider when evaluating a server's risk potential: confidentiality/reproducibility of the data; complexity of the system; backup system and hardware…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, K; Freeman, J; Zavalkovskiy, B
Purpose: Situated 20 miles from 5 major fault lines in California's Bay Area, Stanford University has a critical need for IT infrastructure planning to handle the high probability of devastating earthquakes. Recently, a multi-million dollar project has been underway to overhaul Stanford's radiation oncology information systems, maximizing planning system performance and providing true disaster recovery abilities. An overview of the project will be given with particular focus on lessons learned throughout the build. Methods: In this implementation, two isolated external datacenters provide geographical redundancy to Stanford's main campus datacenter. All data stored to our storage area network (SAN) is mirrored in real time. In each datacenter, hardware/software virtualization was heavily implemented to maximize server efficiency and provide a robust mechanism to seamlessly migrate users in the event of an earthquake. System performance is routinely assessed through the use of virtualized data robots, able to log in to the system at scheduled times, perform routine planning tasks and report timing results to a performance dashboard. A substantial dose calculation framework (DCF, 608 CPU cores) has been constructed as part of the implementation. Results: Migration to a virtualized server environment with a high performance SAN has resulted in up to a 45% speed-up of common treatment planning tasks. Switching to the 608-core DCF has resulted in a 280% speed increase in dose calculations. Server tuning was found to further improve read/write performance by 20%. Disaster recovery tests are carried out quarterly and, although successful, remain time consuming to perform and verify. Conclusion: Achieving true disaster recovery capabilities is possible through server virtualization, support from skilled IT staff and leadership. Substantial performance improvements are also achievable through careful tuning of server resources and disk read/write operations. Developing a streamlined method to comprehensively test failover is a key requirement for the system's success.
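A data robot of the kind described can be pictured as a scheduled probe that times a routine task and reports the result to a dashboard. In the Python sketch below, the dashboard endpoint and the timed task body are hypothetical stand-ins for the planning-system operations.

# Sketch of a virtualized "data robot" performance probe.
import json
import time
import urllib.request

DASHBOARD = "https://dashboard.example.edu/api/timings"  # placeholder endpoint

def timed(task_name, task):
    """Run a task, time it, and POST the timing to the dashboard."""
    t0 = time.perf_counter()
    task()
    elapsed = time.perf_counter() - t0
    payload = json.dumps({"task": task_name, "seconds": elapsed}).encode()
    req = urllib.request.Request(DASHBOARD, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    return elapsed

def open_and_recompute_plan():
    time.sleep(2.0)  # stand-in for driving the planning system's API

print(timed("recompute-plan", open_and_recompute_plan), "s")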
DPM — efficient storage in diverse environments
NASA Astrophysics Data System (ADS)
Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio
2014-06-01
Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need, and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), largely reducing the head node's hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, and it also supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with only 700 MHz single cores and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.
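A simple version of such a cost metric folds hardware price, energy over the service life, and any cloud fees into a cost per unit of delivered throughput. The Python sketch below illustrates the shape of the comparison; all numbers are made-up placeholders, not measurements from the paper.

# Toy total-cost-of-ownership comparison across DPM deployment styles.
YEARS = 4
POWER_PRICE = 0.20  # assumed EUR per kWh

setups = {
    # name: (hardware EUR, watts, cloud fees EUR/yr, requests/s measured)
    "raspberry-pi": (50,   5,   0,   30),
    "server":       (4000, 250, 0,   2500),
    "cloud-vm":     (0,    0,   900, 1200),
}

for name, (hw, watts, fees, rps) in setups.items():
    energy = watts / 1000 * 24 * 365 * YEARS * POWER_PRICE  # kWh cost over life
    total = hw + energy + fees * YEARS
    print(f"{name:14s} {total:8.0f} EUR over {YEARS}y "
          f"-> {total / rps:6.2f} EUR per req/s")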
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.
Program summary
Program title: Application Hosting Environment 2.0
Catalogue identifier: AEEJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence, Version 2
No. of lines in distributed program, including test data, etc.: not applicable
No. of bytes in distributed program, including test data, etc.: 1 685 603 766
Distribution format: tar.gz
Programming language: Perl (server), Java (client)
Computer: x86
Operating system: Linux (server), Linux/Windows/MacOS (client)
RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
Classification: 6.5
External routines: VirtualBox (server), Java (client)
Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
Running time: Not applicable
References:
[1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
[2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
Running GUI Applications on Peregrine from OSX | High-Performance Computing
Learn how to use Virtual Network Computing (VNC) to access a Linux graphical desktop environment on Peregrine. An SSH tunnel forwards a local port (on, e.g., your laptop) to the VNC server process on Peregrine, which manages a virtual desktop. The VNC password you set is persistent, so remember it; you will use this password whenever accessing your virtual desktop.
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massive multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
GRAMM-X public web server for protein–protein docking
Tovchigrechko, Andrey; Vakser, Ilya A.
2006-01-01
Protein docking software GRAMM-X and its web interface extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian
2011-06-01
Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds of the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.
Construction and application of Red5 cluster based on OpenStack
NASA Astrophysics Data System (ADS)
Wang, Jiaqing; Song, Jianxin
2017-08-01
With the application and development of cloud computing technology in various fields, data center resource utilization has improved markedly, and systems built on cloud platforms have gained extensibility and stability. Under the traditional approach, Red5 cluster resource utilization is low and system stability is poor. This paper uses cloud computing's efficient resource allocation to build a Red5 server cluster based on OpenStack. Multimedia applications can be published to the Red5 cloud server cluster. The system achieves flexible provisioning of computing resources and also greatly improves cluster stability and service efficiency.
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud
Karimi, Kamran; Vize, Peter D.
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782
Virtual shelves in a digital library: a framework for access to networked information sources.
Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E
1995-01-01
Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies in new ways traditional methods of library classification and cataloging. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
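The two-level mapping is easy to picture in code. In the Python sketch below, a call number resolves to a location-independent shelf identifier, and a location directory then resolves that identifier to a current network address; the codes and hosts are invented placeholders.

# Sketch of the virtual-shelf resolution scheme described above.

# 1) Classification: call number (standard vocabulary code) -> shelf id.
call_number_to_shelf = {
    "C14.280": "shelf:radiology",
    "C10.228": "shelf:neurology",
}

# 2) Location directory: shelf id -> present network location. Only this
#    table changes when a server moves; call numbers stay valid.
shelf_directory = {
    "shelf:radiology": "http://rad-server.example.edu/",
    "shelf:neurology": "http://neuro.example.org/",
}

def resolve(call_number: str) -> str:
    shelf = call_number_to_shelf[call_number]
    return shelf_directory[shelf]

print(resolve("C14.280"))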
Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL
NASA Astrophysics Data System (ADS)
Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong
2011-12-01
We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are also outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
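The routing behavior provided by the health monitors can be sketched generically: probe each pool member and round-robin over the ones that pass. The Python below is a simplification with invented hostnames, not F5's actual monitor logic.

# Sketch of health-monitor-driven pool routing.
import socket
from itertools import cycle

POOL = [("gums-01.example.gov", 8443), ("gums-02.example.gov", 8443)]

def healthy(member, timeout=1.0) -> bool:
    """TCP-connect probe, a simplification of an LTM health monitor."""
    try:
        with socket.create_connection(member, timeout=timeout):
            return True
    except OSError:
        return False

members = cycle(POOL)

def pick_member():
    """Round-robin over pool members, skipping any that fail the probe."""
    for _ in range(len(POOL)):
        m = next(members)
        if healthy(m):
            return m
    raise RuntimeError("all pool members down; fail over to standby tier")

print("routing to", pick_member())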
Assessment of physical server reliability in multi cloud computing system
NASA Astrophysics Data System (ADS)
Kalyani, B. J. D.; Rao, Kolasani Ramchand H.
2018-04-01
Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, they create room for competitive pricing that minimizes the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
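If the three layers are treated as independent and in series, the combination step reduces to a product of per-layer reliabilities, as in the Python sketch below; the layer values are illustrative only, not results from the paper.

# Sketch: combine per-layer reliabilities into a system-level figure.
def system_reliability(app: float, virt: float, server: float) -> float:
    """Series combination: every layer must be up for the app to be up."""
    return app * virt * server

r = system_reliability(app=0.999, virt=0.995, server=0.990)
print(f"multi-cloud application reliability ~ {r:.4f}")  # ~0.9841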
Web Program for Development of GUIs for Cluster Computers
NASA Technical Reports Server (NTRS)
Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward
2003-01-01
WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.
A Laboratory for Characterizing the Efficacy of Moving Target Defense
2016-10-25
...of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization, emulation, and mutable network... goal with the resource constraints of a small number of servers, and making virtual nodes "real enough" from the view of attackers. Unfortunately, with... we at College of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization, emulation, and mutable...
A self-configuring control system for storage and computing departments at INFN-CNAF Tierl
NASA Astrophysics Data System (ADS)
Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir
2015-05-01
The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in several GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol, and finally data writing and reading operations on the magnetic tape backend. One of the most important and essential points in obtaining a reliable service is a control system that warns when problems arise and is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations can change during daily operations: GPFS cluster node roles may be modified, in which case the obsolete nodes must be removed from the production control system and the new servers added to those already present. Manual management of all these changes can be difficult when several changes occur, can take a long time, and is easily subject to human error or misconfiguration. For these reasons we have developed a control system able to configure itself whenever a change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawback. There are three key elements in this system. The first is a software configuration service (e.g. Quattor or Puppet) for the server machines that we want to monitor with the control system; this service must ensure the presence of appropriate sensors and custom scripts on the monitored nodes and should be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as the type of hardware, the network switch to which the machine is connected, whether the machine is real (physical) or virtual, the hypervisor to which it belongs, and so on. The last key element is control system software (in our implementation we chose Nagios), capable of assessing the status of the servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements was achieved by appropriate scripts and custom implementations that allow the self-configuration of the system according to a decisional logic; the whole combination of the above-mentioned components is discussed in depth in this paper.
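The self-configuration step can be pictured as template expansion over the machine database: regenerate the Nagios host definitions whenever the inventory changes. The Python sketch below uses invented inventory rows and a standard Nagios host-object template; it is a simplification of the custom scripts described, not their actual code.

# Sketch: emit Nagios host definitions from a machine inventory, so newly
# added servers are monitored without manual edits.
INVENTORY = [
    # (hostname, role, is_virtual) -- invented example rows
    ("gpfs-01", "gpfs-nsd", False),
    ("gridftp-03", "gridftp", False),
    ("xrootd-07", "xrootd", True),
]

TEMPLATE = """define host {{
    use         generic-host
    host_name   {name}
    hostgroups  {role}
    _virtual    {virt}
}}
"""

with open("/etc/nagios/conf.d/hosts.cfg", "w") as cfg:
    for name, role, virt in INVENTORY:
        cfg.write(TEMPLATE.format(name=name, role=role, virt=int(virt)))
# followed by a configuration reload of the Nagios service (site-specific)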
NASA Technical Reports Server (NTRS)
Lyle, Stacey D.
2009-01-01
A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, determining whether the rover(s) is/are within a set of boundaries or a specific area before granting access to critical geospatial information. The advantage lies in that the system only admits those within the designated geospatial boundaries or areas into the server. The Geospatial Authentication software has two parts: Server and Client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is a Windows CE GUI application, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
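The server-side boundary test at the heart of this scheme amounts to a point-in-polygon check on the client's GPS-reported position. The Python sketch below uses the standard ray-casting algorithm with an invented boundary; a real deployment would also validate the GPS signal structure itself.

# Sketch: admit a client only if its reported position is inside the zone.
def inside(point, polygon) -> bool:
    """Ray-casting point-in-polygon test; polygon is [(lon, lat), ...]."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

zone = [(-97.45, 27.70), (-97.40, 27.70), (-97.40, 27.75), (-97.45, 27.75)]
rover = (-97.42, 27.72)
print("grant access" if inside(rover, zone) else "deny access")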
Cloud Computing with iPlant Atmosphere.
McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos
2013-10-15
Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines," and the system then deals with a main unreliable machine, a spare, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; information on the system is naturally delayed. An analog of the N-policy is applied to restrict the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture using stochastic optimization techniques.
sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.
Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael
2017-01-01
High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of this huge amount of data. One of the reasons is that many analysis types require a Linux environment, but dedicated computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome those problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a large number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.
Request queues for interactive clients in a shared file system of a parallel computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin
Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks, and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
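The queue arrangement can be sketched as two queues drained under a weighted policy. In the Python below, the 3:1 interactive-to-batch share stands in for whatever allocation the virtual machine monitor actually makes; it is an invented placeholder policy, not the patented mechanism.

# Sketch: drain interactive and batch metadata queues under a weighted share.
from collections import deque

interactive_q = deque()  # filled by the interactive client proxy
batch_q = deque()        # filled by the batch client proxy

def next_request(interactive_share: int = 3):
    """Serve up to `interactive_share` interactive requests per batch one."""
    next_request.count = getattr(next_request, "count", 0)
    if interactive_q and (next_request.count < interactive_share or not batch_q):
        next_request.count += 1
        return interactive_q.popleft()
    next_request.count = 0
    return batch_q.popleft() if batch_q else None

interactive_q.extend(["stat A", "stat B"])
batch_q.append("readdir /scratch")
while (req := next_request()) is not None:
    print("serving:", req)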
Remotely accessible laboratory for MEMS testing
NASA Astrophysics Data System (ADS)
Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.
2010-02-01
We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka Microelectromechanical Systems, MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side is present at Texas Tech University, and hosts a server machine that runs the Linux operating system and is used for interfacing the MEMS lab with the outside world via the internet. Controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabVIEW Virtual Instruments. An optical microscope (100x) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.
ERIC Educational Resources Information Center
Waters, John K.
2009-01-01
John Abdelmalak, director of technology for the School District of the Chathams, was pretty sure it was time to jump on the virtualization bandwagon last year when he invited Dell to conduct a readiness assessment of his district's servers. When he saw just how little of their capacity was being used, he lost all doubt. Abdelmalak is one of many…
Mixed Methods for Mixed Reality: Understanding Users' Avatar Activities in Virtual Worlds
ERIC Educational Resources Information Center
Feldon, David F.; Kafai, Yasmin B.
2008-01-01
This paper examines the use of mixed methods for analyzing users' avatar-related activities in a virtual world. Server logs recorded keystroke-level activity for 595 participants over a six-month period in Whyville.net, an informal science website. Participants also completed surveys and participated in interviews regarding their experiences.…
Cloud services for the Fermilab scientific stakeholders
Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...
2015-12-23
As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.
SeMPI: a genome-based secondary metabolite prediction and identification web server.
Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan
2017-07-03
The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides the opportunity for efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not yet been identified. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by modular type I polyketide synthases (PKS). In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
DMINDA: an integrated web server for DNA motif identification and analyses
Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying
2014-01-01
DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important to elucidation of the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. PMID:24753419
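As an illustration of function (ii) above, a toy motif scanner can slide a position weight matrix (PWM) over a sequence and report windows whose score clears a threshold. The log-odds scoring against a uniform background is an assumption for illustration, not DMINDA's published algorithm.

```python
# Toy PWM scan: yield (position, score) for windows scoring above a threshold.
import math

def scan_motif(seq: str, pwm: list[dict[str, float]], threshold: float):
    w = len(pwm)
    for i in range(len(seq) - w + 1):
        # log-odds of the window against a uniform 0.25 background
        score = sum(math.log2(pwm[j].get(seq[i + j], 1e-9) / 0.25)
                    for j in range(w))
        if score >= threshold:
            yield i, score

pwm = [{"A": 0.8, "C": 0.1, "G": 0.05, "T": 0.05},
       {"A": 0.05, "C": 0.05, "G": 0.1, "T": 0.8}]   # toy 2-bp motif "AT"
print(list(scan_motif("GGATATCC", pwm, threshold=2.0)))  # hits at positions 2 and 4
```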
Cluster tool solution for fabrication and qualification of advanced photomasks
NASA Astrophysics Data System (ADS)
Schaetz, Thomas; Hartmann, Hans; Peter, Kai; Lalanne, Frederic P.; Maurin, Olivier; Baracchi, Emanuele; Miramond, Corinne; Brueck, Hans-Juergen; Scheuring, Gerd; Engel, Thomas; Eran, Yair; Sommer, Karl
2000-07-01
The reduction of wavelength in optical lithography, together with phase-shift technology and optical proximity correction (OPC), requires rapid, cost-effective qualification of photomasks. Knowledge about CD variation, loss of pattern fidelity (especially for OPC patterns) and mask defects, with respect to their impact at wafer level, is becoming a key issue for mask quality assessment. As part of the European Community supported ESPRIT project 'Q-CAP', a new cluster concept has been developed, which allows the combination of hardware tools as well as software tools via network communication. It is designed to be open for any tool manufacturer and mask house. The bi-directional network access allows the exchange of all relevant mask data, including grayscale images, measurement results, lithography parameters, defect coordinates, layout data, process data, etc., and its storage in a SQL database. The system uses SEMI format descriptions as well as standard network hardware and software components for the client-server communication. Each tool is used mainly to perform its specific application without spending expensive time on optional analysis, but the availability of the database allows each component to share the full data set gathered by all components. Therefore, the cluster can be considered as one single virtual tool. The paper shows the advantages of the cluster approach, the benefits of the tools linked together already, and a vision of a mask house in the near future.
System-Level Virtualization Research at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J
2010-01-01
System-level virtualization, originally a technique for effectively sharing what were then considered large computing resources, faded from the spotlight as individual workstations gained in popularity with a one machine - one user approach, and is today enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival that of anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing that single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.
The EBI SRS server-new features.
Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure
2002-08-01
Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at EBI, as well as the European public access point to the MEDLINE database provided by the US National Library of Medicine (NLM). It is a reference server for the latest developments in data and application integration. The new additions include: the concept of virtual databases; integration of XML databases such as the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, and metabolic pathways; user-friendly data representation in 'Nice views'; and SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG, freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.
Bio-inspired diversity for increasing attacker workload
NASA Astrophysics Data System (ADS)
Kuhn, Stephen
2014-05-01
Much of the traffic in modern computer networks is conducted between clients and servers, rather than client-to-client. As a result, servers represent a high-value target for collection and analysis of network traffic. Because they reside at a single network location (i.e., IP/MAC address) for long periods of time, servers present a static target for surveillance and a unique opportunity to observe the network traffic. Although servers present a heightened value for attackers, the security community as a whole has shifted more towards protecting clients in recent years, leaving a gap in coverage. In addition, servers typically remain active on networks for years, potentially decades. This paper builds on previous work that demonstrated a proof of concept leveraging existing technology for increasing attacker workload. Here we present our clean-slate approach to increasing attacker workload through a novel hypervisor and micro-kernel, utilizing next-generation virtualization technology to create synthetic diversity of the server's presence, including the hardware components.
ERIC Educational Resources Information Center
Korolov, Maria
2011-01-01
Unhappy with conditions in Second Life, educators are migrating to a developing virtual world that offers them greater autonomy and a safer platform for their students at far less a cost. OpenSimulator is an open source virtual world platform that schools can run for free on their own servers or can get cheaply and quickly--the space can be up and…
Graph and Network for Model Elicitation (GNOME Phase 2)
2013-02-01
[Report fragments; only partial text was recovered.] GNOME UI components for the NOEM web client; Figure 17 shows sampling in the web client. The server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling.
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
A Virtual Research Environment for a Secondary Ion Mass Spectrometer (SIMS)
NASA Astrophysics Data System (ADS)
Wiedenbeck, M.; Schäfer, L.; Klump, J.; Galkin, A.
2013-12-01
Overview: This poster describes the development of a Virtual Research Environment for the Secondary Ion Mass Spectrometer (SIMS) at GFZ Potsdam. Background: Secondary Ion Mass Spectrometers (SIMS) are extremely sensitive instruments for analyzing the surfaces of solid and thin film samples. These instruments are rare and expensive, and experienced operators are very highly sought after. As such, measurement time is a precious commodity, until now accessible only to small numbers of researchers. The challenge: The Virtual SIMS Project aims to set up a Virtual Research Environment for the operation of the CAMECA IMS 1280-HR instrument at the GFZ Potsdam. The objective of the VRE is to provide SIMS access not only to researchers locally present in Potsdam but also to scientists working with SIMS cooperation partners in, e.g., South Africa, Brazil or India. The requirements: The system should address the complete spectrum of laboratory procedures - from online application for measurement time, to remote access for data acquisition, to data archiving for subsequent publication and future reuse. The approach: The targeted Virtual SIMS Environment will consist of: 1. a Web Server running the Virtual SIMS website, providing general information about the project, lab access proposal forms and a calendar for the timing of project-related tasks; 2. a LIMS Server, responsible for scheduling procedures, data management and, if applicable, accounting and billing; 3. a Remote SIMS Tool, devoted to the operation of the experiment within a remote control environment; 4. a Publishing System, which supports the publication of results in cooperation with the GFZ Library services; 5. a Training Simulator, which offers the opportunity to rehearse experiments and to prepare for possible events such as power outages or interruptions to broadband services. First results: The SIMS Virtual Research Environment will be based mainly on open source software, the only exception being the CAMECA IMS 1280-HR SIMS operating under LabView. The Publishing System will be based on eSciDoc, which is already successfully used by the GFZ scientific library. For the LIMS Server we are currently testing various options. The challenge, however, is the successful integration of all the various components and, where necessary, the definition of useful interfaces between the modules.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.
2009-12-01
Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discoveries simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers and limit the semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence. It also supports data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http://www.unidata.ucar.edu/software/netcdf/ncml/), which enables dataset virtualization. An NcML file can expose a single file, a subset, or an aggregation of files as a single, logical dataset. With the appropriate NcML adapter, the TSDS can read data from its native format, eliminating the need for data providers to reformat their data and lowering the barrier for integration. Data can even be read via remote services, which is important for enabling VxOs to be truly virtual. The TSDS provides reading, writing, and filtering capabilities through a modular framework. A collection of standard modules is available, and customized modules are easy to create and integrate. This way the TSDS can read and write data in a variety of formats and apply filters to them in a manner customizable to meet the needs of both data providers and consumers. The TSDS server is currently in use serving solar irradiance data from the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/), and is being introduced into the space physics virtual observatory community. The TSDS software is Open Source and available at SourceForge.
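For illustration, a client request for a time subset of one variable against a TSDS/OPeNDAP-style endpoint could look like the sketch below. The host, dataset name, and variable names are hypothetical; only the general shape of a DAP 2.0 constraint expression (projection plus selection) is taken from the spec.

```python
# Sketch of a time-subset request against a hypothetical TSDS dataset URL.
import urllib.request

base = "http://example.edu/tsds/irradiance"         # hypothetical dataset URL
# DAP-style constraint: project the 'tsi' variable, select a time range.
# Note: a production client should percent-encode the constraint characters.
url = base + ".asc?tsi&time>=2009-01-01&time<2009-02-01"

with urllib.request.urlopen(url) as resp:           # plain HTTP request/response
    print(resp.read().decode()[:200])               # ASCII response, first 200 chars
```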
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more widespread and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is provided an image in front of their eyes with the head-mounted display, and the front-facing camera captures video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of markers, which addresses the limitations of markers: they are restricted to black and white, are inapplicable under varying environmental conditions, and in particular fail when the marker is partially occluded. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
The architecture of a virtual grid GIS server
NASA Astrophysics Data System (ADS)
Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting
2008-10-01
Grid computing technology provides a service-oriented architecture for distributed applications. The virtual grid GIS server is a distributed and interoperable enterprise GIS architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layer, which together compose Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the coupling between applications and GIS platforms: enterprise applications are independent of particular GIS platforms, allowing application developers to focus on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.
2009-09-01
[Thesis table-of-contents fragments; only partial text was recovered.] Topics include Diffie-Hellman key exchange; GhostNet setup; installation of OpenVPN; verifying the secure connection; running OpenVPN as a server on Windows; generating server and client keys; transferring keys to the client; and configuring OpenVPN to use certificates.
Energy Efficiency in Small Server Rooms: Field Surveys and Findings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh
Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures such as raising cooling set points and better airflow management, to more involved but cost-effective measures including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements, and IT and cooling efficiency, should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation, and in the implementation of energy efficiency measures in small server rooms.
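The PUE metric used in this survey is simply total facility power divided by IT equipment power, so a PUE of 2.1 means more than half the power goes to overhead (cooling, power distribution) rather than computing. A one-liner makes the 1.5-2.1 range above concrete; the kW figures are invented for illustration.

```python
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(42.0, 20.0))   # 2.1 -- the worst room surveyed
print(pue(30.0, 20.0))   # 1.5 -- the best
```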
Integrated disease management pilot for diabetes.
Clarke, David; Rowe, Ian; Gribben, Barry; Brimacombe, Phil; Engel, Thorsten
2002-01-01
A New Zealand diabetes program integrates management systems for hospitals and physician practices through a shared integrated care server (ICS), which supports collaborative patient management, virtual consults, and clinical feedback.
Marsh, James; Glencross, Mashhuda; Pettifer, Steve; Hubbold, Roger
2006-01-01
Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority, and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and, thus, a hybrid architecture is required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.
Frank, M S; Dreyer, K
2001-06-01
We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.
Dynamic Extension of a Virtualized Cluster by using Cloud Resources
NASA Astrophysics Data System (ADS)
Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter
2012-12-01
The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riot, V.
2009-06-05
The software is a modification to the Mantis BT v1.5 open source application provided by the Mantis BT group to support clustered web servers. It also provides various cosmetic modifications used at LLNL.
Liu, Rebecca; Hooker, Neal H; Parasidis, Efthimios; Simons, Christopher T
2017-03-01
The "all-natural" label is used extensively in the United States. At many point-of-purchase locations, employed servers provide food samples and call out specific label information to influence consumers' purchase decisions. Despite these ubiquitous practices, it is unclear what information is conveyed to consumers by the all-natural label or how it impacts judgments of perceived food quality, nutritional content, and acceptance. We used a novel approach incorporating immersive technology to simulate a virtual in-store sampling scenario where consumers were asked by a server to evaluate identical products with only one being labeled all-natural. Another condition evaluated the impact of the in-store server additionally emphasizing the all-natural status of one sample. Results indicated the all-natural label significantly improved consumer's perception of product quality and nutritional content, but not liking or willingness to pay, when compared to the regular sample. With the simple emphasis of the all-natural claim by the in-store server, these differences in quality and nutritional content became even more pronounced, and willingness to pay increased significantly by an average of 8%. These results indicate that in a virtual setting consistent with making food purchases, an all-natural front-of-pack label improves consumer perceptions of product quality and nutritional content. In addition, information conveyed to consumers by employed servers has a further, substantial impact on these variables suggesting that consumers are highly susceptible to social influence at the point of purchase. © 2017 Institute of Food Technologists®.
Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil
2012-06-15
A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of the object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and results are given back to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.
ERIC Educational Resources Information Center
Sylvia, Margaret
1993-01-01
Describes one college library's experience with a gateway for dial-in access to its CD-ROM network to increase access to automated index searching for students off-campus. Hardware and software choices are discussed in terms of access, reliability, affordability, and ease of use. Installation problems are discussed, and an appendix lists product…
Real Students and Virtual Field Trips
NASA Astrophysics Data System (ADS)
de Paor, D. G.; Whitmeyer, S. J.; Bailey, J. E.; Schott, R. C.; Treves, R.; Scientific Team Of Www. Digitalplanet. Org
2010-12-01
Field trips have always been one of the major attractions of geoscience education, distinguishing courses in geology, geography, oceanography, etc., from laboratory-bound sciences such as nuclear physics or biochemistry. However, traditional field trips have been limited to regions with educationally useful exposures and to student populations with the necessary free time and financial resources. Two-year or commuter colleges serving worker-students cannot realistically insist on completion of field assignments, and even well-endowed universities cannot take students to more than a handful of the best available field localities. Many instructors have attempted to bring the field into the classroom with the aid of technology. So-called Virtual Field Trips (VFTs) cannot replace the real experience for those who experience it, but they are much better than nothing at all. We have been working to create transformative improvements in VFTs using four concepts: (i) self-drive virtual vehicles that students use to navigate the virtual globe under their own control; (ii) GigaPan outcrops that reveal successively more detailed views of key locations; (iii) virtual specimens scanned from real rocks, minerals, and fossils; and (iv) embedded assessment via logging of student actions. Students are represented by avatars of their own choosing and travel either together in a virtual field vehicle, or separately. When they approach virtual outcrops, virtual specimens become collectable and can be examined using Javascript controls that change magnification and orientation. These instructional resources are being made available via a new server under the domain name www.DigitalPlanet.org. The server will log student progress and provide immediate feedback. We aim to disseminate these resources widely and welcome feedback from instructors and students.
GibbsCluster: unsupervised clustering and alignment of peptide sequences.
Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten
2017-07-03
Receptor interactions with short linear peptide fragments (ligands) are at the base of many biological signaling processes. Conserved and information-rich amino acid patterns, commonly called sequence motifs, shape and regulate these interactions. Because of the properties of a receptor-ligand system or of the assay used to interrogate it, experimental data often contain multiple sequence motifs. GibbsCluster is a powerful tool for unsupervised motif discovery because it can simultaneously cluster and align peptide data. The GibbsCluster 2.0 presented here is an improved version incorporating insertion and deletions accounting for variations in motif length in the peptide input. In basic terms, the program takes as input a set of peptide sequences and clusters them into meaningful groups. It returns the optimal number of clusters it identified, together with the sequence alignment and sequence motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
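A highly simplified sketch of the Gibbs-sampling idea underlying GibbsCluster: repeatedly re-sample each peptide's alignment offset from scores against a profile built from the remaining peptides. This toy version handles a single cluster and a fixed motif length, with no insertions, deletions, or trash cluster; the pseudocount scoring is an assumption.

```python
# Toy single-cluster Gibbs alignment of peptides to a fixed-length motif core.
import math, random
from collections import Counter

def gibbs_align(peptides, motif_len=3, iters=200, seed=1):
    rng = random.Random(seed)
    offsets = [rng.randrange(len(p) - motif_len + 1) for p in peptides]
    for _ in range(iters):
        for i, pep in enumerate(peptides):
            # build a per-column residue profile from all peptides except i
            cols = [Counter() for _ in range(motif_len)]
            for j, q in enumerate(peptides):
                if j != i:
                    for k in range(motif_len):
                        cols[k][q[offsets[j] + k]] += 1
            # score every possible offset of peptide i against the profile
            weights = []
            for o in range(len(pep) - motif_len + 1):
                s = sum(math.log(cols[k][pep[o + k]] + 0.5)   # 0.5 = pseudocount
                        for k in range(motif_len))
                weights.append(math.exp(s))
            # sample a new offset proportionally to its score
            offsets[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return offsets

peps = ["ILKEPVHGV", "KLHEPVHGA", "QLNEPVHGM"]
offs = gibbs_align(peps)
print([p[o:o + 3] for p, o in zip(peps, offs)])   # aligned 3-mer cores
```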
T-RMSD: a web server for automated fine-grained protein structural classification.
Magis, Cedrik; Di Tommaso, Paolo; Notredame, Cedric
2013-07-01
This article introduces the T-RMSD web server (tree-based on root-mean-square deviation), a service allowing the online computation of structure-based protein classification. It has been developed to address the relation between structural and functional similarity in proteins, and it allows a fine-grained structural clustering of a given protein family or group of structurally related proteins using distance RMSD (dRMSD) variations. These distances are computed between all pairs of equivalent residues, as defined by the ungapped columns within a given multiple sequence alignment. Using these generated distance matrices (one per equivalent position), T-RMSD produces a structural tree with support values for each cluster node, reminiscent of bootstrap values. These values, associated with the tree topology, allow a quantitative estimate of structural distances between proteins or group of proteins defined by the tree topology. The clusters thus defined have been shown to be structurally and functionally informative. The T-RMSD web server is a free website open to all users and available at http://tcoffee.crg.cat/apps/tcoffee/do:trmsd.
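The dRMSD quantity T-RMSD builds on can be computed from the intramolecular distance matrices of two structures over their equivalent residues: the root-mean-square difference of all unique residue-pair distances. A sketch with toy coordinates:

```python
# dRMSD over N equivalent residues, given as (N, 3) coordinate arrays.
import numpy as np

def drmsd(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 3) coordinates of equivalent residues in the two structures."""
    da = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)  # pairwise distances in a
    db = np.linalg.norm(b[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances in b
    iu = np.triu_indices(len(a), k=1)                            # unique residue pairs
    return float(np.sqrt(np.mean((da[iu] - db[iu]) ** 2)))

a = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0]])            # straight 3-residue trace
b = np.array([[0.0, 0, 0], [3.8, 0, 0], [6.9, 1.9, 0]])          # bent third residue
print(round(drmsd(a, b), 3))
```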
Wang, Yi; Coleman-Derr, Devin; Chen, Guoping; Gu, Yong Q
2015-07-01
Genome wide analysis of orthologous clusters is an important component of comparative genomics studies. Identifying the overlap among orthologous clusters can enable us to elucidate the function and evolution of proteins across multiple species. Here, we report a web platform named OrthoVenn that is useful for genome wide comparisons and visualization of orthologous clusters. OrthoVenn provides coverage of vertebrates, metazoa, protists, fungi, plants and bacteria for the comparison of orthologous clusters and also supports uploading of customized protein sequences from user-defined species. An interactive Venn diagram, summary counts, and functional summaries of the disjunction and intersection of clusters shared between species are displayed as part of the OrthoVenn result. OrthoVenn also includes in-depth views of the clusters using various sequence analysis tools. Furthermore, OrthoVenn identifies orthologous clusters of single copy genes and allows for a customized search of clusters of specific genes through key words or BLAST. OrthoVenn is an efficient and user-friendly web server freely accessible at http://probes.pw.usda.gov/OrthoVenn or http://aegilops.wheat.ucdavis.edu/OrthoVenn. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
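The Venn logic at the heart of OrthoVenn reduces to set operations over cluster memberships: the intersection across all species gives the shared core, and per-species differences give the unique clusters. A sketch with invented cluster IDs:

```python
# Venn-style comparison of orthologous cluster memberships across species.
species = {
    "wheat": {"OG1", "OG2", "OG3", "OG5"},
    "rice":  {"OG1", "OG2", "OG4"},
    "maize": {"OG1", "OG3", "OG4"},
}

core = set.intersection(*species.values())        # clusters shared by all species
print("core:", core)
for name, clusters in species.items():
    others = set.union(*(c for n, c in species.items() if n != name))
    print(f"unique to {name}:", clusters - others)
```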
Ku, Hao-Hsiang
2015-01-01
Nowadays, people can easily use a smartphone to get desired information and request services. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user is equipped with a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user on erroneous actions. The Cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to deliver QoE services through QoE-based Ontology strategies, rules and events for this user. Furthermore, GoSIDE will automatically trigger OCC4QoE and deliver popular rules for a new user. Experimental results illustrate that GoSIDE can provide appropriate detection for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidelberg, S T; Fitzgerald, K J; Richmond, G H
2006-01-24
There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the clients (OSSes) and Meta-data Servers (MDS) were all directly connected to the cluster's internal high speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency ''portals router'' code by CFS (the company that develops Lustre) to enable us to move the Lustre servers to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this ''Lustre cluster'' is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as ''gateway'' or ''portal router'' nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more ''centralized'' Lustre filesystems, and then arranging to have several ''client'' clusters mount these centralized filesystems. The ''client clusters'' can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.
Universal fingerprinting chip server.
Casique-Almazán, Janet; Larios-Serrato, Violeta; Olguín-Ruíz, Gabriela Edith; Sánchez-Vallejo, Carlos Javier; Maldonado-Rodríguez, Rogelio; Méndez-Tenorio, Alfonso
2012-01-01
The Virtual Hybridization approach predicts the most probable hybridization sites across a target nucleic acid of known sequence, including both perfect and mismatched pairings. Potential hybridization sites, having a user-defined minimum number of bases that are paired with the oligonucleotide probe, are first identified. Then free energy values are evaluated for each potential hybridization site, and if it has a calculated free energy of equal or higher negative value than a user-defined free energy cut-off value, it is considered as a site of high probability of hybridization. The Universal Fingerprinting Chip Applications Server contains the software for visualizing predicted hybridization patterns, which yields a simulated hybridization fingerprint that can be compared with experimentally derived fingerprints or with a virtual fingerprint arising from a different sample. The database is available for free at http://bioinformatica.homelinux.org/UFCVH/
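A much-simplified sketch of the Virtual Hybridization procedure described above: enumerate candidate sites with at least a user-defined minimum number of paired bases, then keep those whose estimated free energy clears the user-defined cutoff. The per-base energy values here are toy placeholders, not the server's parameter set.

```python
# Toy virtual hybridization: match a probe against every target window,
# filter by minimum matched bases, then by a free-energy cutoff.
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}
PAIR_DG = {True: -1.5, False: +0.5}   # kcal/mol per matched/mismatched base (toy values)

def hybridization_sites(target, probe, min_matches, dg_cutoff):
    sites = []
    for i in range(len(target) - len(probe) + 1):
        window = target[i:i + len(probe)]
        matches = [COMP[p] == t for p, t in zip(probe, window)]
        if sum(matches) < min_matches:
            continue
        dg = sum(PAIR_DG[m] for m in matches)
        if dg <= dg_cutoff:           # more negative = more stable duplex
            sites.append((i, dg))
    return sites

print(hybridization_sites("ACGTACGTTTACGA", "TGCAT", min_matches=4, dg_cutoff=-4.0))
```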
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any computer available can be added to the cluster, regardless of platform, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
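The task-distribution pattern MOLA automates can be sketched with a process pool farming out independent AutoDock Vina runs and ranking ligands by the reported binding energy. The file names, config file, and output-parsing details below are assumptions, not MOLA's actual implementation.

```python
# Sketch: distribute independent Vina docking runs over worker processes.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def dock(ligand: str) -> tuple[str, float]:
    out = f"{ligand}_out.pdbqt"
    subprocess.run(["vina", "--receptor", "receptor.pdbqt", "--ligand", ligand,
                    "--config", "box.conf", "--out", out], check=True)
    # the first "REMARK VINA RESULT" line of the output holds the best energy
    with open(out) as fh:
        for line in fh:
            if line.startswith("REMARK VINA RESULT"):
                return ligand, float(line.split()[3])
    return ligand, 0.0

if __name__ == "__main__":
    ligands = ["lig001.pdbqt", "lig002.pdbqt", "lig003.pdbqt"]
    with ProcessPoolExecutor(max_workers=10) as pool:   # one worker per node/core
        ranked = sorted(pool.map(dock, ligands), key=lambda r: r[1])
    for name, energy in ranked:
        print(f"{name}: {energy:.1f} kcal/mol")
```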
P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)
Pillardy, J.
2007-01-01
One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.
Develop, Build, and Test a Virtual Lab to Support a Vulnerability Training System
2004-09-01
[Report fragments; only partial text was recovered.] Dell PowerEdge 1650 dual-processor blade servers were configured as host machines running VMware and VNC on a Linux RedHat 9 kernel, with an Apache-Tomcat web server configured as the external interface. References include Dell system documentation (docs.us.dell.com, 20 August 2004) and a HOWTO on installing web services with Linux/Tomcat/Apache/Struts.
High-Fidelity Modeling of Computer Network Worms
2004-06-22
[Report fragments; only partial text was recovered.] The study plots the propagation of a TCP-based worm, in an execution among the largest TCP worm models simulated to date at packet level, and compares TCP- versus UDP-based worms. A proxy maps virtual IP addresses to honeyd's MAC address in its ARP table and listens for packets from both sides. The experimental setup used two laptops (a Pentium-4 ThinkPad and an IBM Pentium-III ThinkPad) running the proxy server and honeyd, respectively, with the Code Red II worm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
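A sketch of the three-server interaction the Methods section describes, with the web backend relaying a job to the computation server over HTTP; the host names and the /segment endpoint are hypothetical.

```python
# Sketch of the web-backend relay between image server and computation server.
import requests

IMAGE_SERVER = "http://image-server:8042"      # PACS-style store (hypothetical host)
COMPUTE_SERVER = "http://compute-server:9000"  # segmentation/dose engine (hypothetical)

def segment_structure(patient_id: str, series_id: str) -> dict:
    # fetch series metadata from the image server ...
    meta = requests.get(f"{IMAGE_SERVER}/series/{series_id}").json()
    # ... then hand the job to the computation server and wait for the result
    job = {"patient": patient_id, "series": series_id, "meta": meta}
    return requests.post(f"{COMPUTE_SERVER}/segment", json=job).json()

print(segment_structure("PAT001", "1.2.840.113619.2"))
```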
NPSNET: Aural cues for virtual world immersion
NASA Astrophysics Data System (ADS)
Dahl, Leif A.
1992-09-01
NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-MIDI (Musical Instrument Digital Interface) converter. The MIDI converter in turn relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.
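The message flow this abstract describes, sketched end to end: a networked sound packet carries a sound identifier, and the sound-server side maps it to a MIDI note-on for the sampler. The packet layout, port number, and note mapping are all assumptions for illustration.

```python
# Sketch of the aural-cue message flow: broadcast a sound packet, then map
# the sound id to a MIDI note-on byte sequence on the sound-server side.
import socket
import struct

SOUND_PORT = 3737                       # hypothetical sound-message port
PACKET = struct.Struct("!HB")           # entity id (uint16), sound id (uint8)

def broadcast_sound(entity_id: int, sound_id: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(PACKET.pack(entity_id, sound_id), ("255.255.255.255", SOUND_PORT))

def to_midi_note_on(sound_id: int) -> bytes:
    # sound-server side: map the sound id to a sampler key and emit MIDI
    return bytes([0x90, 36 + sound_id, 0x7F])   # note-on, mapped key, full velocity

broadcast_sound(entity_id=7, sound_id=3)        # e.g. a vehicle firing cue
print(to_midi_note_on(3).hex())                 # bytes relayed over RS-422 to the sampler
```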
OpenVirtualToxLab--a platform for generating and exchanging in silico toxicity data.
Vedani, Angelo; Dobler, Max; Hu, Zhenquan; Smieško, Martin
2015-01-22
The VirtualToxLab is an in silico technology for estimating the toxic potential--endocrine and metabolic disruption, some aspects of carcinogenicity and cardiotoxicity--of drugs, chemicals and natural products. The technology is based on an automated protocol that simulates and quantifies the binding of small molecules towards a series of currently 16 proteins, known or suspected to trigger adverse effects: 10 nuclear receptors (androgen, estrogen α, estrogen β, glucocorticoid, liver X, mineralocorticoid, peroxisome proliferator-activated receptor γ, progesterone, thyroid α, thyroid β), four members of the cytochrome P450 enzyme family (1A2, 2C9, 2D6, 3A4), a cytosolic transcription factor (aryl hydrocarbon receptor) and a potassium ion channel (hERG). The toxic potential of a compound--its ability to trigger adverse effects--is derived from its computed binding affinities toward these very proteins: the computationally demanding simulations are executed in client-server model on a Linux cluster of the University of Basel. The graphical-user interface supports all computer platforms, allows building and uploading molecular structures, inspecting and downloading the results and, most important, rationalizing any prediction at the atomic level by interactively analyzing the binding mode of a compound with its target protein(s) in real-time 3D. Access to the VirtualToxLab is available free of charge for universities, governmental agencies, regulatory bodies and non-profit organizations. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Virtualization in education: Information Security lab in your hands
NASA Astrophysics Data System (ADS)
Karlov, A. A.
2016-09-01
The growing demand for qualified specialists in advanced information technologies poses serious challenges to the education and training of young personnel for science, industry and social problems. Virtualization, as a way to isolate the user from the physical characteristics of computing resources (processors, servers, operating systems, networks, applications, etc.), has an enormous influence in the field of education in particular, increasing its efficiency, reducing its cost, and making it more widely and readily available. The study of Information Security of computer systems is considered as an example of the use of virtualization in education.
Development of a high-performance image server using ATM technology
NASA Astrophysics Data System (ADS)
Do Van, Minh; Humphrey, Louis M.; Ravin, Carl E.
1996-05-01
The ability to display digital radiographs to a radiologist in a reasonable time has long been the goal of many PACS. Intelligent routing, or pre-fetching of images, has become one solution, whereby the system uses a set of rules to route images to a pre-determined destination; images are then stored locally on a workstation for faster display times. Some PACS use a large, centralized storage approach in which workstations retrieve images over high-bandwidth connections. Another approach to image management is to provide a high-performance, clustered storage system. This has the advantage of eliminating the complexity of pre-fetching and allows for rapid image display from anywhere within the hospital. We discuss the development of such a storage device, which provides extremely fast access to images across a local area network. Among the requirements for development of the image server were high performance, DICOM 3.0 compliance, and the use of industry-standard components. The completed image server provides performance more than sufficient for use in clinical practice. Setting up modalities to send images to the image server is simple due to adherence to the DICOM 3.0 specification. Using only off-the-shelf components keeps the cost of the server relatively low and allows for easy upgrades as technology advances. These factors make the image server ideal for use as a clustered storage system in a radiology department.
The design and implementation of web mining in web sites security
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li
2003-06-01
Backdoors or information leaks in Web servers can be detected by applying Web Mining techniques to abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. First, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Second, those patterns are provided to system administrators so they can modify their code and enhance their Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to discover information that a firewall and an Intrusion Detection System cannot find. Another is to propose an operational module for the web site to enhance its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers
NASA Astrophysics Data System (ADS)
Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.
2014-12-01
A few popular desktop tools have been used in the Earth Science community to validate science data. Because of the limited capacity of desktop hardware, such as disk space and CPUs, this software cannot display large amounts of data from files. This poster describes an in-house web-based tool built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure that can incorporate new products and new requirements quickly; (3) the user interface, including how users can manipulate the data and control how the images are displayed.
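The throughput claim rests on a simple fan-out pattern: split the list of images across workers and render in parallel. A minimal single-machine sketch is below; the real tool spreads the same pattern across several Linux servers, and the file names and plotting routine here are assumptions:

```python
# Sketch of the fan-out pattern behind the CERES QC tool: many independent
# plotting jobs distributed across workers so hundreds of images come back
# quickly. Output names and the rendering routine are illustrative.
from multiprocessing import Pool

import matplotlib
matplotlib.use("Agg")  # headless rendering on cluster nodes
import matplotlib.pyplot as plt
import numpy as np

def render_granule(index):
    """Render one QC image and return its output path."""
    data = np.random.rand(180, 360)  # stand-in for one granule's data field
    fig, ax = plt.subplots()
    ax.imshow(data, origin="lower")
    ax.set_title(f"granule {index}")
    path = f"qc_{index:04d}.png"
    fig.savefig(path, dpi=100)
    plt.close(fig)
    return path

if __name__ == "__main__":
    with Pool(processes=8) as pool:  # conceptually, one pool per Linux server
        paths = pool.map(render_granule, range(100))
    print(f"rendered {len(paths)} images")
```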
Three-Dimensional Audio Client Library
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2005-01-01
The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.
New developments in digital pathology: from telepathology to virtual pathology laboratory.
Kayser, Klaus; Kayser, Gian; Radziszowski, Dominik; Oehmann, Alexander
2004-01-01
To analyse the present status and future development of computerized diagnostic pathology in terms of workflow-integrated telepathology and the virtual laboratory. Telepathology has left its childhood. The technical development of telepathology is mature, in contrast to that of virtual pathology. Two kinds of virtual pathology laboratories are emerging: a) those with distributed pathologists and distributed (>=1) laboratories associated with individual biopsy stations/surgical theatres, and b) distributed pathologists working in a centralized laboratory. Both are under technical development. Telepathology can be used for e-learning and e-training in pathology, as exemplarily demonstrated by Digital Lung Pathology (www.pathology-online.org). A virtual pathology institution (mode a) accepts a complete case with the patient's history, clinical findings, and (pre-selected) images for first diagnosis. The diagnostic responsibility is that of a conventional institution. The internet serves as the platform for information transfer, and an open server such as iPATH (http://telepath.patho.unibas.ch) for coordination and performance of the diagnostic procedure. The size of images has to be limited, and the usual different magnifications have to be used. A group of pathologists is "on duty", or selects one member for a predefined duty period. The diagnostic statement of the pathologist(s) on duty is retransmitted to the sender with full responsibility. First experiences of a virtual pathology institution group (Dr. L. Banach, Dr. G. Haroske, Dr. I. Hurwitz, Dr. K. Kayser, Dr. K.D. Kunze, Dr. M. Oberholzer) working with the iPATH server and a small hospital in the Solomon Islands are promising. A centralized virtual pathology institution (mode b) depends upon the digitalisation of complete slides and the transfer of large images to different pathologists working in one institution. The technical performance of complete slide digitalisation is still under development and does not yet completely fulfil the requirements of a conventional pathology institution. VIRTUAL PATHOLOGY AND E-LEARNING: At present, e-learning systems are "stand-alone" solutions distributed on CD or via the internet. A characteristic example is the Digital Lung Pathology CD (www.pathology-online.org), which includes about 60 different rare and common lung diseases and internet access to scientific library systems (PubMed), distant measurement servers (EuroQuant), and electronic journals (Elec J Pathol Histol). A new and complete database based upon this CD will combine e-learning and e-teaching with the actual workflow in a virtual pathology institution (mode a). The technological problems are solved and do not depend upon technical constraints such as slide scanning systems. Telepathology serves as a promoter for a new landscape in diagnostic pathology, the so-called virtual pathology institution. Industrial and scientific efforts will probably allow an implementation of this technique within the next two years.
A VM-shared desktop virtualization system based on OpenStack
NASA Astrophysics Data System (ADS)
Liu, Xi; Zhu, Mingfa; Xiao, Limin; Jiang, Yuanjie
2018-04-01
With the increasing popularity of cloud computing, desktop virtualization has risen in recent years as a branch of virtualization technology. However, existing desktop virtualization systems are mostly designed in a one-to-one mode, in which one VM can be accessed by only one user. Meanwhile, previous desktop virtualization systems perform weakly in terms of response time and cost saving. This paper proposes a novel VM-shared desktop virtualization system based on the OpenStack platform. We modified the connection process and the display data transmission process of the remote display protocol SPICE to support the VM-shared function. We also propose a server-push display mode to improve the user's interactive experience. The experimental results show that our system performs well in response time and achieves low CPU consumption.
Using Cluster Analysis for Data Mining in Educational Technology Research
ERIC Educational Resources Information Center
Antonenko, Pavlo D.; Toy, Serkan; Niederhauser, Dale S.
2012-01-01
Cluster analysis is a group of statistical methods that has great potential for analyzing the vast amounts of web server-log data to understand student learning from hyperlinked information resources. In this methodological paper we provide an introduction to cluster analysis for educational technology researchers and illustrate its use through…
USDA-ARS?s Scientific Manuscript database
Genome wide analysis of orthologous clusters is an important component of comparative genomics studies. Identifying the overlap among orthologous clusters can enable us to elucidate the function and evolution of proteins across multiple species. Here, we report a web platform named OrthoVenn that i...
NASA Astrophysics Data System (ADS)
Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.
2016-12-01
Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
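The core of several of these tools (point aggregation, density, space-time cubes) is binning observations by location and time. A toy sketch of that aggregation step, independent of ArcGIS Server; the field names, sample records, and bin sizes are assumptions:

```python
# Toy space-time aggregation: count observations per (lat, lon, hour) bin,
# the primitive behind point-aggregation and space-time-cube tools.
from collections import Counter
from datetime import datetime

observations = [
    # (latitude, longitude, ISO timestamp) -- illustrative records
    (36.1, -75.7, "2016-10-01T03:12:00"),
    (36.2, -75.6, "2016-10-01T03:47:00"),
    (36.9, -76.2, "2016-10-01T09:05:00"),
]

def bin_key(lat, lon, ts, cell=0.5):
    """Snap a point to a 0.5-degree cell and an hourly time slice."""
    t = datetime.fromisoformat(ts)
    return (round(lat / cell) * cell,
            round(lon / cell) * cell,
            t.replace(minute=0, second=0))

cube = Counter(bin_key(lat, lon, ts) for lat, lon, ts in observations)
for (lat, lon, hour), count in sorted(cube.items()):
    print(f"cell ({lat:+.1f},{lon:+.1f}) @ {hour}: {count} points")
```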
NASA Cloud-Based Climate Data Services
NASA Astrophysics Data System (ADS)
McInerney, M. A.; Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, W. D., III; Thompson, J. H.; Gill, R.; Jasen, J. E.; Samowich, B.; Pobre, Z.; Salmon, E. M.; Rumney, G.; Schardt, T. D.
2012-12-01
Cloud-based scientific data services are becoming an important part of NASA's mission. Our technological response is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service (VaaS). A virtual climate data server (vCDS) is an Open Archive Information System (OAIS) compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have deployed vCDS Version 1.0 in the Amazon EC2 cloud using S3 object storage and are using the system to deliver a subset of NASA's Intergovernmental Panel on Climate Change (IPCC) data products to the latest CentOS federated version of the Earth System Grid Federation (ESGF), which is also running in the Amazon cloud. vCDS-managed objects are exposed to ESGF through FUSE (Filesystem in Userspace), which presents a POSIX-compliant filesystem abstraction to applications such as the ESGF server that require such an interface. A vCDS manages data as a distinguished collection for a person, project, lab, or other logical unit. A vCDS can manage a collection across multiple storage resources using rules and microservices to enforce collection policies. And a vCDS can federate with other vCDSs to manage multiple collections over multiple resources, thereby creating what can be thought of as an ecosystem of managed collections. With the vCDS approach, we are trying to enable the full information lifecycle management of scientific data collections and make tractable the task of providing diverse climate data services. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. [Figure: (A) vCDS/ESG system stack. (B) Conceptual architecture for NASA cloud-based data services.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
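As a rough illustration of the "gather parameters from the environment, then call external programs" design, the sketch below reads SLURM variables and spawns one QEMU process per virtual node. The flag set, image path, and environment-variable fallbacks are assumptions, not Charliecloud's actual invocation:

```python
# Sketch: derive virtual-cluster parameters from SLURM environment variables
# and launch one QEMU VM per node by calling an external program, in the
# spirit of Charliecloud's scripts. Flags and image path are illustrative.
import os
import subprocess

n_vms = int(os.environ.get("SLURM_NTASKS", "2"))       # VMs in the cluster
mem_mb = int(os.environ.get("SLURM_MEM_PER_NODE", "1024"))
image = os.environ.get("VM_IMAGE", "node.qcow2")       # hypothetical image

procs = []
for i in range(n_vms):
    cmd = [
        "qemu-system-x86_64",
        "-m", str(mem_mb),          # RAM per VM
        "-smp", "1",                # one virtual CPU per node
        "-hda", image,              # base disk image (illustrative)
        "-display", "none",         # headless operation
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:                     # wait for the virtual cluster to exit
    p.wait()
```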
Jin, Wenquan; Kim, DoHyeun
2018-05-26
The Internet of Things comprises heterogeneous devices, applications, and platforms that use multiple communication technologies to connect to the Internet and provide seamless services ubiquitously. To support the development of Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed, so providing a consistent interface to access services across these environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources and bridges heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between the different communication environments through the virtual resources: the server faces the message sender and the client faces the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers on the Internet, the client can access them using the consistent communication protocol of the Open Connectivity Foundation network. To discover the resources, the client likewise uses the consistent discovery interface to find Open Connectivity Foundation devices and virtual resources.
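Stripped to its essence, the bridging idea is: accept a request on the local network side, forward it to an Internet web service, and relay the response back. A minimal HTTP-to-HTTP stand-in for a virtual resource follows; the real proxy speaks the OCF protocol on one side, and the upstream URL here is an assumption:

```python
# Minimal request-forwarding proxy: the virtual resource accepts a request
# on the local network side and relays it to an Internet web service.
# The upstream URL is an illustrative assumption.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://example.org/service"  # hypothetical web service endpoint

class VirtualResource(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the request path to the upstream web service.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), VirtualResource).serve_forever()
```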
A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.
Gui, Jinsong; Zhou, Kai; Xiong, Naixue
2016-09-25
Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, which makes it an effective topology control approach. Existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of the cluster-head function and the optimization of the cluster-head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.
Research on virtual network load balancing based on OpenFlow
NASA Astrophysics Data System (ADS)
Peng, Rong; Ding, Lei
2017-08-01
Networks based on OpenFlow technology separate the control module from the data forwarding module. Deploying a global load balancing strategy through the control plane's view of the network is fast and highly efficient. This paper proposes a Weighted Round-Robin Scheduling algorithm for virtual networks and a load balancing scheme for server load based on OpenFlow. Both the load of the service nodes and the distribution algorithm for load balancing tasks are taken into account.
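The abstract does not spell out the variant used, but a common form of weighted round-robin is the "smooth" scheme (used, for instance, by nginx): each server accumulates its weight every round and the largest accumulator wins. A sketch under that assumption:

```python
# Smooth weighted round-robin, a common WRR variant (an assumption here,
# since the paper's exact algorithm is not given in the abstract). Servers
# with higher weights are selected proportionally more often, without bursts.
def smooth_wrr(servers, rounds):
    """servers: dict name -> weight; yields one server name per round."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    for _ in range(rounds):
        for name, weight in servers.items():
            current[name] += weight
        pick = max(current, key=current.get)  # highest accumulated weight
        current[pick] -= total                # penalize the chosen server
        yield pick

# Example: weights 5/1/1 yield the spread-out sequence a, a, b, a, c, a, a
print(list(smooth_wrr({"a": 5, "b": 1, "c": 1}, 7)))
```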
Virtual Network Configuration Management System for Data Center Operations and Management
NASA Astrophysics Data System (ADS)
Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken
Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload of operators, who have to manage the structure of the virtual networks in the data center. We provide a virtual-network management system that automates the integration of the configurations of virtual networks. The proposed system collects configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing the time to acquire configurations from devices and to correct inconsistencies in the operators' configuration management database by about 40 percent. They also show that the proposed system has excellent scalability: it takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
WordCluster: detecting clusters of DNA words and genomic elements
2011-01-01
Background: Many k-mers (or DNA words) and genomic elements are known to be spatially clustered in the genome. Well established examples are genes, TFBSs, CpG dinucleotides, microRNA genes and ultra-conserved non-coding regions. Currently, no algorithm exists to find these clusters in a statistically comprehensible way; the detection of clustering often relies on densities, sliding-window approaches or arbitrarily chosen distance thresholds. Results: We introduce here an algorithm to detect clusters of DNA words (k-mers), or any other genomic element, based on the distance between consecutive copies and an assigned statistical significance. We implemented the method in a web server connected to a MySQL backend, which also determines co-localization with gene annotations. We demonstrate the usefulness of this approach by detecting clusters of CAG/CTG (cytosine contexts that can be methylated in undifferentiated cells), showing that the degree of methylation varies drastically between the inside and the outside of the clusters. As another example, we used WordCluster to search for statistically significant clusters of olfactory receptor (OR) genes in the human genome. Conclusions: WordCluster appears to predict biologically meaningful clusters of DNA words (k-mers) and genomic entities. The implementation of the method as a web server is available at http://bioinfo2.ugr.es/wordCluster/wordCluster.php, including additional features such as the detection of co-localization with gene regions and an annotation enrichment tool for functional analysis of overlapped genes. PMID:21261981
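The underlying detection step is simple to state: collect the genomic positions of a word and group consecutive copies whose spacing falls below a cutoff. WordCluster assigns statistical significance to such groups; the fixed cutoff in this sketch is a simplification:

```python
# Sketch of distance-based word clustering: find all copies of a k-mer and
# merge consecutive occurrences closer than max_gap. WordCluster derives its
# cutoff from a significance model; the fixed max_gap here is a simplification.
def find_positions(genome, word):
    """All start positions of `word` in `genome`, including overlaps."""
    pos, start = [], genome.find(word)
    while start != -1:
        pos.append(start)
        start = genome.find(word, start + 1)
    return pos

def cluster_positions(positions, max_gap):
    clusters, current = [], []
    for p in positions:
        if current and p - current[-1] > max_gap:
            clusters.append(current)   # gap too large: close the cluster
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

genome = "CAGCAGTTTTTTTTTTTTTTTTTTTTCAGCAGCAG"
for c in cluster_positions(find_positions(genome, "CAG"), max_gap=6):
    print(f"cluster of {len(c)} copies at {c[0]}..{c[-1]}")
```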
Mining the SDSS SkyServer SQL queries log
NASA Astrophysics Data System (ADS)
Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani
2016-05-01
SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomical catalog, provides a set of tools that allow data access for astronomers and scientific education. One of SkyServer's data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents template queries that can be used as a basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of this data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc., and to shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements of the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned; namely, that the general text mining approach for feature extraction and clustering does not seem to be adequate for this type of data and, most importantly, that this type of analysis can result in very different queries being clustered together.
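The "traditional techniques" referred to are typically bag-of-words feature extraction followed by clustering. A compact sketch of that pipeline with scikit-learn, using invented sample queries; the paper's point is precisely that this generic pipeline fits SQL poorly:

```python
# Generic text-mining pipeline applied to SQL query logs: TF-IDF features,
# then k-means. The toy queries are invented; the paper's finding is that
# this generic approach tends to cluster very different queries together.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

queries = [
    "SELECT ra, dec FROM PhotoObj WHERE r < 18",
    "SELECT ra, dec FROM PhotoObj WHERE g < 20",
    "SELECT TOP 10 objID FROM SpecObj WHERE z > 2",
    "SELECT TOP 50 objID FROM SpecObj WHERE z > 3",
]

features = TfidfVectorizer(token_pattern=r"[A-Za-z_]+").fit_transform(queries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for query, label in zip(queries, labels):
    print(label, query)
```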
Efficient operating system level virtualization techniques for cloud resources
NASA Astrophysics Data System (ADS)
Ansu, R.; Samiksha; Anju, S.; Singh, K. John
2017-11-01
Cloud computing is an advancing technology which provides the services of Infrastructure, Platform and Software. Virtualization and computing utility are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. The technique by which resources, namely storage, processing power, memory and network or I/O, are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole hardware architecture is duplicated virtually, and no modifications are required in the Guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, the OS must be modified to run in parallel with other OSs, and for the Guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.
QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.
Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo
2016-01-01
At present, realizing or improving the quality of experience (QoE) is a major goal for network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. This paper therefore proposes a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm, concentrating on service score calculation at the server side. The server side collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback information. It then manages the historical data in a database through a "big data" processing mode and predicts user scores according to heuristic rules. On this basis, it completes the fuzzy clustering analysis and generates the service QoE score and a management message, which are finally fed back to clients. The paper also discusses service evaluation generation rules, heuristic evaluation rules and fuzzy clustering analysis methods, and presents the service-based QoE evaluation process. Simulation experiments have verified the effectiveness of the QoE collaborative evaluation method based on fuzzy clustering heuristic rules.
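Fuzzy clustering differs from hard clustering in that each sample carries a degree of membership in every cluster rather than a single label. A minimal fuzzy c-means implementation of the standard algorithm follows; the QoS feature vectors are invented, and the paper layers its heuristic scoring rules on top of a step like this:

```python
# Minimal fuzzy c-means: each sample gets a degree of membership in every
# cluster. Feature vectors (delay, loss) are invented for illustration.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                        # memberships sum to 1 per sample
    for _ in range(iters):
        w = u ** m
        centers = (w @ X) / w.sum(axis=1, keepdims=True)
        # Distance of every sample to every center, shape (c, n_samples)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))        # closer centers get more weight
        u /= u.sum(axis=0)
    return centers, u

# Toy QoS samples: (delay_ms, loss_pct) for "good" and "poor" sessions
X = np.array([[20, 0.1], [25, 0.2], [180, 2.5], [200, 3.0]], dtype=float)
centers, memberships = fuzzy_c_means(X, c=2)
print(np.round(centers, 1))
print(np.round(memberships, 2))
```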
Rajan, J Pandia; Rajan, S Edward
2018-01-01
Designing a wireless physiological signal monitoring system with secured data communication for the health care system is an important and dynamic process. We propose a signal monitoring system using NI myRIO connected with a wireless body sensor network through a multi-channel signal acquisition method. After server-side validation of the signal, the data held on the local server are updated to the cloud. The Internet of Things (IoT) architecture is used to give healthcare service providers mobility and fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system using ubiquitous healthcare services through a virtual Internet of Things. We show improved access and real-time dynamic monitoring of physiological signals in this remote monitoring system using the virtual Internet of Things approach. The remote monitoring and access system is evaluated against conventional approaches. The proposed system is envisioned for the modern smart health care system, offering high utility and user-friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared to other wireless communication methods in clinical systems.
Cross-standard user description in mobile, medical oriented virtual collaborative environments
NASA Astrophysics Data System (ADS)
Ganji, Rama Rao; Mitrea, Mihai; Joveski, Bojan; Chammem, Afef
2015-03-01
By combining four different open standards belonging to the ISO/IEC JTC1/SC29 WG11 (a.k.a. MPEG) and W3C, this paper advances an architecture for mobile, medical oriented virtual collaborative environments. The various users are represented according to MPEG-UD (MPEG User Description) while the security issues are dealt with by deploying the WebID principles. On the server side, irrespective of their elementary types (text, image, video, 3D, …), the medical data are aggregated into hierarchical, interactive multimedia scenes which are alternatively represented into MPEG-4 BiFS or HTML5 standards. This way, each type of content can be optimally encoded according to its particular constraints (semantic, medical practice, network conditions, etc.). The mobile device should ensure only the displaying of the content (inside an MPEG player or an HTML5 browser) and the capturing of the user interaction. The overall architecture is implemented and tested under the framework of the MEDUSA European project, in partnership with medical institutions. The testbed considers a server emulated by a PC and heterogeneous user devices (tablets, smartphones, laptops) running under iOS, Android and Windows operating systems. The connection between the users and the server is alternatively ensured by WiFi and 3G/4G networks.
ATM LAN Emulation: Getting from Here to There.
ERIC Educational Resources Information Center
Learn, Larry L., Ed.
1995-01-01
Discusses current LAN (local area network) configuration and explains ATM (asynchronous transfer mode) as the future telecommunications transport. Highlights include LAN emulation, which enables the interconnection of legacy LANs and the new ATM environment; virtual LANs; broadcast servers; and standards. (LRW)
Virtualization in the Operations Environments
NASA Technical Reports Server (NTRS)
Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert
2010-01-01
Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
PREP: Portal for Readiness Exercises & Planning v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noel, Todd; Le, Tam; McNeil, Carrie
2016-10-28
The software includes a web-based template for recording actions taken during emergency preparedness exercises and planning workshops. In addition, a virtual outbreak prevention simulation exercise is also included. Both tools interact with a server which records user decisions and communications.
A General Purpose High Performance Linux Installation Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, Alf
2002-06-17
With more and more and larger and larger Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients, thus, is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
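In a PXE setup of this kind, each node's boot behavior is driven by a small per-node file served over TFTP; pxelinux looks the file up by a name derived from the node's MAC address ("01-" is the Ethernet hardware-type prefix). A sketch that generates such files, where the node inventory, paths, and kickstart location are assumptions:

```python
# Sketch: generate per-node pxelinux configuration files for a kickstart
# install over NFS. pxelinux looks up a config file named
# "01-<mac-with-dashes>" under pxelinux.cfg/. Paths and URLs are assumptions.
from pathlib import Path

NODES = {  # hypothetical node-name -> MAC-address inventory
    "node001": "00:16:3e:12:34:56",
    "node002": "00:16:3e:12:34:57",
}

TEMPLATE = """default ks
label ks
  kernel vmlinuz
  append initrd=initrd.img ks=nfs:installserver:/ks/{node}.cfg
"""

cfg_dir = Path("/tftpboot/pxelinux.cfg")
for node, mac in NODES.items():
    fname = "01-" + mac.replace(":", "-")   # pxelinux MAC-based file name
    (cfg_dir / fname).write_text(TEMPLATE.format(node=node))
    print(f"wrote {fname} for {node}")
```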
An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data
2016-01-01
these technologies. Backend technologies: Java 1.8, mysql-connector-java-5.0.8.jar, Tomcat, VirtualBox, and the Kali MANET virtual machine. Frontend technologies: LAMPP. Database: MySQL Server. The SEDAP database settings and structure are described in the Database section. …contains all the backend Java functionality, including the web services, and should be placed in the webapps directory inside the Tomcat installation
Application of new technologies in the virtual library: Seminars in Turkey, Portugal, and Spain
NASA Technical Reports Server (NTRS)
Hunter, Judy F.; Cotter, Gladys A.
1994-01-01
This paper focuses on the technologies that are available today to support the concept of a virtual library. The concept of a 'virtual library' or a 'library without walls' is meant to convey the idea that information in any format should be available to the end-user from the desktop as if it were located on the local workstation. Discussed here are the background, trends, technology enablers, end-user requirements, and the NASA Access Mechanism (NAM) system, one example of how it is possible to apply existing technologies to the client server architecture to logically centralize geographically distributed applications and information.
Pilot-in-the-Loop CFD Method Development
2014-06-16
CFD analysis. Coupled simulations will be run at PSU on the COCOA -4 cluster, a high performance computing cluster. The CRUNCH CFD software has...been installed on the COCOA -4 servers and initial software tests are being conducted. Initial efforts will use the Generic Frigate Shape SFS-2 to
Intelligent web agents for a 3D virtual community
NASA Astrophysics Data System (ADS)
Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar
2003-08-01
In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based virtual communities, based on distributed artificial intelligence, intelligent agent techniques, and the databases and knowledge bases of a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), Avatars will represent the educators, students, and other visitors to the world. Intelligent agents, represented as specially dressed Avatars, will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community has been implemented successfully.
Pathogenicity in POLG syndromes: DNA polymerase gamma pathogenicity prediction server and database.
Nurminen, Anssi; Farnum, Gregory A; Kaguni, Laurie S
2017-06-01
DNA polymerase gamma (POLG) is the replicative polymerase responsible for maintaining mitochondrial DNA (mtDNA). Disorders related to its functionality are a major cause of mitochondrial disease. The clinical spectrum of POLG syndromes includes Alpers-Huttenlocher syndrome (AHS), childhood myocerebrohepatopathy spectrum (MCHS), myoclonic epilepsy myopathy sensory ataxia (MEMSA), the ataxia neuropathy spectrum (ANS) and progressive external ophthalmoplegia (PEO). We have collected all publicly available POLG-related patient data and analyzed it using our pathogenic clustering model to provide a new research and clinical tool in the form of an online server. The server evaluates the pathogenicity of both previously reported and novel mutations. There are currently 176 unique point mutations reported and found in mitochondrial patients in the gene encoding the catalytic subunit of POLG, POLG. The mutations are distributed nearly uniformly along the length of the primary amino acid sequence of the gene. Our analysis shows that most of the mutations are recessive, and that the reported dominant mutations cluster within the polymerase active site in the tertiary structure of the POLG enzyme. The POLG Pathogenicity Prediction Server (http://polg.bmb.msu.edu) is targeted at clinicians and scientists studying POLG disorders, and aims to provide the most current available information regarding the pathogenicity of POLG mutations.
ICCE/ICCAI 2000 Full & Short Papers (Creative Learning).
ERIC Educational Resources Information Center
2000
This document contains the following full and short papers on creative learning from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Collaborative Learning Support System Based on Virtual Environment Server for Multiple Agents" (Takashi Ohno, Kenji…
Hardware Support for Malware Defense and End-to-End Trust
2017-02-01
IoT) sensors and actuators, mobile devices and servers; cloud based, stand alone, and traditional mainframes. The prototype developed demonstrated...virtual machines. For mobile platforms we developed and prototyped an architecture supporting separation of personalities on the same platform...4 3.1. MOBILE
G12V Kras mutations in cervical cancer under virtual microscope of molecular dynamics simulations.
Chen, X P; Xu, W H; Xu, D F; Fu, S M; Ma, Z C
2016-01-01
Kras mutations are common in cancers, and their role in the progression of cancer is well known and elucidated. The present work searches for the most deleterious of the four mutations found at codons 12 and 13 of Kras in cervical cancers using prediction servers; different servers were used to examine the different factors that govern protein function. The in silico results predicted G12V to be the most devastating; this particular mutation was then subjected to molecular dynamics simulation (MDS) for further analysis. The MDS approach allowed the authors to place the native and mutant structures under a virtual microscope and observe their dynamics over time. The results illuminate the effect of the G12V variation on the dynamics of Kras. The structural variation between the native and mutant Kras over the 50-nanosecond (ns) run differed on every parameter checked, and the results are in excellent agreement with the available experimental data.
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrades of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
LigSearch: a knowledge-based web server to identify likely ligands for a protein target
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene
LigSearch is a web server for identifying ligands likely to bind to and stabilize a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.
DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.
Wang, Lin; Zhang, Min; Alexov, Emil
2016-02-15
A new pKa prediction web server is released, which implements DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via http protocol. The web server takes advantage of MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
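The MPI usage pattern described (a single job spread across up to 24 CPUs) is an embarrassingly parallel split of independent per-site calculations. A generic mpi4py sketch of that pattern, where the energy function and site list are stand-ins rather than DelPhiPKa's actual electrostatics:

```python
# Generic MPI work-splitting pattern, as in a pKa job parallelized across
# CPUs: each rank handles a slice of titratable sites, rank 0 gathers.
# compute_pka is a placeholder, not DelPhiPKa's actual electrostatics code.
from mpi4py import MPI

def compute_pka(site):
    return {"site": site, "pka": 4.0 + 0.1 * site}  # placeholder result

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

sites = list(range(48))                  # hypothetical titratable sites
my_sites = sites[rank::size]             # round-robin slice for this rank
my_results = [compute_pka(s) for s in my_sites]

results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for chunk in results for r in chunk]
    print(f"computed pKa for {len(flat)} sites on {size} ranks")
```

Run with, for example, `mpiexec -n 24 python pka_job.py` to mirror the 24-CPU case mentioned in the abstract.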
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
Evaluating and Implementing Learning Environments: A United Kingdom Experience.
ERIC Educational Resources Information Center
Ingraham, Bruce; Watson, Barbara; McDowell, Liz; Brockett, Adrian; Fitzpatrick, Simon
2002-01-01
Reports on ongoing work at five universities in northeastern England that have been evaluating and implementing online learning environments known as virtual learning environments (VLEs) or managed learning environments (MLEs). Discusses do-it-yourself versus commercial systems; transferability; Web-based versus client-server; integration with…
From telepathology to virtual pathology institution: the new world of digital pathology.
Kayser, K; Kayser, G; Radziszowski, D; Oehmann, A
Telepathology has left its childhood. Its technical development is mature, and its use for primary (frozen section) and secondary (expert consultation) diagnosis has expanded to a great extent. This is in contrast to the virtual pathology laboratory, which is still under technical constraints. Similar to telepathology, which can also be used for e-learning and e-training in pathology, as is exemplarily demonstrated by Digital Lung Pathology (Klaus.Kayser@charite.de), at least two kinds of virtual pathology laboratories will be implemented in the near future: a) those with distributed pathologists and distributed (>= 1) laboratories associated with individual biopsy stations/surgical theatres, and b) distributed pathologists (usually situated in one institution) and a centralized laboratory, which digitizes complete histological slides. Both scenarios are under intensive technical investigation. A virtual pathology institution (mode a) accepts a complete case with the patient's history, clinical findings, and (pre-selected) images for first diagnosis. The diagnostic responsibility is that of a conventional institution. The Internet serves as the platform for information transfer, and an open server such as iPATH (http://telepath.patho.unibas.ch) for coordination and performance of the diagnostic procedure. The size and number of transferred images have to be limited, and the usual different magnifications have to be used. The sender needs experience in image sampling techniques so as to present the most significant information. A group of pathologists is "on duty", or selects one member for a predefined duty period. The diagnostic statement of the pathologist(s) on duty is retransmitted to the sender with full responsibility. The first experiences of a virtual pathology institution group working with the iPATH server and a small hospital in the Solomon Islands are promising. A centralized virtual pathology institution (mode b) depends upon the digitalization of complete slides and the transfer of large images to different pathologists working in one institution. The technical performance of complete slide digitalization is still under development. Virtual pathology can be combined with e-learning and e-training, which will make for a powerful pathology system integrated into daily work. At present, e-learning systems are "stand-alone" solutions distributed on CD or via the Internet. A characteristic example is the Digital Lung Pathology CD, which includes about 60 different rare and common lung diseases with some features of electronic communication. These features include access to scientific library systems (PubMed), distant measurement servers (EuroQuant), automated immunohistochemistry measurements, and electronic journals (Elec J Pathol Histol, www.pathology-online.org). It combines e-learning and e-training with some acoustic support. A new and complete database based upon this CD will combine e-learning and e-teaching with the actual workflow in a virtual pathology institution (mode a). The technological problems are solved and do not depend upon technical constraints such as slide scanning systems. At present, telepathology serves as a promoter for a completely new landscape in diagnostic pathology, the so-called virtual pathology institution. Industrial and scientific efforts will probably allow an implementation of this technique within the next two years, with exciting diagnostic and scientific perspectives.
Secure data aggregation in heterogeneous and disparate networks using stand off server architecture
NASA Astrophysics Data System (ADS)
Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.
2009-04-01
The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime, anyone access for service providers and customers alike. The world is fraught with stifling inequalities, from an economic as well as a socio-political perspective. The net result has been large capability gaps between the various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions take place between organizations. When integrating remote business locations with mainstream operations, issues including lack of application-level support, limited computational capabilities, communication limitations, and legal requirements seriously impede integration that respects the organizations' security requirements. Commonly used techniques such as IPSec, tunneling, and secure sockets layer may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and the remote sites. We present techniques such as break-before-make connections, breaking the connection after transfer, and multiple virtual machine instances with different operating systems, all built on the stand-off-server concept. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.
ICM: a web server for integrated clustering of multi-dimensional biomedical data.
He, Song; He, Haochen; Xu, Wenjian; Huang, Xin; Jiang, Shuai; Li, Fei; He, Fuchu; Bo, Xiaochen
2016-07-08
Large-scale efforts for parallel acquisition of multi-omics profiling continue to generate extensive amounts of multi-dimensional biomedical data. Integrated clustering of multiple types of omics data is therefore essential for developing individual-based treatments and precision medicine. However, while rapid progress has been made, methods for integrated clustering lack an intuitive web interface that supports biomedical researchers without strong programming skills. Here, we present a web tool, named Integrated Clustering of Multi-dimensional biomedical data (ICM), that provides an interface from which to fuse, cluster and visualize multi-dimensional biomedical data and knowledge. With ICM, users can explore the heterogeneity of a disease or a biological process by identifying subgroups of patients. The results obtained can then be interactively modified through an intuitive user interface. Researchers can also share results from ICM with collaborators via a web link containing a Project ID number that directly pulls up the analysis results being shared. ICM also supports incremental clustering, which allows users to add new sample data to the data of a previous study and obtain an updated clustering result. Currently, the ICM web server is available with no login requirement and at no cost at http://biotech.bmi.ac.cn/icm/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Red Hat Enterprise Virtualization - KVM-based infrastructure services at BNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cortijo, D.
2011-06-14
Over the past 18 months, BNL has moved a large percentage of its Linux-based servers and services into a Red Hat Enterprise Virtualization (RHEV) environment. This presentation will address our approach to virtualization, critical decision points, and a discussion of our implementation. Specific topics include an overview of hardware and software requirements, networking, and storage; the decision to adopt the Red Hat solution over competing products (VMware, Xen, etc.); details of RHEV features, both current and on the roadmap; a review of performance and reliability gains since deployment; the path forward for RHEV at BNL; and caveats and potential problems.
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
[Extraction fragment; the full abstract was not recovered. The surviving text discusses the overheads that fault-tolerance mechanisms introduce (decreased performance; increased system size, weight, and cost), their offsetting reliability benefits, and how Oracle Data Guard synchronizes primary and standby databases.]
MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications
2007-05-23
[Extraction fragment; the full abstract was not recovered. The surviving text lists defense acquisition life-cycle phases (Pre-Systems Acquisition; Systems Acquisition; Operations & Support; LRIP/IOT&E; Critical Design Review; FRP Decision Review; FOC) and glossary entries such as CMS2 (Comprehensive Munitions & Sensor Server) and CSAT (C4ISR Static Analysis Tool).]
Improving Business-IT Alignment through Business Architecture
ERIC Educational Resources Information Center
Li, Chingmei
2010-01-01
The business and Information Technology (IT) alignment issue has become one of the Top-10 IT management issues since 1980. IT has continually strived to achieve alignment with business goals and objectives. These IT efforts include ERP implementation to benefit from the best practices; data center consolidation and server virtualization to keep…
Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF
NASA Technical Reports Server (NTRS)
Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.
2001-01-01
The SIRTF Science Center's (SSC) Science Operations System (SOS) must manage nearly one hundred critical operations files through comprehensive file management services. This management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system, accessible via a TFS server, and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and metadata in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command-line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation and, more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines
NASA Astrophysics Data System (ADS)
Ivanovic, Pavle; Richter, Harald
2018-01-01
High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem provided shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about five years ago, when newer versions of Linux and its virtualization library libvirt evolved past it. We restored that SHM-access synchronization feature, because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, an implementation of MPI, the standard HPC communication library. Additionally, we transparently modified MPICH to include ivshmem, resulting in a three- to ten-fold performance improvement compared to TCP/IP. Furthermore, we transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, with our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
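The authors' MPI_PUT wrapper is not reproduced in the abstract; purely as a generic illustration of the single-sided transfer it wraps, the following mpi4py sketch performs an RMA put from rank 0 into a window exposed by rank 1 (buffer size and ranks are arbitrary assumptions).

    # Generic MPI one-sided Put with mpi4py; illustrates the MPI_PUT
    # mechanism discussed above, not the authors' wrapper.
    # Run with: mpiexec -n 2 python put_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n = 1024                              # element count (assumed)
    buf = np.zeros(n, dtype="d")
    win = MPI.Win.Create(buf, comm=comm)  # expose buf as an RMA window

    win.Fence()                           # open an access epoch
    if rank == 0:
        src = np.arange(n, dtype="d")
        win.Put(src, target_rank=1)       # one-sided: rank 1 posts no receive
    win.Fence()                           # close epoch; data visible on rank 1
    if rank == 1:
        assert buf[10] == 10.0
    win.Free()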
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Chin
This Technical Note describes how the Zettar team developed a data transfer cluster design that convincingly demonstrated the feasibility of using high-density servers for high-performance Big Data transfers. It then outlines the tests, operations, and observations that address a potential overheating concern regarding the use of Non-Volatile Memory Host Controller Interface Specification (NVMHCI, aka NVM Express or NVMe) Gen 3 PCIe SSD cards in high-density servers. Finally, it points out the possibility of developing a new generation of high-performance Science DMZ data transfer systems for the data-intensive research community and commercial enterprises.
Methodology to model the energy and greenhouse gas emissions of electronic software distributions.
Williams, Daniel R; Tang, Yinshan
2012-01-17
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the inaccuracy of high-level, top-down modeling efforts, the model focuses on device details and data routes. To compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were also described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included calculating the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. Results also highlighted the importance of the server efficiency and utilization methods.
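As a back-of-the-envelope illustration of this accounting structure, the sketch below sums per-download energy over data-center, network-hop, and end-user stages and converts it to CO2e; every coefficient is a made-up placeholder, not a value from the paper.

    # Illustrative ESD energy/GHG accounting; all coefficients are
    # placeholder assumptions, not the paper's figures.
    GB = 1e9  # bytes

    def esd_kg_co2e(download_bytes, hops,
                    server_j_per_gb=2.0e4,   # data-center stage (assumed)
                    hop_j_per_gb=5.0e3,      # per networking device (assumed)
                    client_j_per_gb=1.0e4,   # browsing/downloading (assumed)
                    kg_co2e_per_kwh=0.5):    # grid carbon intensity (assumed)
        gb = download_bytes / GB
        joules = gb * (server_j_per_gb + hops * hop_j_per_gb + client_j_per_gb)
        kwh = joules / 3.6e6                 # joules per kWh
        return kwh * kg_co2e_per_kwh

    # e.g., a 4 GB title delivered over 12 network hops:
    print(esd_kg_co2e(4 * GB, hops=12))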
AGGRESCAN3D (A3D): server for prediction of aggregation properties of protein structures
Zambrano, Rafael; Jamroz, Michal; Szczasiuk, Agata; Pujols, Jordi; Kmiecik, Sebastian; Ventura, Salvador
2015-01-01
Protein aggregation underlies an increasing number of disorders and constitutes a major bottleneck in the development of therapeutic proteins. Our present understanding of the molecular determinants of protein aggregation has crystallized in a series of predictive algorithms to identify aggregation-prone sites. A majority of these methods rely only on sequence; therefore, they have difficulty predicting the aggregation properties of folded globular proteins, where aggregation-prone sites are often not contiguous in sequence or are buried inside the native structure. The AGGRESCAN3D (A3D) server overcomes these limitations by taking into account the protein structure and the experimental aggregation propensity scale from the well-established AGGRESCAN method. Using the A3D server, the identified aggregation-prone residues can be virtually mutated to design variants with increased solubility, or to test the impact of pathogenic mutations. Additionally, the A3D server can take into account the dynamic fluctuations of protein structure in solution, which may influence aggregation propensity. This is possible in the A3D Dynamic Mode, which exploits the CABS-flex approach for fast simulations of the flexibility of globular proteins. The A3D server can be accessed at http://biocomp.chem.uw.edu.pl/A3D/. PMID:25883144
Empirical cost models for estimating power and energy consumption in database servers
NASA Astrophysics Data System (ADS)
Valdivia Garcia, Harold Dwight
The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns, and relational cardinality. Moreover, our method does not need measurements of individual hardware components, but rather total power and energy consumption measured at a server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
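A minimal sketch of this regression step, using synthetic data (the feature list follows the abstract; the coefficients and data are invented, and scikit-learn is assumed as the fitting tool rather than the authors' implementation):

    # Multiple linear regression from query statistics to server power.
    # Synthetic stand-in data; not the paper's measurements.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # features: selectivity, tuple size (bytes), number of columns, cardinality
    X = rng.uniform([0, 32, 1, 1e3], [1, 512, 40, 1e7], size=(200, 4))
    true_w = np.array([40.0, 0.05, 0.8, 1.5e-5])      # invented ground truth
    y = 120.0 + X @ true_w + rng.normal(0, 2.0, 200)  # measured watts (synthetic)

    model = LinearRegression().fit(X, y)
    print("intercept:", model.intercept_, "coefficients:", model.coef_)
    print("predicted watts:", model.predict([[0.3, 128, 12, 5e5]])[0])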
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
Existing real-time monitoring systems for multiple physiological parameters have problems such as insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, and poor real-time performance, among other issues caused by the growing scale of the data. We therefore proposed a new solution for multiple physiological parameters: clustered background data storage and processing based on cloud computing. In our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The process included resource virtualization at the IaaS layer of the cloud platform, construction of a real-time computing platform at the PaaS layer, reception and analysis of data streams at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of large amounts of physiological information. Simulation tests showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solves the long turnaround times, poor real-time analysis, and lack of extensibility of traditional remote medical services, and provides technical support for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring of multiple physiological parameters.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery, and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test in in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means of identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for a more complete solution that provides the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables across a Grid environment composed of different clusters, with and without virtualization. The uniform computing environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs averaged 41% and 2% of execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer, and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
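A toy version of the enquirer step, reading job states from a stand-in database and writing an XML snapshot for the web server (the schema and field names are assumptions, not the RTM's actual format):

    # Enquirer-style snapshot: relational job records -> XML file.
    import sqlite3
    import xml.etree.ElementTree as ET

    db = sqlite3.connect(":memory:")  # in-memory stand-in for the RTM database
    db.execute("CREATE TABLE jobs (job_id TEXT, state TEXT, vo TEXT, ce_queue TEXT)")
    db.execute("INSERT INTO jobs VALUES ('j1', 'Running', 'cms', 'ce01.example.org/long')")

    root = ET.Element("jobs")
    for job_id, state, vo, ce in db.execute("SELECT * FROM jobs"):
        ET.SubElement(root, "job", id=job_id, state=state, vo=vo, ce=ce)
    ET.ElementTree(root).write("jobs.xml", encoding="utf-8", xml_declaration=True)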
Sequence harmony: detecting functional specificity from alignments
Feenstra, K. Anton; Pirovano, Walter; Krab, Klaas; Heringa, Jaap
2007-01-01
Multiple sequence alignments are often used for the identification of key specificity-determining residues within protein families. We present a web server implementation of the Sequence Harmony (SH) method previously introduced. SH accurately detects subfamily specific positions from a multiple alignment by scoring compositional differences between subfamilies, without imposing conservation. The SH web server allows a quick selection of subtype specific sites from a multiple alignment given a subfamily grouping. In addition, it allows the predicted sites to be directly mapped onto a protein structure and displayed. We demonstrate the use of the SH server using the family of plant mitochondrial alternative oxidases (AOX). In addition, we illustrate the usefulness of combining sequence and structural information by showing that the predicted sites are clustered into a few distinct regions in an AOX homology model. The SH web server can be accessed at www.ibi.vu.nl/programs/seqharmwww. PMID:17584793
Eccher, C; Berloffa, F; Demichelis, F; Larcher, B; Galvagni, M; Sboner, A; Graiff, A; Forti, S
1999-01-01
Introduction: This study describes a tele-consultation system (TCS) developed to provide a computing environment over a Wide Area Network (WAN) in North Italy (Province of Trento) that can be used by two or more physicians to share medical data and to work co-operatively on medical records. A pilot study has been carried out in oncology to assess the effectiveness of the system. The aim of this project is to facilitate the management of oncology patients by improving communication among the specialists of central and district hospitals. Methods and Results: The TCS is an Intranet-based solution. The Intranet is based on a PC WAN with Windows NT Server, Microsoft SQL Server, and Internet Information Server. TCS is composed of native and custom applications developed in the Microsoft Windows (9x and NT) environment. The basic component of the system is the multimedia digital medical record, structured as a collection of HTML and ASP pages. A distributed relational database allows users to store and retrieve medical records, accessed by a dedicated web browser via the web server. The medical data to be stored and the presentation architecture of the clinical record were determined in close collaboration with the clinicians involved in the project. TCS allows multi-point tele-consultation (TC) among two or more participants on remote computers, providing synchronized surfing through the clinical report. A set of collaborative and personal tools (a whiteboard with drawing tools, point-to-point digital audio-conferencing, chat, a local notepad, and an e-mail service) is integrated into the system to provide a user-friendly environment. TCS has been developed as a client-server architecture. The client part of the system is based on the Microsoft Web Browser control and provides the user interface and the tools described above. The server part, running at all times on a dedicated computer, accepts connection requests and manages the connections among the participants in a TC, allowing multiple TCs to run simultaneously. TCS has been developed in the Visual C++ environment using the MFC library and COM technology; ActiveX controls have been written in Visual Basic to perform dedicated tasks from inside the HTML clinical report. Before deployment in the hospital departments involved in the project, TCS was tested in our laboratory by the clinicians involved, to evaluate its usability. Discussion: TCS has the potential to support a "multi-disciplinary distributed virtual oncological meeting". Specialists of different departments and different hospitals can attend "virtual meetings" and interactively discuss medical data. An expected benefit of the "virtual meeting" is the possibility of providing expert remote advice from oncologists to peripheral cancer units in formulating treatment plans, conducting follow-up sessions, and supporting clinical research.
Agentless Cloud-Wide Monitoring of Virtual Disk State
2015-10-01
[Extraction fragment; the full abstract was not recovered. The surviving text notes that the scanned software packages include Apache, MySQL, PHP, Ruby on Rails, Java application servers, and many others, and that for Linux, Apache, MySQL, PHP (LAMP) stacks, many file-level update logs contain the same versions of files repeated across many disks.]
Extensible 3D (X3D) Earth Technical Requirements Workshop Summary Report
2007-08-01
[Extraction fragment; the full summary was not recovered. The surviving points: virtual worlds already model the real world in detail but rarely interconnect with one another; the most interesting part of virtual reality (VR) is reality itself, which implies physics; two Web-Enabled Modeling and Simulation (WebSim) symposia demonstrated that large partnerships can work; and server-side 3D graphics were a topic.]
Decision Support Systems for Launch and Range Operations Using Jess
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar
2007-01-01
The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersion, and human health risk assessment during space-flight operations. An individual dedicated server supports each expert system, and a master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base are critical to real-time or near-real-time operation. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratory, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports XML representation of the knowledge base, with forward- and backward-chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analysis of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.
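Jess rules themselves are written in a CLIPS-like Lisp syntax; purely to illustrate the forward-chaining loop such a rule engine performs, here is a toy version in Python (the weather facts, thresholds, and rules are invented examples, not the test bed's knowledge base):

    # Toy forward-chaining over a working memory of facts.
    facts = {"wind_kts": 32, "toxic_plume": "none"}
    rules = [
        ("wind_violation", lambda f: f.get("wind_kts", 0) > 30),
        ("launch_no_go",   lambda f: f.get("wind_violation", False)),
    ]

    changed = True
    while changed:                    # fire rules until a fixed point
        changed = False
        for name, condition in rules:
            if name not in facts and condition(facts):
                facts[name] = True    # assert derived fact into working memory
                changed = True

    print(facts)  # wind_violation and launch_no_go are both derived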
Ligand.Info small-molecule Meta-Database.
von Grotthuss, Marcin; Koczyk, Grzegorz; Pas, Jakub; Wyrwicz, Lucjan S; Rychlewski, Leszek
2004-12-01
Ligand.Info is a compilation of various publicly available databases of small molecules. The total size of the Meta-Database is over 1 million entries. The compound records contain calculated three-dimensional coordinates and sometimes information about biological activity. Some molecules carry information about FDA drug approval status or anti-HIV activity. The Meta-Database can be downloaded from the http://Ligand.Info web page. The database can also be screened using a Java-based tool. The tool can interactively cluster sets of molecules on the user's side and automatically download similar molecules from the server. The application requires the Java Runtime Environment 1.4 or higher, which can be automatically downloaded from Sun Microsystems or Apple Computer and installed during the first use of Ligand.Info on desktop systems that support Java (MS Windows, Mac OS, Solaris, and Linux). The Ligand.Info Meta-Database can be used for virtual high-throughput screening of new potential drugs. The presented examples show that, using a known antiviral drug as the query, the system was able to find other antiviral drugs and inhibitors.
Dynamically Allocated Virtual Clustering Management System Users Guide
2016-11-01
This report provides usage instructions for the DAVC (Dynamically Allocated Virtual Clustering) version 2.0 web application and is separated into sections detailing its use.
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community, not only as a processing component of large HPC simulations but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Kononowicz, Andrzej A; Berman, Anne H; Stathakarou, Natalia; McGrath, Cormac; Bartyński, Tomasz; Nowakowski, Piotr; Malawski, Maciej; Zary, Nabil
2015-09-10
Massive open online courses (MOOCs) have been criticized for focusing on presentation of short video clip lectures and asking theoretical multiple-choice questions. A potential way of vitalizing these educational activities in the health sciences is to introduce virtual patients. Experiences from such extensions in MOOCs have not previously been reported in the literature. This study analyzes technical challenges and solutions for offering virtual patients in health-related MOOCs and describes patterns of virtual patient use in one such course. Our aims are to reduce the technical uncertainty related to these extensions, point to aspects that could be optimized for a better learner experience, and raise prospective research questions by describing indicators of virtual patient use on a massive scale. The Behavioral Medicine MOOC was offered by Karolinska Institutet, a medical university, on the EdX platform in the autumn of 2014. Course content was enhanced by two virtual patient scenarios presented in the OpenLabyrinth system and hosted on the VPH-Share cloud infrastructure. We analyzed web server and session logs and a participant satisfaction survey. Navigation pathways were summarized using a visual analytics tool developed for the purpose of this study. The number of course enrollments reached 19,236. At the official closing date, 2317 participants (12.1% of total enrollment) had declared completing the first virtual patient assignment and 1640 (8.5%) participants confirmed completion of the second virtual patient assignment. Peak activity involved 359 user sessions per day. The OpenLabyrinth system, deployed on four virtual servers, coped well with the workload. Participant survey respondents (n=479) regarded the activity as a helpful exercise in the course (83.1%). Technical challenges reported involved poor or restricted access to videos in certain areas of the world and occasional problems with lost sessions. The visual analyses of user pathways display the parts of virtual patient scenarios that elicited less interest and may have been perceived as nonchallenging options. Analyzing the user navigation pathways allowed us to detect indications of both surface and deep approaches to the content material among the MOOC participants. This study reported on first inclusion of virtual patients in a MOOC. It adds to the body of knowledge by demonstrating how a biomedical cloud provider service can ensure technical capacity and flexible design of a virtual patient platform on a massive scale. The study also presents a new way of analyzing the use of branched virtual patients by visualization of user navigation pathways. Suggestions are offered on improvements to the design of virtual patients in MOOCs.
Parmodel: a web server for automated comparative modeling of proteins.
Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira
2004-12-24
Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in reduced time by distributing the modeling processes over a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .
Scalable Multi-Platform Distribution of Spatial 3D Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
iDBPs: a web server for the identification of DNA binding proteins.
Nimrod, Guy; Schushan, Maya; Szilágyi, András; Leslie, Christina; Ben-Tal, Nir
2010-03-01
The iDBPs server uses the three-dimensional (3D) structure of a query protein to predict whether it binds DNA. First, the algorithm predicts the functional region of the protein based on its evolutionary profile; the assumption is that large clusters of conserved residues are good markers of functional regions. Next, various characteristics of the predicted functional region, as well as global features of the protein, are calculated, such as the average surface electrostatic potential, the dipole moment, and cluster-based amino acid conservation patterns. Finally, a random forests classifier is used to predict whether the query protein is likely to bind DNA and to estimate the prediction confidence. We have trained and tested the classifier on various datasets and shown that it outperforms related methods. On a dataset that reflects the fraction of DNA binding proteins (DBPs) in a proteome, the area under the ROC curve was 0.90. The application of the server to an updated version of the N-Func database, which contains proteins of unknown function with solved 3D structures, suggested new putative DBPs for experimental study. http://idbps.tau.ac.il/
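A minimal sketch of the final classification step (the feature names follow the abstract; the training data below is synthetic, and scikit-learn is assumed as the modeling tool rather than the server's actual implementation):

    # Random forest over structure-derived features, with a confidence
    # estimate from class probabilities. Synthetic stand-in data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # features: avg surface electrostatic potential, dipole moment,
    # conservation-cluster score
    X = rng.normal(size=(300, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    p = clf.predict_proba([[1.2, 0.4, -0.3]])[0, 1]
    print(f"P(DNA-binding) = {p:.2f}")  # doubles as a prediction confidence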
COGNAT: a web server for comparative analysis of genomic neighborhoods.
Klimchuk, Olesya I; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Dibrova, Daria V; Mulkidjanian, Armen Y
2017-11-22
In prokaryotic genomes, functionally coupled genes can be organized in conserved gene clusters, enabling their coordinated regulation. Such clusters may contain one or several operons, which are groups of co-transcribed genes. Genes that evolved from a common ancestral gene by speciation (i.e. orthologs) are expected to have similar genomic neighborhoods in different organisms, whereas copies of a gene responsible for dissimilar functions (i.e. paralogs) may be found in dissimilar genomic contexts. Comparative analysis of genomic neighborhoods facilitates the prediction of co-regulated genes and helps to discern different functions in large protein families. Building on the attribution of gene sequences to clusters of orthologous groups of proteins (COGs), we set out to provide a method for visualization and comparative analysis of the genomic neighborhoods of evolutionarily related genes, together with a corresponding web server. Here we introduce the COmparative Gene Neighborhoods Analysis Tool (COGNAT), a web server for comparative analysis of genomic neighborhoods. The tool is based on the COG database, as well as the Pfam protein families database. As an example, we show the utility of COGNAT in identifying a new type of membrane protein complex formed by a paralog of one of the membrane subunits of the type 1 NADH:quinone oxidoreductase (COG1009) and a cytoplasmic protein of unknown function (COG3002). This article was reviewed by Drs. Igor Zhulin, Uri Gophna and Igor Rogozin.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or on a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficiency of computer use by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into server and storage virtualization. One commonly used dental storage system is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computing demands continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a way for dentists to minimize costs and maximize computer efficiency in the near future. This article provides useful information on current uses of cloud computing in dentistry.
Factors to keep in mind when introducing virtual microscopy.
Glatz-Krieger, Katharina; Spornitz, Udo; Spatz, Alain; Mihatsch, Michael J; Glatz, Dieter
2006-03-01
Digitization of glass slides and delivery of so-called virtual slides (VS) emulating a real microscope over the Internet have become reality due to recent improvements in technology. We have implemented a virtual microscope for instruction of medical students and for continuing medical education. Up to 30,000 images per slide are captured using a microscope with an automated stage. The images are post-processed and then served by a plain hypertext transfer protocol (http)-server. A virtual slide client (vMic) based on Macromedia's Flash MX, a highly accepted technology available on every modern Web browser, has been developed. All necessary virtual slide parameters are stored in an XML file together with the image. Evaluation of the courses by questionnaire indicated that most students and many but not all pathologists regard virtual slides as an adequate replacement for traditional slides. All our virtual slides are publicly accessible over the World Wide Web (WWW) at http://vmic.unibas.ch . Recently, several commercially available virtual slide acquisition systems (VSAS) have been developed that use various technologies to acquire and distribute virtual slides. These systems differ in speed, image quality, compatibility, viewer functionalities and price. This paper gives an overview of the factors to keep in mind when introducing virtual microscopy.
Design and implementation of an internet-based electrical engineering laboratory.
He, Zhenlei; Shen, Zhangbiao; Zhu, Shanan
2014-09-01
This paper describes an internet-based electrical engineering laboratory (IEE-Lab) with virtual and physical experiments at Zhejiang University. To combine the advantages of both experiment styles, the IEE-Lab adopts a Client/Server/Application framework integrating virtual and physical experiments. The design and workflow of the IEE-Lab are introduced. An analog electronics experiment is taken as an example to show the Flex plug-in design, data communication based on XML (Extensible Markup Language), experiment simulation modeled in Modelica, and the design of the control terminals. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Security model for VM in cloud
NASA Astrophysics Data System (ADS)
Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.
2013-03-01
Cloud computing is a new approach that emerged to meet the ever-increasing demand for computing resources and to reduce the operational costs and capital expenditure of IT services. As this way of computing allows data and applications to be stored away from the organization's own servers, it brings security issues such as virtualization security, distributed computing, application security, identity management, access control, and authentication. Although virtualization forms the basis of cloud computing, it poses many threats to securing the cloud. As most security threats lie at the virtualization layer, we propose a new Security Model for Virtual Machines in the Cloud (SMVC), in which every process is authenticated by a Trusted Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications, such as data mining, OLAP systems, and image processing, that require huge cloud resources deployed on one or more VMs.
Reiling, Denise M; Nusbaumer, Michael R
2007-12-01
Much has been written about the impact of the presence of a designated driver on patrons' consumption, but heretofore its impact on the behaviour of the server has been virtually ignored. The goal of this paper, then, was to explore the potential impact of the presence of a designated driver on alcoholic beverage servers' self-reported willingness to knowingly serve an already intoxicated customer. Chi-squared analysis of survey data collected from 938 licensed servers in the state of Indiana, USA, was performed. Approximately 43% of the bartenders surveyed reported that they either would be or might be willing to over-serve an already intoxicated customer. Of those who answered the follow-up question as to the conditions under which they would be willing to over-serve, almost 80% reported that they would do so if the patron were accompanied by a designated driver. The statistical significance of the relationship between these two variables (p < .001) raises the question of whether the Designated Driver Campaign has the latent function of enabling some servers to neutralize their responsibility for over-serving by disregarding other types of intoxication-related harm.
Extending the Virtual Solar Observatory (VSO) to Incorporate Data Analysis Capabilities (III)
NASA Astrophysics Data System (ADS)
Csillaghy, A.; Etesi, L.; Dennis, B.; Zarro, D.; Schwartz, R.; Tolbert, K.
2008-12-01
We will present a progress report on our activities to extend the data analysis capabilities of the VSO. Our efforts to date have focused on three areas: 1. Extending the data retrieval capabilities by developing a centralized data processing server. The server is built with Java, IDL (Interactive Data Language), and the SSW (Solar SoftWare) package, with all SSW-related instrument libraries and required calibration data. When a user requests VSO data that require preprocessing, the data are transparently sent to the server, processed, and returned to the user's IDL session for viewing and analysis. Any Java or IDL client can connect to the server. An IDL prototype for preparing and calibrating SOHO/EIT data will be demonstrated. 2. Improving the solar data search in SHOW SYNOP, a graphical user tool connected to the VSO in IDL. We introduce the Java-IDL interface, which allows a flexible, dynamic, and extendable way of searching the VSO, where all communications with the VSO are managed dynamically by standard Java tools. 3. Improving the image overlay capability to support coregistration of solar disk observations obtained from different orbital view angles, position angles, and distances, such as from the twin STEREO spacecraft.
Quality of service policy control in virtual private networks
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru
2004-04-01
This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination, and grade of service, and advertises these prices to its VPN customers (users). Because different QoS technologies produce different levels of QoS, they come with correspondingly different traffic classification and priority rules, and the internet service provider (ISP) may need to build complex mechanisms separately for each node. To reduce the burden of network configuration, policy control technologies are needed. We consider mainly the directory server, policy server, policy manager, and policy enforcers. The policy decision point (PDP) decides its controls according to policy rules; in the network, the policy enforcement point (PEP) applies the decisions to its controlled units. For IntServ and DiffServ, we adopt different policy control methods: (1) in IntServ, traffic uses the Resource Reservation Protocol (RSVP) to guarantee network resources; (2) in DiffServ, the policy server controls the DiffServ code points and per-hop behavior (PHB), and its PDP distributes information to each network node. The policy server functions include information searching, decision making, decision delivery, and auto-configuration. To show the effectiveness of QoS policy control, we carry out corresponding simulations.
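On the enforcement side, a DiffServ code point is ultimately just six bits in the IP header that an application or edge device sets. A minimal sketch of such marking from an application socket (the EF code point choice and the commented-out server address are illustrative assumptions):

    # Mark a TCP socket's traffic with a DiffServ code point (DSCP).
    import socket

    EF_DSCP = 46                  # Expedited Forwarding code point
    tos = EF_DSCP << 2            # DSCP occupies the upper 6 bits of the TOS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # s.connect(("server.example.org", 80))  # outgoing packets now carry EF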
Homesteading on the Web: The Queensland Department of Education Virtual Library.
ERIC Educational Resources Information Center
Cram, Jennifer; Allison, Myrl
1996-01-01
The Queensland Department of Education (Australia) developed a homesteading model as an alternative to the urban-built environment model of large multi-purpose networks. This resulted in the in-house development of a low-cost, stand-alone server and homepage. The charette technique was used to plan and design the Queensland Department of Education…
www.teld.net: Online Courseware Engine for Teaching by Examples and Learning by Doing.
ERIC Educational Resources Information Center
Huang, G. Q.; Shen, B.; Mak, K. L.
2001-01-01
Describes TELD (Teaching by Examples and Learning by Doing), a Web-based online courseware engine for higher education. Topics include problem-based learning; project-based learning; case methods; TELD as a Web server; course materials; TELD as a search engine; and TELD as an online virtual classroom for electronic delivery of electronic…
Grow Your Green Campus Organically!
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
When it comes to environmental savvy, Delta College (Michigan) is like a lot of small institutions of higher education: It has a passel of green efforts underway, which could fall under all sorts of headings. The IT organization at the 11,000-student community college campus has virtualized its server operations and is on track to roll out a…
SciServer: An Online Collaborative Environment for Big Data in Research and Education
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr
2017-01-01
For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute's virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are building on these features in a new system called "SciServer Courseware," which will allow instructors to share assignments with their students, letting students engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. Part of that growth has been the addition of the SkyQuery component, which allows simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.
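As a flavor of the query libraries mentioned above, a notebook cell on SciServer Compute might query SDSS through CasJobs roughly as follows; the call names follow the public SciScript-Python client library, but treat the exact signatures, context name, and credentials here as assumptions:

    # Querying SDSS from a SciServer Compute notebook (sketch).
    from SciServer import Authentication, CasJobs

    Authentication.login("my_user", "my_password")   # placeholder credentials
    sql = "SELECT TOP 10 objid, ra, dec FROM PhotoObj"
    df = CasJobs.executeQuery(sql, context="DR14")   # returns a pandas DataFrame
    print(df.head())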
Data Publishing and Sharing Via the THREDDS Data Repository
NASA Astrophysics Data System (ADS)
Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.
2007-12-01
The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied via some intellectual challenge, but often located in different organizations and locations. A critical component to these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fasion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregrated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client initiated, on demand, location transparent storage for data of any type that can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to: * securely store and "own" data files and associated metadata * upload files via HTTP and gridftp * upload a collection of data as single file * modify and restructure repository contents * incorporate metadata provided by the user * generate additional metadata programmatically * edit individual metadata elements The TDR can exist separately from a TDS, serving content via HTTP. Also, it can work in conjunction with the TDS, which includes functionality to provide: * access to data in a variety of formats via -- OPeNDAP -- OGC Web Coverage Service (for gridded datasets) -- bulk HTTP file transfer * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats * serving of very large volume datasets, such as NEXRAD radar * aggregation into virtual datasets * subsetting via OPeNDAP and NetCDF Subsetting services This talk will discuss TDR/TDS capabilities as well as how users can install this software to create their own repositories.
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear to be unsuitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
Combining Quick-Turnaround and Batch Workloads at Scale
NASA Technical Reports Server (NTRS)
Matthews, Gregory A.
2012-01-01
NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.
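PBS Professional hooks are Python scripts that the server runs at events such as job submission. The sketch below shows the general shape of a queuejob hook that routes short jobs to a dedicated quick-turnaround queue; the queue name and walltime threshold are assumptions for illustration, not the NAS configuration.

```python
# Hypothetical PBS Professional "queuejob" hook: route short jobs to a
# quick-turnaround queue. Queue name and threshold are illustrative only;
# the real NAS hooks and scheduler changes are more involved. The "pbs"
# module is provided by the PBS server when it executes the hook.
import pbs

QUICK_QUEUE = "quick"          # assumed name of the quick-turnaround queue
THRESHOLD_SECONDS = 15 * 60    # jobs under 15 minutes count as "quick"

try:
    event = pbs.event()
    job = event.job
    walltime = job.Resource_List["walltime"]
    if walltime is not None and int(walltime) < THRESHOLD_SECONDS:
        # Reassign the job to the quick queue before it is enqueued.
        job.queue = pbs.server().queue(QUICK_QUEUE)
    event.accept()
except SystemExit:
    pass  # event.accept()/event.reject() terminate the hook this way
except Exception as err:
    pbs.event().reject("queuejob hook failed: %s" % err)
```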
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When encountering multiple complicated models, multi-source structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To address these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China was established.
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099
Back to the future: virtualization of the computing environment at the W. M. Keck Observatory
NASA Astrophysics Data System (ADS)
McCann, Kevin L.; Birch, Denny A.; Holt, Jennifer M.; Randolph, William B.; Ward, Josephine A.
2014-07-01
Over its two decades of science operations, the W.M. Keck Observatory computing environment has evolved to contain a distributed hybrid mix of hundreds of servers, desktops and laptops spanning multiple hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit, which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost savings potential. Very significant among these was the substantial power savings, which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, and virtualized desktops for typical office users as well as the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, and disaster readiness and recovery.
The ClusPro web server for protein-protein docking
Kozakov, Dima; Hall, David R.; Xia, Bing; Porter, Kathryn A.; Padhorny, Dzmitry; Yueh, Christine; Beglov, Dmitri; Vajda, Sandor
2017-01-01
The ClusPro server (https://cluspro.org) is a widely used tool for protein-protein docking. The server provides a simple home page for basic use, requiring only two files in Protein Data Bank format. However, ClusPro also offers a number of advanced options to modify the search that include the removal of unstructured protein regions, applying attraction or repulsion, accounting for pairwise distance restraints, constructing homo-multimers, considering small angle X-ray scattering (SAXS) data, and finding heparin binding sites. Six different energy functions can be used depending on the type of proteins. Docking with each energy parameter set results in ten models defined by centers of highly populated clusters of low energy docked structures. This protocol describes the use of the various options, the construction of auxiliary restraints files, the selection of the energy parameters, and the analysis of the results. Although the server is heavily used, runs are generally completed in < 4 hours. PMID:28079879
A digital atlas of breast histopathology: an application of web based virtual microscopy
Lundin, M; Lundin, J; Helin, H; Isola, J
2004-01-01
Aims: To develop an educationally useful atlas of breast histopathology, using advanced web based virtual microscopy technology. Methods: By using a robotic microscope and software adopted and modified from the aerial and satellite imaging industry, a virtual microscopy system was developed that allows fully automated slide scanning and image distribution via the internet. More than 150 slides were scanned at high resolution with an oil immersion ×40 objective (numerical aperture, 1.3) and archived on an image server residing in a high speed university network. Results: A publicly available website was constructed, http://www.webmicroscope.net/breastatlas, which features a comprehensive virtual slide atlas of breast histopathology according to the World Health Organisation 2003 classification. Users can view any part of an entire specimen at any magnification within a standard web browser. The virtual slides are supplemented with concise textual descriptions, but can also be viewed without diagnostic information for self assessment of histopathology skills. Conclusions: Using the technology described here, it is feasible to develop clinically and educationally useful virtual microscopy applications. Web based virtual microscopy will probably become widely used at all levels in pathology teaching. PMID:15563669
A quantitative model of application slow-down in multi-resource shared systems
Lim, Seung-Hwan; Kim, Youngjae
2016-12-26
Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
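The abstract states that dilation factors are a quadratic function of the jobs' loading vectors. Below is a minimal numerical sketch of that idea; the loading vectors, the interaction matrix, and the assumed form d_i = 1 + l_i^T M l_total are illustrative stand-ins, not the paper's calibrated model.

```python
# Minimal sketch of a quadratic dilation-factor model. Loading vectors,
# the interaction matrix M, and the exact functional form are illustrative
# assumptions, not the paper's calibrated D-factor model.
import numpy as np

# Per-job loading vectors: fraction of CPU, disk, and network demanded.
jobs = {
    "job_a": np.array([0.6, 0.1, 0.2]),
    "job_b": np.array([0.3, 0.7, 0.1]),
}

# Interaction (loading) matrix: how contention on one resource inflates
# the cost of using another. The identity would mean independent resources.
M = np.array([
    [1.0, 0.2, 0.1],
    [0.2, 1.5, 0.3],
    [0.1, 0.3, 0.8],
])

total_load = sum(jobs.values())  # aggregate load on the shared platform

for name, load in jobs.items():
    # Assumed form: dilation grows quadratically with co-located load.
    dilation = 1.0 + load @ M @ total_load
    print(f"{name}: dilation factor ~ {dilation:.2f}")
```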
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
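The context-clustering step described above, grouping consecutive similar shots into scene-level units, can be sketched with a simple similarity threshold over per-shot feature vectors. The features and threshold here are assumptions; the paper drives this step with MPEG-7 descriptions, controlled term lists and rules engines.

```python
# Minimal sketch of context clustering: merge consecutive shots whose
# feature vectors are similar into one scene. Features and threshold are
# illustrative; the paper uses MPEG-7 context information instead.
import numpy as np

def cluster_shots(shot_features, threshold=0.85):
    """Group consecutive shots into scenes by cosine similarity."""
    scenes, current = [], [0]
    for i in range(1, len(shot_features)):
        a, b = shot_features[i - 1], shot_features[i]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim >= threshold:
            current.append(i)          # same scene as the previous shot
        else:
            scenes.append(current)     # close the scene, start a new one
            current = [i]
    scenes.append(current)
    return scenes

shots = [np.random.rand(16) for _ in range(12)]  # stand-in shot descriptors
print(cluster_shots(shots))
```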
Development of a teledermatopathology consultation system using virtual slides
2012-01-01
Background: An online consultation system using virtual slides (whole slide images; WSI) has been developed for pathological diagnosis, and could help compensate for the shortage of pathologists, especially in the field of dermatopathology and in other fields dealing with difficult cases. This study focused on the performance and future potential of the system. Method: In our system, histological specimens on glass slides are digitalized by a virtual slide instrument, converted into web data, and uploaded to an open server. Using our own purpose-built online system, we then input patient details such as age, gender, affected region, clinical data, past history and other related items. We next select up to ten consultants. Finally we send an e-mail to all consultants simultaneously through a single command. The consultant receives an e-mail containing an ID and password which is used to access the open server and inspect the images and other data associated with the case. The consultant makes a diagnosis, which is sent to us along with comments. Because this was a pilot study, we also conducted several questionnaires with consultants concerning the quality of images, operability, usability, and other issues. Results: We solicited consultations for 36 cases, including cases of tumor, involving one to eight consultants in the field of dermatopathology. No problems were noted concerning the images or the functioning of the system on the sender or receiver sides. The quickest diagnosis was received only 18 minutes after sending our data. This is much faster than in conventional consultation using glass slides. There were no major problems relating to the diagnosis, although there were some minor differences of opinion between consultants. The results of questionnaires confirmed the usability of this system for pathological consultation (16 out of 23 consultants). Conclusion: We have developed a novel teledermatopathological consultation system using virtual slides, and investigated the usefulness of the system. The results demonstrate that our system can be a useful tool for international medical work, and we anticipate its wider application in the future. Virtual slides: The virtual slides for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1902376044831574 PMID:23237667
Predictors of Interpersonal Trust in Virtual Distributed Teams
2008-09-01
understand systems that are very complex in nature. Such understanding is essential to facilitate building or maintaining operators' mental models of the...a significant impact on overall system performance. Specifically, the level of automation that combined human generation of options with computer...and/or computer servers had a significant impact on automated system performance. Additionally, Parasuraman, Sheridan, & Wickens (2000) proposed
Using Virtual Servers to Teach the Implementation of Enterprise-Level DBMSs: A Teaching Note
ERIC Educational Resources Information Center
Wagner, William P.; Pant, Vik
2010-01-01
One of the areas where demand has remained strong for MIS students is in the area of database management. Since the early days, this topic has been a mainstay in the MIS curriculum. Students of database management today typically learn about relational databases, SQL, normalization, and how to design and implement various kinds of database…
OASIS: a data and software distribution service for Open Science Grid
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.
2014-06-01
The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for distributing data and software is desirable. The architecture of the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.
WriteShield: A Pseudo Thin Client for Prevention of Information Leakage
NASA Astrophysics Data System (ADS)
Kirihata, Yasuhiro; Sameshima, Yoshiki; Onoyama, Takashi; Komoda, Norihisa
While thin-client systems are diffusing as an effective security method in enterprises and organizations, there is a new approach called the pseudo thin-client system. In this system, the local disks of clients are write-protected and user data is forced to be saved on the central file server, realizing the same security effect as conventional thin-client systems. Since it takes a purely software-based approach, it does not require hardware enhancement of the network and servers, which reduces the installation cost. However, there are several problems, such as no write control for external media, the possibility of memory depletion, and lower security because of the exceptional write permission granted to system processes. In this paper, we propose WriteShield, a pseudo thin-client system which solves these issues. In this system, the local disks are write-protected with a volume filter driver, and a virtual cache mechanism extends the memory cache size available for the write protection. This paper presents the design and implementation details of WriteShield. In addition, we describe the security analysis and a simulation evaluation of paging algorithms for the virtual cache mechanism, and measure the disk I/O performance to verify its feasibility in an actual environment.
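The virtual cache idea, intercepting writes to a protected disk and keeping the modified blocks elsewhere, can be sketched as a copy-on-write block map with spill-over to backing storage. This is a conceptual Python sketch, not the Windows volume filter driver the paper describes; the block size and eviction policy are assumptions.

```python
# Conceptual sketch of a copy-on-write "virtual cache" over a read-only disk:
# reads fall through to the protected image, writes go to an in-memory map,
# and cold written blocks spill to a file when the cache is full. This
# illustrates the mechanism only, not the paper's kernel-mode driver.
from collections import OrderedDict

BLOCK_SIZE = 4096
CACHE_BLOCKS = 1024  # assumed memory budget: 4 MiB of dirty blocks

class WriteShieldedDisk:
    def __init__(self, image_path, spill_path):
        self.image = open(image_path, "rb")    # protected local disk image
        self.spill = open(spill_path, "w+b")   # overflow area
        self.cache = OrderedDict()             # block_no -> bytes (LRU order)
        self.spilled = {}                      # block_no -> spill offset

    def read_block(self, n):
        if n in self.cache:
            self.cache.move_to_end(n)          # refresh LRU position
            return self.cache[n]
        if n in self.spilled:
            self.spill.seek(self.spilled[n])
            return self.spill.read(BLOCK_SIZE)
        self.image.seek(n * BLOCK_SIZE)        # unmodified block
        return self.image.read(BLOCK_SIZE)

    def write_block(self, n, data):
        self.cache[n] = data                   # never touch the image itself
        self.cache.move_to_end(n)
        if len(self.cache) > CACHE_BLOCKS:
            old, old_data = self.cache.popitem(last=False)
            self.spill.seek(0, 2)              # append to the spill file
            self.spilled[old] = self.spill.tell()
            self.spill.write(old_data)
```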
Template based protein structure modeling by global optimization in CASP11.
Joo, Keehyoung; Joung, InSuk; Lee, Sun Young; Kim, Jong Yun; Cheng, Qianyi; Manavalan, Balachandran; Joung, Jong Young; Heo, Seungryong; Lee, Juyong; Nam, Mikyung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-based modeling (TBM) of CASP11 targets, we have developed three new protein modeling protocols (nns for server prediction and LEE and LEER for human prediction) by improving upon our previous CASP protocols (CASP7 through CASP10). We applied the powerful global optimization method of conformational space annealing to three stages of optimization, including multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain remodeling. For more successful fold recognition, a new alignment method called CRFalign was developed. It can incorporate sensitive positional and environmental dependence in alignment scores as well as strong nonlinear correlations among various features. Modifications and adjustments were made to the form of the energy function and weight parameters pertaining to the chain building procedure. For the side-chain remodeling step, residue-type dependence was introduced to the cutoff value that determines the entry of a rotamer to the side-chain modeling library. The improved performance of the nns server method is attributed to successful fold recognition achieved by combining several methods including CRFalign and to the current modeling formulation that can incorporate native-like structural aspects present in multiple templates. The LEE protocol is identical to the nns one except that CASP11-released server models are used as templates. The success of LEE in utilizing CASP11 server models indicates that proper template screening and template clustering assisted by appropriate cluster ranking promises a new direction to enhance protein 3D modeling. Proteins 2016; 84(Suppl 1):221-232. © 2015 Wiley Periodicals, Inc.
iDBPs: a web server for the identification of DNA binding proteins
Nimrod, Guy; Schushan, Maya; Szilágyi, András; Leslie, Christina; Ben-Tal, Nir
2010-01-01
Summary: The iDBPs server uses the three-dimensional (3D) structure of a query protein to predict whether it binds DNA. First, the algorithm predicts the functional region of the protein based on its evolutionary profile; the assumption is that large clusters of conserved residues are good markers of functional regions. Next, various characteristics of the predicted functional region as well as global features of the protein are calculated, such as the average surface electrostatic potential, the dipole moment and cluster-based amino acid conservation patterns. Finally, a random forests classifier is used to predict whether the query protein is likely to bind DNA and to estimate the prediction confidence. We have trained and tested the classifier on various datasets and shown that it outperformed related methods. On a dataset that reflects the fraction of DNA binding proteins (DBPs) in a proteome, the area under the ROC curve was 0.90. The application of the server to an updated version of the N-Func database, which contains proteins of unknown function with solved 3D-structure, suggested new putative DBPs for experimental studies. Availability: http://idbps.tau.ac.il/ Contact: NirB@tauex.tau.ac.il Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20089514
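The final classification step, a random forest over features of the predicted functional region, can be sketched with scikit-learn. The feature columns and training data below are invented placeholders; the server's real features and training sets are those described in the abstract.

```python
# Minimal sketch of the classification stage: a random forest over
# per-protein features such as surface electrostatics and dipole moment.
# Feature values here are random placeholders, not the iDBPs training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Columns (assumed): avg surface potential, dipole moment, patch conservation.
X_train = rng.normal(size=(200, 3))
y_train = rng.integers(0, 2, size=200)   # 1 = DNA-binding, 0 = not

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

query = rng.normal(size=(1, 3))          # features of one query structure
proba = clf.predict_proba(query)[0, 1]   # confidence that it binds DNA
print(f"P(DNA-binding) = {proba:.2f}")
```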
Data grid: a distributed solution to PACS
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
In a hospital, various kinds of medical images acquired from different modalities are generally used and stored in different departments, and each modality usually has several attached workstations to display or process images. For better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious. Besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to develop a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirroring, clustering and grid.
Lehmann, Eldon D.; DeWolf, Dennis K.; Novotny, Christopher A.; Reed, Karen; Gotwals, Robert R.
2014-01-01
Background. AIDA is a widely available downloadable educational simulator of glucose-insulin interaction in diabetes. Methods. A web-based version of AIDA was developed that utilises a server-based architecture with HTML FORM commands to submit numerical data from a web-browser client to a remote web server. AIDA online, located on a remote server, passes the received data through Perl scripts which interactively produce 24 hr insulin and glucose simulations. Results. AIDA online allows users to modify the insulin regimen and diet of 40 different prestored “virtual diabetic patients” on the internet or create new “patients” with user-generated regimens. Multiple simulations can be run, with graphical results viewed via a standard web-browser window. To date, over 637,500 diabetes simulations have been run at AIDA online, from all over the world. Conclusions. AIDA online's functionality is similar to the downloadable AIDA program, but the mode of implementation and usage is different. An advantage to utilising a server-based application is the flexibility that can be offered. New modules can be added quickly to the online simulator. This has facilitated the development of refinements to AIDA online, which have instantaneously become available around the world, with no further local downloads or installations being required. PMID:24511312
Collaboration in a Wireless Grid Innovation Testbed by Virtual Consortium
NASA Astrophysics Data System (ADS)
Treglia, Joseph; Ramnarine-Rieks, Angela; McKnight, Lee
This paper describes the formation of the Wireless Grid Innovation Testbed (WGiT) coordinated by a virtual consortium involving academic and non-academic entities. Syracuse University and Virginia Tech are primary university partners with several other academic, government, and corporate partners. Objectives include: 1) coordinating knowledge sharing, 2) defining key parameters for wireless grids network applications, 3) dynamically connecting wired and wireless devices, content and users, 4) linking to VT-CORNET, Virginia Tech Cognitive Radio Network Testbed, 5) forming ad hoc networks or grids of mobile and fixed devices without a dedicated server, 6) deepening understanding of wireless grid application, device, network, user and market behavior through academic, trade and popular publications including online media, 7) identifying policy that may enable evaluated innovations to enter US and international markets and 8) implementation and evaluation of the international virtual collaborative process.
Real-time, rapidly updating severe weather products for virtual globes
NASA Astrophysics Data System (ADS)
Smith, Travis M.; Lakshmanan, Valliappa
2011-01-01
It is critical that weather forecasters are able to put severe weather information from a variety of observational and modeling platforms into a geographic context so that warning information can be effectively conveyed to the public, emergency managers, and disaster response teams. The availability of standards for the specification and transport of virtual globe data products has made it possible to generate spatially precise, geo-referenced images and to distribute these centrally created products via a web server to a wide audience. In this paper, we describe the data and methods for enabling severe weather threat analysis information inside a KML framework. The method of creating severe weather diagnosis products that are generated and translating them to KML and image files is described. We illustrate some of the practical applications of these data when they are integrated into a virtual globe display. The availability of standards for interoperable virtual globe clients has not completely alleviated the need for custom solutions. We conclude by pointing out several of the limitations of the general-purpose virtual globe clients currently available.
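Translating a gridded severe-weather product into KML amounts to emitting placemarks or ground overlays with geographic bounds and timestamps. Below is a minimal sketch using only the Python standard library; the element names follow the public KML schema, while the bounds, timestamp and image name are invented placeholders.

```python
# Minimal sketch: wrap one severe-weather diagnostic image as a KML
# GroundOverlay so a virtual globe can drape it at the right place and
# time. Bounds, timestamp and image name are invented placeholders.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def overlay_kml(name, img_href, north, south, east, west, when):
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    ov = ET.SubElement(doc, f"{{{KML_NS}}}GroundOverlay")
    ET.SubElement(ov, f"{{{KML_NS}}}name").text = name
    ts = ET.SubElement(ov, f"{{{KML_NS}}}TimeStamp")
    ET.SubElement(ts, f"{{{KML_NS}}}when").text = when
    icon = ET.SubElement(ov, f"{{{KML_NS}}}Icon")
    ET.SubElement(icon, f"{{{KML_NS}}}href").text = img_href
    box = ET.SubElement(ov, f"{{{KML_NS}}}LatLonBox")
    for tag, val in [("north", north), ("south", south),
                     ("east", east), ("west", west)]:
        ET.SubElement(box, f"{{{KML_NS}}}{tag}").text = str(val)
    return ET.tostring(kml, encoding="unicode")

print(overlay_kml("MESH 2011-05-24 22:10Z", "mesh_2210.png",
                  37.5, 33.0, -95.0, -100.5, "2011-05-24T22:10:00Z"))
```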
RSAT 2018: regulatory sequence analysis tools 20th anniversary.
Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane
2018-05-02
RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.
HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters
NASA Astrophysics Data System (ADS)
Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge
2015-12-01
In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was focused on verifying the functionalities of Windows HPC, its performance, support of commercial tools and the integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We will present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We will explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we will present several performance tests we carried out to verify Windows HPC performance and scalability.
Analysis of Cloud-Based Database Systems
2015-06-01
EU) citizens under the Patriot Act [3]. Unforeseen virtualization bugs have caused wide-reaching outages [4], leaving customers helpless to assist...collected from SQL Server Profiler traces. We analyze the trace results captured from our test bed both before and after increasing system resources...cloud test-bed. A. DATA COLLECTION, PARSING, AND ORGANIZATION Once we finished collecting the trace data, we knew we needed to have as close a
Effectiveness of Telerehabilitation for OIF/OEF Returnees with Combat Related Trauma
2015-02-01
Our telerehabilitation care coordination team is organized under Steve Scott, MD, Chief Physical Medicine and Rehabilitation Services VA at the...for communication between care coordinators and study enrollees. Separate "virtual rooms" have been set up on the 5 VA server to facilitate care...characterize rehabilitation trajectories over time in the areas of function, cognition, psychosocial adjustment, integration into society and mental health
A web server for analysis, comparison and prediction of protein ligand binding sites.
Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S
2016-03-25
One of the major challenges in the field of systems biology is to understand the interaction between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named 'LPIcom' to facilitate users in understanding protein-ligand interaction. Analysis, comparison and prediction modules are available in the 'LPIcom' server to predict protein-ligand interacting residues for 824 ligands. Each ligand must have at least 30 protein binding sites in the PDB. The analysis module of the server can identify residues preferred in interaction and binding motifs for a given ligand; for example, the residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module of the server allows comparing the protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that the ATP, ADP and GTP ligands fall in the same cluster and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos of ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for the analysis of ligand-interacting residues. This server is available for public use at http://crdd.osdd.net/raghava/lpicom.
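Propensity-based prediction of interacting residues, as used by the module above, amounts to comparing how often each residue type occurs in known binding sites versus overall. A toy sketch of that calculation follows, with invented counts; the server's actual propensities are derived from PDB-wide statistics.

```python
# Toy sketch of residue propensity scoring for one ligand. The counts are
# invented; the LPIcom server derives its propensities from PDB complexes.
binding_counts = {"GLY": 120, "LYS": 95, "ARG": 90, "ALA": 30, "LEU": 25}
background_counts = {"GLY": 700, "LYS": 550, "ARG": 500,
                     "ALA": 900, "LEU": 950}

total_binding = sum(binding_counts.values())
total_background = sum(background_counts.values())

def propensity(res):
    """Ratio of a residue's frequency in binding sites to its background."""
    p_site = binding_counts[res] / total_binding
    p_bg = background_counts[res] / total_background
    return p_site / p_bg

for res in binding_counts:
    print(f"{res}: propensity {propensity(res):.2f}")
# Residues with propensity > 1 (here GLY, LYS, ARG) are enriched in
# ATP-like binding sites, matching the example in the abstract.
```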
Astronomical virtual observatory and the place and role of Bulgarian one
NASA Astrophysics Data System (ADS)
Petrov, Georgi; Dechev, Momchil; Slavcheva-Mihova, Luba; Duchlev, Peter; Mihov, Bojko; Kochev, Valentin; Bachev, Rumen
2009-07-01
A virtual observatory can be defined as a collection of integrated astronomical data archives and software tools that utilize computer networks to create an environment in which research can be conducted. Several countries have initiated national virtual observatory programs that combine existing databases from ground-based and orbiting observatories, scientific facilities especially equipped to detect and record naturally occurring phenomena. As a result, data from all the world's major observatories will be available to all users and to the public. This is significant not only because of the immense volume of astronomical data but also because the data on stars and galaxies have been compiled from observations in a variety of wavelengths: optical, radio, infrared, gamma-ray, X-ray and more. In a virtual observatory environment, all of these data are integrated so that they can be synthesized and used in a given study. In the autumn of 2001 (26.09.2001), six European organizations (ESO, ESA, AstroGrid, CDS, CNRS, Jodrell Bank) initiated the establishment of the Astronomical Virtual Observatory (AVO) (Dolensky et al., 2003). Its aims have been outlined as follows:
- To provide comparative analysis of large sets of multiwavelength data;
- To reuse data collected by a single source;
- To provide uniform access to data;
- To make data available to less-advantaged communities;
- To be an educational tool.
The virtual observatory includes:
- Tools that make it easy to locate and retrieve data from catalogues, archives, and databases worldwide;
- Tools for data analysis, simulation, and visualization;
- Tools to compare observations with results obtained from models, simulations and theory;
- Interoperability: services that can be used regardless of the client's computing platform, operating system and software capabilities;
- Access to data in near real-time, archived data and historical data;
- Additional information: documentation, user guides, reports, publications, news and so on.
This rapid growth of astronomical data and the necessity of easy access to those data led to the foundation of the International Virtual Observatory Alliance (IVOA). The IVOA was formed in June 2002. By January 2005, the IVOA had grown to include 15 funded VO projects from Australia, Canada, China, Europe, France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, the United Kingdom, and the United States. At present, Bulgaria is not a member of the European Astronomical Virtual Observatory and, as the Bulgarian Virtual Observatory is not a legal entity, we are not members of the IVOA. The main purpose of the project is for the Bulgarian Virtual Observatory to join the leading virtual astronomical institutions in the world. Initially the Bulgarian Virtual Observatory will include:
- the BG Galaxian virtual observatory;
- the BG Solar virtual observatory;
- the Star Clusters department of the IA, BAS;
- the WFPDB group of the IA, BAS.
All available data will be integrated into the Bulgarian centers of astronomical data, conducted by the Wide Field Plate Archive data centre. For the above purpose, PostgreSQL and/or MySQL will be installed on the server of BG-VO, together with SAADA tools and ESO-MEX and/or the DAL ToolKit to transform our FITS files into a standard format for VO tools. Some of the participants became acquainted with the principles of these products during the "Days of the Virtual Observatory" in Sofia, January 2008.
Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V
2001-06-01
In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADTs)/demographics, orders, appointment notifications, doctors update, and results.
OWGIS 2.0: Open Source Java Application that Builds Web GIS Interfaces for Desktop and Mobile Devices
NASA Astrophysics Data System (ADS)
Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.
2016-12-01
OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using Compass, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. This application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; the ability to download data; and display of text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.
DiRE: identifying distant regulatory elements of co-expressed genes
Gotea, Valer; Ovcharenko, Ivan
2008-01-01
Regulation of gene expression in eukaryotic genomes is established through a complex cooperative activity of proximal promoters and distant regulatory elements (REs) such as enhancers, repressors and silencers. We have developed a web server named DiRE, based on the Enhancer Identification (EI) method, for predicting distant regulatory elements in higher eukaryotic genomes, namely for determining their chromosomal location and functional characteristics. The server uses gene co-expression data, comparative genomics and profiles of transcription factor binding sites (TFBSs) to determine TFBS-association signatures that can be used for discriminating specific regulatory functions. DiRE's unique feature is its ability to detect REs outside of proximal promoter regions, as it takes advantage of the full gene locus to conduct the search. DiRE can predict common REs for any set of input genes for which the user has prior knowledge of co-expression, co-function or other biologically meaningful grouping. The server predicts function-specific REs consisting of clusters of specifically-associated TFBSs and it also scores the association of individual transcription factors (TFs) with the biological function shared by the group of input genes. Its integration with the Array2BIO server allows users to start their analysis with raw microarray expression data. The DiRE web server is freely available at http://dire.dcode.org. PMID:18487623
Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.
2017-12-01
THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and automatic configuration management and deployment, this service usually suffers service downtimes and time-consuming configuration tasks, mainly when it is used intensively, as is usual within the scientific community (e.g. climate). Instead of the typical installation and configuration of single or multiple independent, manually configured THREDDS servers, this work presents automatic provisioning, deployment and orchestration of a cluster of THREDDS servers. The solution is based on Ansible playbooks, used to automatically control the deployment and configuration setup of the infrastructure and to manage the datasets available in the THREDDS instances. The playbooks are based on modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of instances of the THREDDS server. This implementation allows configuring different infrastructure and deployment scenarios, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any of the workers fails, another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a single THREDDS worker. In conclusion, the proposed configuration represents a significant improvement with respect to configurations based on non-collaborative THREDDS instances.
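At its core, the frontend/backend split described above is a request dispatcher over a pool of interchangeable THREDDS workers. The sketch below illustrates the idea with a minimal round-robin selector with a health-check fallback; the real deployment uses a standard load balancer configured by Ansible roles, and the worker URLs here are hypothetical.

```python
# Conceptual sketch of the frontend's job: pick a healthy backend THREDDS
# worker for each request, round-robin. The actual setup uses a standard
# load balancer driven by Ansible; the worker URLs are hypothetical.
import itertools
import urllib.request

WORKERS = [
    "http://thredds-worker1:8080",
    "http://thredds-worker2:8080",
    "http://thredds-worker3:8080",
]
_rr = itertools.cycle(WORKERS)

def healthy(base_url):
    """Probe a worker; any HTTP response counts as alive."""
    try:
        urllib.request.urlopen(base_url + "/thredds/catalog.html", timeout=2)
        return True
    except OSError:
        return False

def pick_worker():
    """Return the next healthy worker, skipping failed instances."""
    for _ in range(len(WORKERS)):
        candidate = next(_rr)
        if healthy(candidate):
            return candidate
    raise RuntimeError("no healthy THREDDS workers available")

print("dispatching to", pick_worker())
```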
Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.
Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong
2018-01-01
Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). The features of directional antennas and of visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed by considering the local-balance factor. Furthermore, an associated network coding mechanism is utilized to construct the DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given for evaluating the performance. The experimental results prove that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than the existing methods.
Situational Awareness of Network System Roles (SANSR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huffer, Kelly M; Reed, Joel W
In a large enterprise it is difficult for cyber security analysts to know what services and roles every machine on the network is performing (e.g., file server, domain name server, email server). Using network flow data, already collected by most enterprises, we developed a proof-of-concept tool that discovers the roles of a system using both clustering and categorization techniques. The tool's role information would allow cyber analysts to detect consequential changes in the network, initiate incident response plans, and optimize their security posture. The results of this proof-of-concept tool proved to be quite accurate on three real data sets. We will present the algorithms used in the tool, describe the results of preliminary testing, provide visualizations of the results, and discuss areas for future work. Without this kind of situational awareness, cyber analysts cannot quickly diagnose an attack or prioritize remedial actions.
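As a rough illustration of role discovery by clustering, per-host features extracted from flow records (for instance, the fraction of traffic on characteristic ports) can be clustered so that machines with similar service profiles group together. The feature choice and data below are invented; the abstract does not disclose the tool's actual algorithms.

```python
# Illustrative sketch of clustering hosts by flow-derived features so that
# machines with similar service profiles (mail vs. DNS vs. web) land in
# the same cluster. Feature choice and values are invented placeholders.
import numpy as np
from sklearn.cluster import KMeans

# Rows: hosts. Columns (assumed): share of flows on ports 25, 53, 80/443.
hosts = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
features = np.array([
    [0.85, 0.05, 0.10],   # mostly SMTP -> likely mail server
    [0.02, 0.90, 0.08],   # mostly DNS  -> likely name server
    [0.80, 0.02, 0.18],   # mostly SMTP -> likely mail server
    [0.05, 0.03, 0.92],   # mostly HTTP -> likely web server
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for host, label in zip(hosts, labels):
    print(f"{host}: role cluster {label}")
```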
RADER: a RApid DEcoy Retriever to facilitate decoy based assessment of virtual screening.
Wang, Ling; Pang, Xiaoqian; Li, Yecheng; Zhang, Ziying; Tan, Wen
2017-04-15
Evaluation of the capacity for separating actives from challenging decoys is a crucial metric of performance related to molecular docking or a virtual screening workflow. The Directory of Useful Decoys (DUD) and its enhanced version (DUD-E) provide a benchmark for molecular docking, although they only contain a limited set of decoys for limited targets. DecoyFinder was released to compensate for the limitations of DUD and DUD-E in building target-specific decoy sets. However, desirable query template design, generation of multiple decoy sets of similar quality, and computational speed remain bottlenecks, particularly when the numbers of queried actives and retrieved decoys increase to hundreds or more. Here, we developed a program suite called RApid DEcoy Retriever (RADER) to facilitate the decoy-based assessment of virtual screening. This program adopts a novel database-management regime that supports rapid and large-scale retrieval of decoys, enables high portability of databases, and provides multifaceted options for designing initial query templates from a large number of active ligands and generating subtle decoy sets. RADER provides two operational modes: as a command-line tool and on a web server. Validation of the performance and efficiency of RADER was also conducted and is described. The RADER web server and a local version are freely available at http://rcidm.org/rader/. Contact: lingwang@scut.edu.cn or went@scut.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
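Decoy retrieval of the DUD/DUD-E kind selects, for each active, molecules with matched physicochemical properties but different topology. The sketch below shows property-matched ranking over precomputed descriptors; the property set, scales and data are assumptions, not RADER's actual matching scheme.

```python
# Schematic sketch of property-matched decoy selection: for one active,
# rank candidate molecules by closeness in physicochemical property space.
# Property set and scales are assumptions, not RADER's exact scheme.
import numpy as np

# Columns (assumed): molecular weight, logP, H-bond donors, H-bond acceptors.
active = np.array([350.0, 2.1, 2, 5])
candidates = {
    "mol_a": np.array([345.0, 2.3, 2, 5]),
    "mol_b": np.array([512.0, 4.8, 0, 9]),
    "mol_c": np.array([338.0, 1.9, 3, 4]),
}
scale = np.array([100.0, 1.0, 1.0, 2.0])   # rough per-property scales

def property_distance(a, b):
    """Scaled Euclidean distance in property space."""
    return float(np.linalg.norm((a - b) / scale))

ranked = sorted(candidates,
                key=lambda m: property_distance(active, candidates[m]))
print("best property-matched decoys first:", ranked)
```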
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing (NGS) is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve its ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and the NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary to conduct their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load and available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua
2014-01-01
This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources, which contain different kinds of operating systems and user data, are stored on a remote server, how to manage these network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.
The ALICE Software Release Validation cluster
NASA Astrophysics Data System (ADS)
Berzano, D.; Krzewicki, M.
2015-12-01
One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
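The cluster appliance above targets any cloud exposing the standard EC2 interface. As a rough illustration of what that interface looks like, the snippet below launches a batch of worker VMs with boto3; the endpoint, AMI ID, instance type and counts are placeholders, and the real deployment is driven by the CernVM and Virtual Analysis Facility tooling rather than hand-written API calls.

```python
# Rough illustration of the standard EC2 interface the appliance relies on:
# launch N identical virtual workers from one image. Endpoint, AMI ID and
# instance type are placeholders; credentials come from the environment.
import boto3

N_WORKERS = 8  # scale the virtual HTCondor pool to eight workers

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org",  # any EC2-compatible endpoint
    region_name="default",
)

response = ec2.run_instances(
    ImageId="ami-00000000",     # placeholder CernVM-based image
    InstanceType="m1.large",    # placeholder flavour
    MinCount=N_WORKERS,
    MaxCount=N_WORKERS,
)
ids = [inst["InstanceId"] for inst in response["Instances"]]
print("started workers:", ids)
```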
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and metaprogramming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arc4nix scales linearly in a distributed environment. Arc4nix is open-source software.
Schrader, T; Hufnagl, P; Schlake, W; Dietel, M
2005-01-01
In the autumn a German screening program was started for detecting breast cancer in the population of women aged fifty and above. For the first time in this program, quality assurance rules were established: all statements of the radiologists and pathologists have to be confirmed by a second opinion. This improvement in quality comes with a delay in time and additional expense. A new Telepathology Consultation Service was developed, based on the experiences of the Telepathology Consultation Center of the UICC, to speed up the second-opinion process. The completely web-based service runs under MS Windows 2003 Server, with the Internet Information Server as web server and the SQL Server (both Microsoft) as the database. The websites, forms and control mechanisms have been coded in ASP scripts and JavaScript. A study to evaluate the effectiveness of telepathological consultation in comparison to conventional consultation has been carried out. Pathologists of the Professional Association of German Pathologists took part, both as requesting pathologists and as consultants for other participants. The quality of telepathological diagnosis was comparable to conventional diagnosis. Telepathology allows a faster response by 1 to 2 days (the conventional postal delay). The time to prepare a telepathology request is about twice that of a conventional one. This ratio may be inverted by an interface between the Pathology Information System and the Telepathology Server and the use of virtual microscopy. The Telepathology Consultation Service of the Professional Association of German Pathologists is a fast and effective German-language, internet-based service for obtaining a second opinion.
Computational Prediction of the Immunomodulatory Potential of RNA Sequences.
Nagpal, Gandharva; Chaudhary, Kumardeep; Dhanda, Sandeep Kumar; Raghava, Gajendra Pal Singh
2017-01-01
Advances in the knowledge of various roles played by non-coding RNAs have stimulated the application of RNA molecules as therapeutics. Among these molecules, miRNA, siRNA, and CRISPR-Cas9 associated gRNA have been identified as the most potent RNA molecule classes with diverse therapeutic applications. One of the major limitations of RNA-based therapeutics is immunotoxicity of RNA molecules as it may induce the innate immune system. In contrast, RNA molecules that are potent immunostimulators are strong candidates for use in vaccine adjuvants. Thus, it is important to understand the immunotoxic or immunostimulatory potential of these RNA molecules. The experimental techniques for determining immunostimulatory potential of siRNAs are time- and resource-consuming. To overcome this limitation, recently our group has developed a web-based server "imRNA" for predicting the immunomodulatory potential of RNA sequences. This server integrates a number of modules that allow users to perform various tasks including (1) generation of RNA analogs with reduced immunotoxicity, (2) identification of highly immunostimulatory regions in RNA sequence, and (3) virtual screening. This server may also assist users in the identification of minimum mutations required in a given RNA sequence to minimize its immunomodulatory potential that is required for designing RNA-based therapeutics. Besides, the server can be used for designing RNA-based vaccine adjuvants as it may assist users in the identification of mutations required for increasing immunomodulatory potential of a given RNA sequence. In summary, this chapter describes major applications of the "imRNA" server in designing RNA-based therapeutics and vaccine adjuvants (http://www.imtech.res.in/raghava/imrna/).
The transition to digital media in biocommunications.
Lynch, P J
1996-01-01
As digital audiovisual media become dominant in biomedical communications, the skills of human interface design and the technology of client-server multimedia data networks will underlie and influence virtually every aspect of biocommunications professional practice. The transition to digital communications media will require financial, organizational, and professional changes in current biomedical communications departments, and will require a multi-disciplinary approach that will blur the boundaries of the current biocommunications professions.
2016-03-01
...external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata
Real-Time Speaker Detection for User-Device Binding
2010-12-01
The roll-out of commercial wireless networks continues to rise worldwide...in a secured facility. It could also be connected to the call server via a Virtual Private Network (VPN) or public lines if security is not a top...communications network [25]. Yet, James Arden Barnett, Jr., Chief of the Public Safety and Homeland Security Bureau, argues that emergency communications
Source Code Analysis Laboratory (SCALe)
2012-04-01
True Positives (TP) versus Flagged Nonconformities (FNC), by software system (TP/FNC, ratio): Mozilla Firefox version 2.0, 6/12, 50%; Linux kernel version 2.6.15, 10/126, 8%...is inappropriately tuned for analysis of the Linux kernel, which has anomalous results. Customizing SCALe to work with software for a particular...servers support a collection of virtual machines (VMs) that can be configured to support analysis in various environments, such as Windows XP and Linux. A
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
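As a quick illustration of the "power of two choices" solution listed above, the following is a minimal Python sketch of the placement rule; the node and item counts are arbitrary, and this is not code from any particular P2P system.

```python
import random

def place_item(loads, rng):
    """Power of two choices: sample two candidate nodes uniformly
    and assign the item to the less loaded one."""
    a, b = rng.sample(range(len(loads)), 2)
    chosen = a if loads[a] <= loads[b] else b
    loads[chosen] += 1
    return chosen

# Compare against purely random placement over 100,000 items.
n_nodes, n_items = 1000, 100_000
two_choice = [0] * n_nodes
one_choice = [0] * n_nodes
rng = random.Random(7)
for _ in range(n_items):
    place_item(two_choice, rng)
    one_choice[rng.randrange(n_nodes)] += 1

print("max load, two choices:", max(two_choice))
print("max load, one choice: ", max(one_choice))
```

The two-choice rule typically drives the maximum load close to the mean, which is the intuition behind its use for address-space and item balancing.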
Alexander, Nathan; Woetzel, Nils; Meiler, Jens
2011-02-01
Clustering algorithms are used as data analysis tools in a wide variety of applications in Biology. Clustering has become especially important in protein structure prediction and virtual high throughput screening methods. In protein structure prediction, clustering is used to structure the conformational space of thousands of protein models. In virtual high throughput screening, databases with millions of drug-like molecules are organized by structural similarity, e.g. common scaffolds. The tree-like dendrogram structure obtained from hierarchical clustering can provide a qualitative overview of the results, which is important for focusing detailed analysis. However, in practice it is difficult to relate specific components of the dendrogram directly back to the objects of which it is comprised and to display all desired information within the two dimensions of the dendrogram. The current work presents a hierarchical agglomerative clustering method termed bcl::Cluster. bcl::Cluster utilizes the Pymol Molecular Graphics System to graphically depict dendrograms in three dimensions. This allows simultaneous display of relevant biological molecules as well as additional information about the clusters and the members comprising them.
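For readers unfamiliar with agglomerative clustering, here is a minimal sketch using SciPy's hierarchical clustering on toy data; it illustrates only the linkage-and-cut idea and is unrelated to the bcl::Cluster implementation or its PyMOL-based dendrogram rendering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-ins for protein models: 20 points in a 3-D "feature space".
rng = np.random.default_rng(0)
models = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(5, 1, (10, 3))])

# Agglomerative clustering with average linkage (one of several choices).
Z = linkage(models, method="average")

# Cut the dendrogram into two clusters and report the member labels.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```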
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
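A minimal sketch of the similarity-as-activation idea, using the Tanimoto coefficient (one of many similarity coefficients) between binary fingerprints; the fingerprint length, centre fingerprints, and output weights below are illustrative stand-ins, not the authors' trained model.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    both = np.sum(a & b)
    either = np.sum(a | b)
    return both / either if either else 0.0

def hidden_layer(x, centres):
    """Similarity-based activations: one hidden unit per centre fingerprint."""
    return np.array([tanimoto(x, c) for c in centres])

rng = np.random.default_rng(1)
centres = rng.integers(0, 2, (5, 1024))  # e.g. clustering-derived weight fingerprints
query = rng.integers(0, 2, 1024)         # query molecule fingerprint (ECFP-like)
beta = rng.normal(size=5)                # output weights (solved by least squares in ELM)
print(hidden_layer(query, centres) @ beta)
```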
Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer
NASA Astrophysics Data System (ADS)
Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien
2016-04-01
Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind (e.g. Hogan, 2011) virtual globe as visualisation engine, and the array database Rasdaman Community Edition as core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on https://github.com/planetserver References: Baumann, P., et al. (2015) Big Data Analytics for Earth Sciences: the EarthServer approach, International Journal of Digital Earth, doi: 10.1080/17538947.2014.1003106. Cantini, F. et al. (2014) Geophys. Res. Abs., Vol. 16, #EGU2014-3784. Gaddis, L., and T. Hare (2015), Status of tools and data for planetary research, Eos, 96, doi: 10.1029/2015EO041125. Hogan, P., 2011. NASA World Wind: Infrastructure for Spatial Data. Technical report. Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications ACM. Oosthoek, J.H.P, et al. (2013) Advances in Space Research. doi: 10.1016/j.asr.2013.07.002. Rossi, A. P., et al. (2014) PlanetServer/EarthServer: Big Data analytics in Planetary Science. Geophysical Research Abstracts, Vol. 16, #EGU2014-5149.
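A hedged sketch of how a WCPS band-ratio query might be submitted over HTTP, assuming a rasdaman/petascope-style key-value endpoint; the endpoint URL, coverage name, and band names below are hypothetical placeholders, not actual PlanetServer identifiers.

```python
import requests

# Hypothetical petascope endpoint and coverage; real identifiers differ.
ENDPOINT = "http://example.org/rasdaman/ows"

# A WCPS query computing a simple spectral band ratio over a coverage.
wcps = """
for c in (mars_crism_cube)
return encode(
  (float)(c.band_233 - c.band_13) / (c.band_233 + c.band_13),
  "png")
"""

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps,
})
with open("index.png", "wb") as f:
    f.write(resp.content)
```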
Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure
NASA Astrophysics Data System (ADS)
Antolik, L.; Shiro, B.; Friberg, P. A.
2016-12-01
The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) virtualization of AQMS physical servers; 2) migration of server operating systems from Solaris to Linux; 3) consolidation of AQMS real-time and post-processing services to a single server; 4) upgrading the database from Oracle 10 to Oracle 12; and 5) upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.
NASA Astrophysics Data System (ADS)
Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.
2014-06-01
In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool, which is currently in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and to offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on an integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system features: a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configurations; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability in the INFN-T1 environment.
Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.
Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn
2015-03-20
USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user-friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single-step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.
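AMUSER's own primer design rules are not reproduced here; as a minimal sketch of one ingredient of primer checking, the following computes the Wallace-rule melting temperature and GC fraction for a candidate primer. The sequence is made up, and USER primers additionally carry a single uracil that is excised by the USER enzyme mix.

```python
def wallace_tm(primer: str) -> float:
    """Wallace rule: Tm ~ 2(A+T) + 4(G+C), a rough estimate for short primers."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_fraction(primer: str) -> float:
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

primer = "AGGTCAUTCGGCATTGCA"   # hypothetical USER primer with one uracil (U)
dna = primer.replace("U", "T")  # evaluate the DNA equivalent
print(wallace_tm(dna), gc_fraction(dna))
```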
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markidis, S.; Rizwan, U.
The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators could experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that 3D models exhibit static textures, and they are not fully interactive and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture with the output and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded by a laptop or tablet PC added to the virtual environment. (authors)
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie
2017-01-01
With the continuous opening of resource acquisition and application, there is a large variety of network hardware appliances deployed as the communication infrastructure. Launching a new network application always implies replacing obsolete devices and providing the space and power to accommodate new ones, which increases the energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem, and gives an implementation architecture of NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON) with the procedure of vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.
Collective translational and rotational Monte Carlo cluster move for general pairwise interaction
NASA Astrophysics Data System (ADS)
Růžička, Štěpán; Allen, Michael P.
2014-09-01
Virtual move Monte Carlo is a cluster algorithm originally developed for strongly attractive colloidal, molecular, or atomistic systems in order both to approximate the collective dynamics and to avoid sampling of unphysical kinetic traps. In this paper, we present the algorithm in a form which selects the moving cluster through a wider class of virtual states and which is applicable to general pairwise interactions, including hard-core repulsion. The newly proposed way of selecting the cluster increases the acceptance probability by up to several orders of magnitude, especially for rotational moves. The results have applications in simulations of systems interacting via anisotropic potentials, both to enhance the sampling of the phase space and to approximate the dynamics.
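The full virtual move Monte Carlo acceptance rule is involved; as a baseline for comparison, here is a minimal sketch of the standard single-particle Metropolis step for a general pairwise potential (Lennard-Jones as a stand-in), which the cluster move generalizes. All parameters are illustrative.

```python
import math
import random

def pair_energy(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a stand-in general pairwise interaction."""
    s6 = (sigma * sigma / r2) ** 3
    return 4 * eps * (s6 * s6 - s6)

def particle_energy(pos, i):
    """Interaction energy of particle i with all others (no periodic images)."""
    return sum(
        pair_energy(sum((a - b) ** 2 for a, b in zip(pos[i], p)))
        for j, p in enumerate(pos) if j != i
    )

def metropolis_step(pos, beta=1.0, delta=0.1, rng=random):
    """Single-particle Metropolis move with the standard acceptance rule."""
    i = rng.randrange(len(pos))
    old, e_old = pos[i], particle_energy(pos, i)
    pos[i] = tuple(c + rng.uniform(-delta, delta) for c in old)
    d_e = particle_energy(pos, i) - e_old
    if rng.random() >= math.exp(min(0.0, -beta * d_e)):
        pos[i] = old  # reject: restore the old position

random.seed(3)
positions = [tuple(random.uniform(0, 5) for _ in range(3)) for _ in range(10)]
for _ in range(1000):
    metropolis_step(positions)
```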
Software platform virtualization in chemistry research and university teaching
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
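A minimal sketch of the kind of timing harness behind such a benchmark comparison; the workload, repeat count, and the 7% placeholder figure are illustrative only, since a real measurement would run the same package natively and inside the virtual machine.

```python
import time

def benchmark(fn, repeats=5):
    """Median wall-clock time of fn() over several repeats."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

def workload():
    # Stand-in compute kernel; substitute the real chemistry package run.
    sum(i * i for i in range(2_000_000))

native = benchmark(workload)   # measured on the host
virtual = native * 1.07        # placeholder: measure inside the VM instead
print(f"virtualization penalty: {100 * (virtual / native - 1):.1f}%")
```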
NASA Astrophysics Data System (ADS)
Drwal, Malgorzata N.; Agama, Keli; Pommier, Yves; Griffith, Renate
2013-12-01
Purely structure-based pharmacophores (SBPs) are an alternative method to ligand-based approaches and have the advantage of describing the entire interaction capability of a binding pocket. Here, we present the development of SBPs for topoisomerase I, an anticancer target with an unusual ligand binding pocket consisting of protein and DNA atoms. Different approaches to cluster and select pharmacophore features are investigated, including hierarchical clustering and energy calculations. In addition, the performance of SBPs is evaluated retrospectively and compared to the performance of ligand- and complex-based pharmacophores. SBPs emerge as a valid method in virtual screening and a complementary approach to ligand-focussed methods. The study further reveals that the choice of pharmacophore feature clustering and selection methods has a large impact on the virtual screening hit lists. A prospective application of the SBPs in virtual screening reveals that they can be used successfully to identify novel topoisomerase inhibitors.
Building an organic block storage service at CERN with Ceph
NASA Astrophysics Data System (ADS)
van der Ster, Daniel; Wiebalck, Arne
2014-06-01
Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
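A minimal sketch of talking to a Ceph cluster through the librados Python bindings (the rados module shipped with Ceph); the pool name, object name, and configuration path are assumptions, not CERN's setup.

```python
import rados

# Connect using a standard Ceph configuration file (path is an assumption).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # an existing pool, e.g. the RBD pool
    ioctx.write_full("hello-object", b"stored and replicated by RADOS")
    print(ioctx.read("hello-object"))
    ioctx.close()
finally:
    cluster.shutdown()
```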
Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.
Trudgian, David C; Mirzaei, Hamid
2012-12-07
We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
2DRMP: A suite of two-dimensional R-matrix propagation codes
NASA Astrophysics Data System (ADS)
Scott, N. S.; Scott, M. P.; Burke, P. G.; Stitt, T.; Faro-Maza, V.; Denis, C.; Maniopoulou, A.
2009-12-01
The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years a series of related R-matrix program packages have been published periodically in CPC. These packages are primarily concerned with low-energy scattering where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies. Program summaryProgram title: 2DRMP Catalogue identifier: AEEA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 196 717 No. of bytes in distributed program, including test data, etc.: 3 819 727 Distribution format: tar.gz Programming language: Fortran 95, MPI Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3] Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3] Has the code been vectorised or parallelised?: Yes. 16 cores were used for small test run Classification: 2.4 External routines: BLAS, LAPACK, PBLAS, ScaLAPACK Subprograms used: ADAZ_v1_1 Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies. Solution method: Two-dimensional R-matrix propagation theory. The (r,r) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region. On the boundary of the internal region ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems enabling the internal region to be extended far beyond that which is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically and the R-matrix basis states are constructed to allow for both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available. Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions. Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4] which is an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files. Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s). References:HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009. 
HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009. HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009. Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.
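For orientation, the propagation step at the heart of such codes can be written schematically as follows (standard one-dimensional one-sector form, up to sign and notation conventions; 2DRMP generalizes this to subregions of the two-electron radial plane):

```latex
% Schematic R-matrix propagation across one subregion boundary.
\[
  \mathbf{R}_{i+1} \;=\;
  \mathbf{g}_{22}^{(i)} \;-\;
  \mathbf{g}_{21}^{(i)}
  \left( \mathbf{g}_{11}^{(i)} + \mathbf{R}_{i} \right)^{-1}
  \mathbf{g}_{12}^{(i)},
\]
% where the g-blocks are sector Green's-function matrices evaluated at the
% subregion boundaries and R_i is the global R-matrix on the inner boundary.
```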
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
CPU Performance Counter-Based Problem Diagnosis for Software Systems
2009-09-01
application servers and implementation techniques), this thesis only used the Enterprise Java Bean (EJB) SessionBean version of RUBiS. The PHP and Servlet ...collection statistics at the Java Virtual Machine (JVM) level can be reused for any Java application. Other examples of gray-box instrumentation include path...used gray-box approaches. For example, PinPoint [11, 14] and [29] use request tracing to diagnose Java exceptions, endless calls, and null calls in
Perspectives of mobile learning in optics and photonics
NASA Astrophysics Data System (ADS)
Curticapean, Dan; Christ, Andreas; Feißt, Markus
2010-08-01
Mobile learning (m-learning) can be considered as a new paradigm of e-learning. The developed solution enables the presentation of animations and 3D virtual reality (VR) on mobile devices and is well suited for mobile learning. Difficult relations in physics as well as intricate experiments in optics can be visualised on mobile devices without need for a personal computer. By outsourcing the computational power to a server, the coverage is worldwide.
Multipath transport for virtual private networks
2017-03-01
Figure 2.7: Using a Wi-Fi and Cellular Connection; Figure 2.8: OpenVPN Interaction with Kernel (adapted from [14]); Figure 3.1: MPTCP...to enable a client to connect to his corporate offices using a hotel Wi-Fi connection while traveling for business. Maybe a small business is...interface of the client to each interface of the server [7]. Figure 2.7 provides a simplified scenario of an MPTCP client with Wi-Fi and cellular
HTTM - Design and Implementation of a Type-2 Hypervisor for MIPS64 Based Systems
NASA Astrophysics Data System (ADS)
Ain, Qurrat ul; Anwar, Usama; Mehmood, Muhammad Amir; Waheed, Abdul
2017-01-01
Virtualization has emerged as an attractive software solution for many problems in the server domain. Recently, it has started to enrich the embedded systems domain by offering features such as hardware consolidation, security, and isolation. Our objective is to bring virtualization to high-end MIPS64-based systems, such as network routers, switches, and wireless base stations. For this purpose a Type-2 hypervisor is a viable software solution which is easy to deploy and requires no changes in the host system. In this paper we present the internal design of HTTM, a Type-2 hypervisor for MIPS64-based systems, and demonstrate its functional correctness using Linux Test Project (LTP) tests. Finally, we performed LMbench tests for performance evaluation.
Working with Planetary-Scale Data in the Cloud
NASA Astrophysics Data System (ADS)
Flasher, J.
2017-12-01
When data is shared on AWS, it can be analyzed using AWS on-demand computing resources quickly and efficiently. Users can work with any amount of data without needing to download it or store their own copies. When heavy data like imagery, genomics data, or volumes of sensor data are available in AWS's cloud, the time required to copy the data to a virtual server for analysis is virtually eliminated. AWS's global infrastructure allows data providers to make their data available worldwide and ensure quick access to critical data from anywhere. In this session, we will share lessons learned from our experience supporting a global community of entrepreneurs, students and researchers by making petabytes of data freely available for anyone to use in the cloud.
Launching large computing applications on a disk-less cluster
NASA Astrophysics Data System (ADS)
Schwemmer, Rainer; Caicedo Carvajal, Juan Manuel; Neufeld, Niko
2011-12-01
The LHCb Event Filter Farm system is based on a cluster of on the order of 1,500 disk-less Linux nodes. Each node runs one instance of the filtering application per core. The number of cores in our current production environment is 8 per machine for the old cluster and 12 per machine on the extension of the cluster. Each instance has to load about 1,000 shared libraries, weighing 200 MB, from several directory locations in a central repository. The repository is currently hosted on a SAN and exported via NFS. The libraries are all available in the local file system cache on every node. Loading a library still causes a huge number of requests to the server though, because the loader will try to probe every available path. Measurements show there are between 100,000 and 200,000 calls per application instance start-up. Multiplied by the number of cores in the farm, this translates into a veritable DDoS attack on the servers, which lasts several minutes. Since the application is restarted frequently, a better solution had to be found. Rolling out the software to the nodes is out of the question, because they have no disks and the software in its entirety is too large to put into a RAM disk. To solve this problem we developed a FUSE-based file system which acts as a permanent, controllable cache that keeps the essential files in stock.
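A minimal sketch of a read-through caching filesystem in the spirit described, using the fusepy bindings (pip install fusepy); this is not the LHCb implementation, and the whole-file in-memory caching policy, source path handling, and mount options are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Read-through caching filesystem sketch: mounts a source directory
read-only and keeps file contents in RAM after the first read."""
import errno
import os
import sys

from fuse import FUSE, FuseOSError, Operations

class ReadCacheFS(Operations):
    def __init__(self, source):
        self.source = source   # e.g. an NFS-mounted software repository
        self.cache = {}        # path -> file contents kept in RAM

    def _real(self, path):
        return os.path.join(self.source, path.lstrip("/"))

    def getattr(self, path, fh=None):
        try:
            st = os.lstat(self._real(path))
        except OSError:
            raise FuseOSError(errno.ENOENT)
        return {k: getattr(st, k) for k in
                ("st_mode", "st_size", "st_uid", "st_gid",
                 "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._real(path))

    def read(self, path, size, offset, fh):
        if path not in self.cache:   # first access populates the cache
            with open(self._real(path), "rb") as f:
                self.cache[path] = f.read()
        return self.cache[path][offset:offset + size]

if __name__ == "__main__":
    # usage: readcachefs.py <source-dir> <mountpoint>
    FUSE(ReadCacheFS(sys.argv[1]), sys.argv[2], foreground=True, ro=True)
```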
Baun, Christian
2016-01-01
Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that makes them mobile and easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the cluster's performance and energy efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the system's total main memory utilized.
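For reference, Linpack performance and energy efficiency are simple arithmetic once the benchmark finishes; the sketch below uses the standard (2/3)N^3 + 2N^2 operation count with made-up numbers, not measurements from the paper.

```python
def hpl_gflops(n: int, seconds: float) -> float:
    """HPL performance from problem size N and runtime,
    using the standard operation count (2/3)N^3 + 2N^2."""
    return ((2 / 3) * n**3 + 2 * n**2) / seconds / 1e9

# Illustrative numbers only: problem size, runtime, and power draw.
n, seconds, watts = 10_000, 350.0, 20.0
gflops = hpl_gflops(n, seconds)
print(f"{gflops:.2f} GFLOPS, {gflops / watts:.3f} GFLOPS per watt")
```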
LVC interaction within a mixed-reality training system
NASA Astrophysics Data System (ADS)
Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio
2012-03-01
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and custom-developed game engines. Evaluation involving military-trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.
NASA Astrophysics Data System (ADS)
Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.
2017-10-01
The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.
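A toy sketch of the selection idea described for the prototype Decision Engine: choose the cheapest viable (availability zone, instance type) pair per core among candidates that satisfy the job's requirements. The candidate table and prices are invented for illustration, not AWS prices.

```python
# Illustrative candidate table; real input would come from spot/on-demand
# price feeds and interruption statistics.
candidates = [
    {"zone": "us-east-1a", "type": "c4.4xlarge",  "cores": 16, "price": 0.30},
    {"zone": "us-east-1b", "type": "c4.4xlarge",  "cores": 16, "price": 0.26},
    {"zone": "us-west-2a", "type": "m4.10xlarge", "cores": 40, "price": 0.90},
]

def choose(candidates, min_cores):
    """Pick the viable candidate with the lowest price per core."""
    viable = [c for c in candidates if c["cores"] >= min_cores]
    return min(viable, key=lambda c: c["price"] / c["cores"]) if viable else None

print(choose(candidates, min_cores=16))
```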
Software architecture and design of the web services facilitating climate model diagnostic analysis
NASA Astrophysics Data System (ADS)
Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.
2015-12-01
Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called the Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our CMDA tool has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
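A minimal sketch of the wrapping approach described, exposing a stand-in science function through Flask; the route, parameters, and function below are hypothetical placeholders, not CMDA's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_diagnostic(variable: str, start: str, end: str) -> dict:
    """Stand-in for an existing science application invoked by the wrapper."""
    return {"variable": variable, "start": start, "end": end, "status": "ok"}

@app.route("/diagnostic", methods=["GET"])
def diagnostic():
    # Query parameters map onto the wrapped function's arguments.
    result = run_diagnostic(
        request.args.get("variable", "T2m"),
        request.args.get("start", "2000-01"),
        request.args.get("end", "2010-12"),
    )
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # served behind Gunicorn/Tornado in production
```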
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584
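The abstract's "blooming filter" is commonly known as a Bloom filter; below is a minimal sketch of one, as it might be used to flag possibly-duplicate objects before contacting the cloud server. The bit-array size and hashing scheme are illustrative choices, not the paper's design.

```python
import hashlib

class BloomFilter:
    """Small Bloom filter: set membership with false positives but no
    false negatives, useful for cheap duplicate detection."""
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add(b"block-001")
print(bf.might_contain(b"block-001"), bf.might_contain(b"block-002"))
```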
Xu, Youjun; Wang, Shiwei; Hu, Qiwan; Gao, Shuaishi; Ma, Xiaomin; Zhang, Weilin; Shen, Yihang; Chen, Fangjin; Lai, Luhua; Pei, Jianfeng
2018-05-10
CavityPlus is a web server that offers protein cavity detection and various functional analyses. Using protein three-dimensional structural information as the input, CavityPlus applies CAVITY to detect potential binding sites on the surface of a given protein structure and rank them based on ligandability and druggability scores. These potential binding sites can be further analysed using three submodules, CavPharmer, CorrSite, and CovCys. CavPharmer uses a receptor-based pharmacophore modelling program, Pocket, to automatically extract pharmacophore features within cavities. CorrSite identifies potential allosteric ligand-binding sites based on motion correlation analyses between cavities. CovCys automatically detects druggable cysteine residues, which is especially useful to identify novel binding sites for designing covalent allosteric ligands. Overall, CavityPlus provides an integrated platform for analysing comprehensive properties of protein binding cavities. Such analyses are useful for many aspects of drug design and discovery, including target selection and identification, virtual screening, de novo drug design, and allosteric and covalent-binding drug design. The CavityPlus web server is freely available at http://repharma.pku.edu.cn/cavityplus or http://www.pkumdl.cn/cavityplus.
BlueSky Cloud - rapid infrastructure capacity using Amazon's Cloud for wildfire emergency response
NASA Astrophysics Data System (ADS)
Haderman, M.; Larkin, N. K.; Beach, M.; Cavallaro, A. M.; Stilley, J. C.; DeWinter, J. L.; Craig, K. J.; Raffuse, S. M.
2013-12-01
During peak fire season in the United States, many large wildfires often burn simultaneously across the country. Smoke from these fires can produce air quality emergencies. It is vital that incident commanders, air quality agencies, and public health officials have smoke impact information at their fingertips for evaluating where fires and smoke are and where the smoke will go next. To address the need for this kind of information, the U.S. Forest Service AirFire Team created the BlueSky Framework, a modeling system that predicts concentrations of particle pollution from wildfires. During emergency response, decision makers use BlueSky predictions to make public outreach and evacuation decisions. The models used in BlueSky predictions are computationally intensive, and the peak fire season requires significantly more computer resources than off-peak times. Purchasing enough hardware to run the number of BlueSky Framework runs that are needed during fire season is expensive and leaves idle servers running the majority of the year. The AirFire Team and STI developed BlueSky Cloud to take advantage of Amazon's virtual servers hosted in the cloud. With BlueSky Cloud, as demand increases and decreases, servers can be easily spun up and spun down at a minimal cost. Moving standard BlueSky Framework runs into the Amazon Cloud made it possible for the AirFire Team to rapidly increase the number of BlueSky Framework instances that could be run simultaneously without the costs associated with purchasing and managing servers. In this presentation, we provide an overview of the features of BlueSky Cloud, describe how the system uses Amazon Cloud, and discuss the costs and benefits of moving from privately hosted servers to a cloud-based infrastructure.
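A minimal sketch of spinning capacity up and down with boto3, AWS's Python SDK; the AMI ID, instance type, region, and tags are placeholders, not the BlueSky Cloud configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def spin_up(n):
    """Launch n worker instances from a prebuilt image."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical prebuilt worker image
        InstanceType="c5.2xlarge",
        MinCount=n, MaxCount=n,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "bluesky-worker"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def spin_down(instance_ids):
    """Release capacity when demand drops."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# Example usage (requires AWS credentials):
# ids = spin_up(4)   # scale out during peak fire season
# spin_down(ids)     # scale back in afterwards
```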
Research on a Method of Geographical Information Service Load Balancing
NASA Astrophysics Data System (ADS)
Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao
2018-05-01
With the development of geographical information service technologies, how to achieve intelligent scheduling and highly concurrent access to geographical information service resources through load balancing is a focus of current study. This paper presents a dynamic load-balancing algorithm. In the algorithm, each type of geographical information service is matched with a corresponding server group; the RED algorithm is then combined with a double-threshold method to judge the load state of each server node; finally, the service is scheduled by weighted probability over a given period. Lastly, an experimental system built on a server cluster demonstrates the effectiveness of the method presented in this paper.
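A hedged sketch of the scheduling idea described above, combining a double load threshold with weighted probabilistic selection. The threshold values and the linear weight discount are assumptions; the paper's tuned parameters are not given here.

```python
# Illustrative double-threshold + weighted-probability scheduler:
# nodes below the lower threshold get full weight, nodes between the
# thresholds get a linearly discounted weight, and overloaded nodes are
# excluded. Thresholds and weights are assumptions, not the paper's values.
import random

LOW, HIGH = 0.6, 0.9  # load thresholds (fraction of capacity)

def pick_node(nodes):
    """nodes: list of (name, load) pairs for one service-type server group."""
    weights = []
    for name, load in nodes:
        if load < LOW:
            weights.append(1.0)                            # lightly loaded
        elif load < HIGH:
            weights.append((HIGH - load) / (HIGH - LOW))   # discounted
        else:
            weights.append(0.0)                            # excluded
    if not any(weights):
        raise RuntimeError("all nodes overloaded")
    return random.choices([n for n, _ in nodes], weights=weights, k=1)[0]

print(pick_node([("gis-1", 0.35), ("gis-2", 0.75), ("gis-3", 0.95)]))
```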
Venkatesan, Santhosh K.; Dubey, Vikash Kumar
2012-01-01
Structure-based virtual screening of NCI Diversity Set II compounds was performed to identify novel inhibitor scaffolds of trypanothione reductase (TR) from Leishmania infantum. The top 50 ranked hits were clustered using the AuPoSOM tool. The majority of the top-ranked compounds were tricyclic. Clustering of hits yielded four major clusters, each comprising a varying number of subclusters differing in their mode of binding and orientation in the active site. Moreover, for the first time, we report selected alkaloids and dibenzothiazepines as inhibitors of Leishmania infantum TR. The modes of binding observed among the clusters also point to the probable in vitro inhibition kinetics and aid in defining key interactions which might contribute to the inhibition of the enzymatic reduction of T[S]2. The method provides scope for automation and integration into virtual screening pipelines employing docking software, for clustering small-molecule inhibitors based upon protein-ligand interactions. PMID:22550471
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool designed for enterprise-class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services; change management requiring authorized approval of production changes; a complete version-controlled history of all changes made; separation of production, testing, and development systems using Puppet environments; semi-automated server inventory using GLPI; and configuration change monitoring and reporting using the Puppet dashboard. We also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
Interactive Machine Learning at Scale with CHISSL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Grace, Emily A.; Volkova, Svitlana
We demonstrate CHISSL, a scalable client-server system for real-time interactive machine learning. Our system is capable of incorporating user feedback incrementally and immediately without a structured or pre-defined prediction task. Computation is partitioned between a lightweight web client and a heavyweight server. The server relies on representation learning and agglomerative clustering to learn a dendrogram, a hierarchical approximation of a representation space. The client uses only this dendrogram to incorporate user feedback into the model via transduction. Distances and predictions for each unlabeled instance are updated incrementally and deterministically, with O(n) space and time complexity. Our algorithm is implemented in a functional prototype, designed to be easy to use by non-experts. The prototype organizes the large amounts of data into recommendations. This allows the user to interact with actual instances by dragging and dropping to provide feedback in an intuitive manner. We applied CHISSL to several domains including cyber, social media, and geo-temporal analysis.
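The dendrogram-plus-transduction loop can be sketched as follows. This toy version, assuming precomputed numeric representations, transduces labels directly in the representation space, whereas CHISSL routes the update through the learned dendrogram; the data and feedback below are placeholders.

```python
# Minimal sketch of the CHISSL-style workflow: a dendrogram learned
# server-side, and O(n)-per-update label transduction on feedback.
import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.random.rand(200, 16)          # stand-in for learned representations
Z = linkage(X, method="average")     # server-side hierarchical approximation

# Client-side transduction: each user label updates, for every unlabeled
# instance, its distance to the nearest labeled instance -- O(n) per update.
best_dist = np.full(len(X), np.inf)
pred = np.full(len(X), -1)

def add_feedback(idx: int, label: int):
    d = np.linalg.norm(X - X[idx], axis=1)
    closer = d < best_dist
    best_dist[closer] = d[closer]
    pred[closer] = label

add_feedback(3, label=0)   # user drags instance 3 into group 0
add_feedback(42, label=1)  # and instance 42 into group 1
```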
Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz
2016-03-01
Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of the virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through a head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, a higher predisposition to immersion and reduced cybersickness symptoms in the VRE, than the console gamer and nongamer profiles. These findings can be a useful orientation in clinical practice and future research, as they help identify which users are more predisposed to benefit from immersive VREs.
Experimental Internet Environment Software Development
NASA Technical Reports Server (NTRS)
Maddux, Gary A.
1998-01-01
Geographically distributed project teams need an Internet-based collaborative work environment or "Intranet." The Virtual Research Center (VRC) is an experimental Intranet server that combines several services such as desktop conferencing, file archives, on-line publishing, and security. Using the World Wide Web (WWW) as a shared-space paradigm, the Graphical User Interface (GUI) presents users with images of a lunar colony. Each project has a wing of the colony, and each wing has a conference room, library, laboratory, and mail station. In FY95, the VRC development team proved the feasibility of this shared-space concept by building a prototype using a Netscape commerce server and several public-domain programs. Successful demonstrations of the prototype resulted in approval for a second phase. Phase 2, documented by this report, will produce a seamlessly integrated environment by introducing new technologies such as Java and Adobe Web Links to replace less efficient interface software.
A Practical Approach to Spatiotemporal Data Compression
NASA Astrophysics Data System (ADS)
Prudden, Rachel; Robinson, Niall; Arribas, Alberto
2017-04-01
Datasets representing the world around us are becoming ever more unwieldy as data volumes grow. This is largely due to increased measurement and modelling resolution, but the problem is often exacerbated when data are stored at spuriously high precisions. In an effort to facilitate analysis of these datasets, computationally intensive calculations are increasingly being performed on specialised remote servers before the reduced data are transferred to the consumer. Due to bandwidth limitations, this often means data are displayed as simple 2D data visualisations, such as scatter plots or images. We present here a novel way to efficiently encode and transmit 4D data fields on-demand so that they can be locally visualised and interrogated. This nascent "4D video" format allows us to more flexibly move the boundary between data server and consumer client. However, it has applications beyond purely scientific visualisation, in the transmission of data to virtual and augmented reality.
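The point about spuriously high precision can be made concrete: truncating low-order mantissa bits before applying a general-purpose codec typically shrinks the compressed size substantially. The bit budget and the zlib codec below are illustrative choices, not the authors' encoding.

```python
# Sketch of precision reduction improving compressibility; the field,
# bit budget, and codec are placeholders, not the "4D video" format.
import zlib
import numpy as np

field = np.random.rand(64, 64, 24, 10).astype(np.float32)  # toy 4D field

def truncate_mantissa(a: np.ndarray, keep_bits: int) -> np.ndarray:
    # Zero out low-order mantissa bits of float32 values (23-bit mantissa).
    mask = np.uint32(0xFFFFFFFF) << np.uint32(23 - keep_bits)
    return (a.view(np.uint32) & mask).view(np.float32)

raw = zlib.compress(field.tobytes())
quantised = zlib.compress(truncate_mantissa(field, keep_bits=8).tobytes())
print(len(raw), "->", len(quantised), "bytes")
```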
preAssemble: a tool for automatic sequencer trace data processing.
Adzhubei, Alexei A; Laerdahl, Jon K; Vlasova, Anna V
2006-01-17
Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands. Sequencing quality assessment on various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. The preAssemble pre-assembly sequence processing pipeline has been developed for small- to large-scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under UNIX and Linux operating systems. It is available for download and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. preAssemble allows users to perform quality assessment of sequences generated by automatic sequencing equipment. It is flexible, since both interactive jobs on the preAssemble server and the stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently, preAssemble can be used as efficiently for just a few trace files as for large-scale sequence processing.
Access Control of Web- and Java-Based Applications
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Pajevski, Michael J.
2013-01-01
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus software, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running in Web browsers.
NEOview: Near Earth Object Data Discovery and Query
NASA Astrophysics Data System (ADS)
Tibbetts, M.; Elvis, M.; Galache, J. L.; Harbo, P.; McDowell, J. C.; Rudenko, M.; Van Stone, D.; Zografou, P.
2013-10-01
Missions to Near Earth Objects (NEOs) figure prominently in NASA's Flexible Path approach to human space exploration. NEOs offer insight into both the origins of the Solar System and of life, as well as a source of materials for future missions. With NEOview scientists can locate NEO datasets, explore metadata provided by the archives, and query or combine disparate NEO datasets in the search for NEO candidates for exploration. NEOview is a software system that illustrates how standards-based interfaces facilitate NEO data discovery and research. NEOview software follows a client-server architecture. The server is a configurable implementation of the International Virtual Observatory Alliance (IVOA) Table Access Protocol (TAP), a general interface for tabular data access, that can be deployed as a front end to existing NEO datasets. The TAP client, seleste, is a graphical interface that provides intuitive means of discovering NEO providers, exploring dataset metadata to identify fields of interest, and constructing queries to retrieve or combine data. It features a powerful, graphical query builder capable of easing the user's introduction to table searches. Through science use cases, NEOview demonstrates how potential targets for NEO rendezvous could be identified by combining data from complementary sources. Through deployment and operations, it has been shown that the software components are data independent and configurable to many different data servers. As such, NEOview's TAP server and seleste TAP client can be used to create a seamless environment for data discovery and exploration for tabular data in any astronomical archive.
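Because the server speaks standard IVOA TAP, any TAP client can query it, not just seleste. A sketch using the pyvo Python client follows; the endpoint URL, table, and column names are hypothetical, not the actual NEOview schema.

```python
# Hedged sketch of querying a TAP service like NEOview's from Python;
# the service URL, table, and columns below are placeholders.
import pyvo

service = pyvo.dal.TAPService("https://example.org/neoview/tap")  # placeholder
query = """
    SELECT TOP 20 designation, epoch, a, e, i
    FROM neo.orbits
    WHERE e < 0.3 AND i < 10
"""
for row in service.search(query):
    print(row["designation"], row["a"], row["e"])
```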
González-González, Ana Isabel; Orrego, Carola; Perestelo-Perez, Lilisbeth; Bermejo-Caja, Carlos Jesús; Mora, Nuria; Koatz, Débora; Ballester, Marta; Del Pino, Tasmania; Pérez-Ramos, Jeannet; Toledo-Chavarri, Ana; Robles, Noemí; Pérez-Rivas, Francisco Javier; Ramírez-Puerta, Ana Belén; Canellas-Criado, Yolanda; Del Rey-Granado, Yolanda; Muñoz-Balsa, Marcos José; Becerril-Rojas, Beatriz; Rodríguez-Morales, David; Sánchez-Perruca, Luis; Vázquez, José Ramón; Aguirre, Armando
2017-10-30
Communities of practice are based on the idea that learning involves a group of people exchanging experiences and knowledge. The e-MPODERA project aims to assess the effectiveness of a virtual community of practice aimed at improving primary healthcare professionals' attitudes to the empowerment of patients with chronic diseases. This paper describes the protocol for a cluster randomized controlled trial. We will randomly assign 18 primary-care practices per participating region of Spain (Catalonia, Madrid and the Canary Islands) to a virtual community of practice or to usual training. The primary-care practice will be the randomization unit and the primary healthcare professional will be the unit of analysis. We will need a sample of 270 primary healthcare professionals (general practitioners and nurses) and 1382 patients. We will perform randomization after professionals and patients are selected. We will ask the intervention group to participate for 12 months in a virtual community of practice based on a web 2.0 platform. We will measure the primary outcome using the Patient-Provider Orientation Scale questionnaire administered at baseline and after 12 months. Secondary outcomes will be the sociodemographic characteristics of health professionals, sociodemographic and clinical characteristics of patients, the Patient Activation Measure questionnaire for patient activation, and outcomes regarding use of the virtual community of practice. We will fit a linear mixed-effects regression to estimate the effect of participating in the virtual community of practice. This cluster randomized controlled trial will show whether a virtual intervention for primary healthcare professionals improves attitudes to the empowerment of patients with chronic diseases. ClinicalTrials.gov, NCT02757781. Registered on 25 April 2016. Protocol version: PI15.01, 22 January 2016.
Vinson, Cynthia A
2014-04-24
Translating government-funded cancer research into clinical practice can be accomplished via virtual communities of practice that include key players in the process: researchers, health care practitioners, and intermediaries. This study, conducted from November 2012 through January 2013, examined issues that key stakeholders believed should be addressed to create and sustain government-sponsored virtual communities of practice to integrate cancer control research, practice, and policy and demonstrates how concept mapping can be used to present relevant issues. Key stakeholders brainstormed statements describing what is needed to create and sustain virtual communities of practice for moving cancer control research into practice. Participants rated them on importance and feasibility, selected most relevant statements, and sorted them into clusters. I used concept mapping to examine the issues identified and multidimensional scaling analyses to create a 2-dimensional conceptual map of the statement clusters. Participants selected 70 statements and sorted them into 9 major clusters related to creating and sustaining virtual communities of practice: 1) standardization of best practices, 2) external validity, 3) funding and resources, 4) social learning and collaboration, 5) cooperation, 6) partnerships, 7) inclusiveness, 8) social determinants and cultural competency, and 9) preparing the environment. Researchers, health care practitioners, and intermediaries were in relative agreement regarding issues of importance for creating these communities. Virtual communities of practice can be created to address the needs of researchers, health care practitioners, and intermediaries by using input from these key stakeholders. Increasing linkages between these subgroups can improve the translation of research into practice. Similarities and differences between groups can provide valuable information to assist the government in developing virtual communities of practice.
Send, Robert; Kaila, Ville R I; Sundholm, Dage
2011-06-07
We investigate how the reduction of the virtual space affects coupled-cluster excitation energies at the approximate singles and doubles coupled-cluster level (CC2). In this reduced-virtual-space (RVS) approach, all virtual orbitals above a certain energy threshold are omitted in the correlation calculation. The effects of the RVS approach are assessed by calculations on the two lowest excitation energies of 11 biochromophores using different sizes of the virtual space. Our set of biochromophores consists of common model systems for the chromophores of the photoactive yellow protein, the green fluorescent protein, and rhodopsin. The RVS calculations show that most of the high-lying virtual orbitals can be neglected without significantly affecting the accuracy of the obtained excitation energies. Omitting all virtual orbitals above 50 eV in the correlation calculation introduces errors in the excitation energies that are smaller than 0.1 eV. By using a RVS energy threshold of 50 eV, the CC2 calculations using triple-ζ basis sets (TZVP) on protonated Schiff base retinal are accelerated by a factor of 6. We demonstrate the applicability of the RVS approach by performing CC2/TZVP calculations on the lowest singlet excitation energy of a rhodopsin model consisting of 165 atoms using RVS thresholds between 20 eV and 120 eV. The calculations on the rhodopsin model show that the RVS errors determined in the gas-phase are a very good approximation to the RVS errors in the protein environment. The RVS approach thus renders purely quantum mechanical treatments of chromophores in protein environments feasible and offers an ab initio alternative to quantum mechanics/molecular mechanics separation schemes. © 2011 American Institute of Physics
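The RVS recipe, freezing all virtual orbitals above an energy threshold before the correlated calculation, can be sketched with PySCF. Since PySCF provides EOM-CCSD rather than CC2, the code below illustrates the truncation idea rather than reproducing the authors' method; the molecule and threshold handling are placeholders.

```python
# Hedged RVS-style sketch: omit high-lying virtual orbitals from the
# correlation treatment. EOM-CCSD stands in for CC2 here, and the
# molecule, basis, and 50 eV cutoff mirror the abstract only loosely.
from pyscf import gto, scf, cc

mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="def2-tzvp")
mf = scf.RHF(mol).run()

thresh = 50.0 / 27.2114            # 50 eV converted to Hartree
nocc = mol.nelectron // 2
frozen = [i for i, e in enumerate(mf.mo_energy)
          if i >= nocc and e > thresh]   # virtuals above the cutoff

mycc = cc.CCSD(mf, frozen=frozen).run()
e_exc = mycc.eomee_ccsd_singlet(nroots=2)[0]  # two lowest singlet excitations
print("excitation energies (Hartree):", e_exc)
```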
NASA Astrophysics Data System (ADS)
Tiede, Dirk; Lang, Stefan
2010-11-01
In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction in a camp for internally displaced people (IDP) in Darfur, Sudan, along with innovative means for scientific visualisation of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1) extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer), revealing the analysis results as analytical 3D views transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (Keyhole Markup Language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated dwelling extraction using grid computing techniques are discussed, using examples from a similar study.
Navigating protected genomics data with UCSC Genome Browser in a Box.
Haeussler, Maximilian; Raney, Brian J; Hinrichs, Angie S; Clawson, Hiram; Zweig, Ann S; Karolchik, Donna; Casper, Jonathan; Speir, Matthew L; Haussler, David; Kent, W James
2015-03-01
Genome Browser in a Box (GBiB) is a small virtual machine version of the popular University of California Santa Cruz (UCSC) Genome Browser that can be run on a researcher's own computer. Once GBiB is installed, a standard web browser is used to access the virtual server and add personal data files from the local hard disk. Annotation data are loaded on demand through the Internet from UCSC or can be downloaded to the local computer for faster access. Software downloads and installation instructions are freely available for non-commercial use at https://genome-store.ucsc.edu/. GBiB requires the installation of open-source software VirtualBox, available for all major operating systems, and the UCSC Genome Browser, which is open source and free for non-commercial use. Commercial use of GBiB and the Genome Browser requires a license (http://genome.ucsc.edu/license/). © The Author 2014. Published by Oxford University Press.
Simplified Virtualization in a HEP/NP Environment with Condor
NASA Astrophysics Data System (ADS)
Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.
2012-12-01
In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
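A minimal sketch of booting a transient KVM virtual machine through the libvirt API, in the spirit of the prototype described above; the domain XML (name, image path, sizing) is a placeholder, not the actual worker-node configuration.

```python
# Hedged sketch: start a transient KVM domain via libvirt-python.
# The XML below is a minimal placeholder, not a production definition.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>sl6-worker-01</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/images/sl6-worker.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.createXML(DOMAIN_XML, 0)     # boot a transient domain
print("started:", dom.name())
```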
The Czech National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.
2017-10-01
The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics are given.
Template-free modeling by LEE and LEER in CASP11.
Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models, aided by a community detection method applied to a server-model network. Restraining energy terms generated from the selected templates, together with physical and statistical energy terms, were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server model screening: the average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function, along with our optimization method, guided us to generate models of better quality than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereochemistry compared to LEE models, and they were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Maly, Kurt; Shen, Stewart N. T.; Zubair, Mohammad
1998-01-01
We describe NCSTRL+, a unified, canonical digital library for scientific and technical information (STI). NCSTRL+ is based on the Networked Computer Science Technical Report Library (NCSTRL), a World Wide Web (WWW) accessible digital library (DL) that provides access to over 100 university departments and laboratories. NCSTRL+ implements two new technologies: cluster functionality and publishing buckets. We have extended Dienst, the protocol underlying NCSTRL, to provide the ability to cluster independent collections into a logically centralized digital library based upon subject category classification, type of organization, and genres of material. The bucket construct provides a mechanism for publishing and managing logically linked entities with multiple data forms as a single object. The NCSTRL+ prototype DL contains the holdings of NCSTRL and the NASA Technical Report Server (NTRS). The prototype demonstrates the feasibility of publishing into a multi-cluster DL, searching across clusters, and storing and presenting buckets of information.
Yamashita, Yoshinori; Ogaito, Tatoku
2013-01-01
Our hospital operates an electronic health record system and many departmental subsystems as its hospital information system. As these information systems expanded, the overall system became complicated, and maintenance and operating costs increased. Furthermore, as computerization spreads, an environment in which medical information is available anywhere, at any time, is increasingly demanded, while the expanded use of information requires stronger security measures such as personal data protection. To solve these problems, we rebuilt the hospital information system as a private cloud using virtualization technology. As a result, we were able to reduce the number of servers constituting the system, decrease network traffic, and lower operating costs.
Submorphotypes of the maxillary first molar and their effects on alignment and rotation.
Kim, Hong-Kyun; Kwon, Ho Beom; Hyun, Hong-Keun; Jung, Min-Ho; Han, Seong Ho; Park, Young-Seok
2014-09-01
The aim of this study was to explore the shape differences in maxillary first molars with orthographic measurements using 3-dimensional virtual models to assess whether there is variability in morphology that could affect the alignment results when treated by straight-wire appliance systems. A total of 175 maxillary first molars with 4 cusps were selected for classification. With 3-dimensional laser scanning and reconstruction software, virtual casts were constructed. After performing several linear and angular measurements on the virtual occlusal plane, the teeth were clustered into 2 groups by the method of partitioning around medoids. To visualize the 2 groups, occlusal polygons were constructed using the average data of these groups. The resultant 2 clusters showed statistically significant differences in the measurements describing the cusp locations and the buccal and lingual outlines. The rotation along the centers made the 2 cluster polygons look similar, but there was a difference in the direction of the midsagittal lines. There was considerable variability in morphology according to 2 clusters in the population of this study. The occlusal polygons showed that the outlines of the 2 clusters were similar, but the midsagittal line directions and inner geometries were different. The difference between the morphologies of the 2 clusters could result in occlusal contact differences, which might be considered for better alignment of the maxillary posterior segment. Copyright © 2014 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
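A sketch of the partitioning-around-medoids step, assuming the linear and angular measurements are arranged as a matrix with one row per tooth; it uses KMedoids from the scikit-learn-extra package as a stand-in for whatever PAM implementation the authors used.

```python
# Hedged PAM sketch: two-group clustering of occlusal measurements.
# The toy data and the scikit-learn-extra KMedoids choice are assumptions.
import numpy as np
from sklearn_extra.cluster import KMedoids

# rows = teeth, columns = standardized occlusal measurements (toy data)
X = np.random.rand(175, 12)

pam = KMedoids(n_clusters=2, metric="euclidean", random_state=0).fit(X)
print("cluster sizes:", np.bincount(pam.labels_))
print("medoid teeth (row indices):", pam.medoid_indices_)
```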
Li, Hongtao; Guo, Feng; Zhang, Wenyin; Wang, Jie; Xing, Jinsheng
2018-02-14
The wide use of IoT technologies in healthcare services has raised the level of medical intelligence in those services. However, it also brings potential privacy threats to data collection. In healthcare service systems, health and medical data that contain private information are often transmitted across networks, and such private information should be protected. Therefore, there is a need for a privacy-preserving data collection (PPDC) scheme to protect clients' (patients') data. We adopt the (a,k)-anonymity model as the privacy protection scheme for data collection, and propose a novel anonymity-based PPDC method for healthcare services in this paper. The threat model is analyzed in the client-server-to-user (CS2U) model. On the client side, we utilize the (a,k)-anonymity notion to generate anonymous tuples which can resist possible attacks, and adopt a bottom-up clustering method to create clusters that satisfy a base privacy level of (a1,k1)-anonymity. On the server side, we reduce the communication cost through generalization technology, and compress the (a1,k1)-anonymous data through a UPGMA-based cluster combination method to make the data meet the deeper privacy level of (a2,k2)-anonymity (a1 ≥ a2, k2 ≥ k1). Theoretical analysis and experimental results prove that our scheme is effective in privacy preservation and data quality.
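A toy sketch of the client-side bottom-up clustering step follows: records are greedily grouped into clusters of at least k members so that each cluster can later be generalized into one anonymous tuple. The alpha constraint on sensitive values and the UPGMA-based server-side combination are omitted; k and the data are placeholders.

```python
# Toy bottom-up clustering toward k-anonymity: grow each cluster by
# absorbing the record nearest to the cluster centre until it has k
# members. Simplified relative to the paper (no alpha constraint).
import numpy as np

def bottom_up_k_clusters(records: np.ndarray, k: int):
    remaining = list(range(len(records)))
    clusters = []
    while len(remaining) >= k:
        cluster = [remaining.pop(0)]          # seed a new cluster
        while len(cluster) < k:
            centre = records[cluster].mean(axis=0)
            d = [np.linalg.norm(records[i] - centre) for i in remaining]
            cluster.append(remaining.pop(int(np.argmin(d))))
        clusters.append(cluster)
    if remaining and clusters:
        clusters[-1].extend(remaining)        # fold leftovers into last cluster
    return clusters

data = np.random.rand(20, 3)                  # quasi-identifier attributes
print(bottom_up_k_clusters(data, k=4))
```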
The PARIGA server for real time filtering and analysis of reciprocal BLAST results.
Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna
2013-01-01
BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences over a database to more complex procedures such as clustering or annotation processes. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure, and tools for parsing or filtering BLAST results for different purposes are required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of Python-serialized objects, results can be filtered according to several parameters in real time, without re-running the process and avoiding additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations very fast and easy to use.
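One of the filters mentioned above, reciprocal best hits, is easy to sketch outside the server given two tabular (-outfmt 6) BLAST runs; the file names are placeholders, and PARIGA's serialized-object storage is not reproduced.

```python
# Reciprocal-best-hits sketch over two tabular BLAST result files;
# assumes standard -outfmt 6 columns with bitscore last.
def best_hits(path):
    best = {}
    with open(path) as fh:
        for line in fh:
            query, target, *rest = line.split("\t")
            bitscore = float(rest[-1])
            if query not in best or bitscore > best[query][1]:
                best[query] = (target, bitscore)
    return {q: t for q, (t, _) in best.items()}

a_to_b = best_hits("setA_vs_setB.tsv")   # placeholder file names
b_to_a = best_hits("setB_vs_setA.tsv")
rbh = [(a, b) for a, b in a_to_b.items() if b_to_a.get(b) == a]
print(f"{len(rbh)} reciprocal best hit pairs")
```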
NASA Astrophysics Data System (ADS)
Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.
2010-12-01
The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and the client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified to present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40 Hz to strong motion @ 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyberinfrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
Workload Characterization and Performance Implications of Large-Scale Blog Servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho
With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access to blog articles does not show temporal locality, but is strongly biased towards articles posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
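The distributional claim in (i) can be checked with a few lines of SciPy; the sketch below fits a log-normal to article transfer sizes and runs a Kolmogorov-Smirnov test, using synthetic sizes in place of the proprietary logs.

```python
# Fit a log-normal to (synthetic) blog-article transfer sizes and test
# the fit; the generating parameters here are illustrative only.
from scipy import stats

sizes = stats.lognorm.rvs(s=1.2, scale=30_000, size=10_000, random_state=0)

shape, loc, scale = stats.lognorm.fit(sizes, floc=0)  # fix loc at 0
ks = stats.kstest(sizes, "lognorm", args=(shape, 0, scale))
print(f"sigma={shape:.2f}, median={scale:.0f} bytes, KS p-value={ks.pvalue:.3f}")
```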
New Additions to the ClusPro Server Motivated by CAPRI
Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E.; Xia, Bing; Hall, David R.; Kozakov, Dima
2016-01-01
The heavily used protein-protein docking server ClusPro performs three computational steps: (1) rigid-body docking, (2) RMSD-based clustering of the 1000 lowest-energy structures, and (3) removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro: (1) accounting for Small Angle X-ray Scattering (SAXS) data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, the results convinced us that further development is needed for docking homology models. Finally, we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. PMID:27936493
InterProSurf: a web server for predicting interacting sites on protein surfaces
Negi, Surendra S.; Schein, Catherine H.; Oezguen, Numan; Power, Trevor D.; Braun, Werner
2009-01-01
A new web server, InterProSurf, predicts amino acid residues in proteins that are most likely to interact with other proteins, given the 3D structures of subunits of a protein complex. The prediction method is based on the solvent-accessible surface area of residues in the isolated subunits, a propensity scale for interface residues, and a clustering algorithm to identify surface regions with residues of high interface propensities. Here we illustrate the application of InterProSurf to determine which areas of Bacillus anthracis toxins and the measles virus hemagglutinin protein interact with their respective cell surface receptors. The computationally predicted regions overlap with those regions previously identified as interface regions by sequence analysis and mutagenesis experiments. PMID:17933856
Rocket Science for the Internet
NASA Technical Reports Server (NTRS)
2000-01-01
Rainfinity, a company resulting from the commercialization of the Reliable Array of Independent Nodes (RAIN) technology, produces the product Rainwall. Rainwall runs on a cluster of computer workstations, creating a distributed Internet gateway. When Rainwall detects a failure in software or hardware, traffic is shifted to a healthy gateway without interruption to Internet service. It also distributes the workload more evenly across servers, providing less downtime.
On-line interactive virtual experiments on nanoscience
NASA Astrophysics Data System (ADS)
Kadar, Manuella; Ileana, Ioan; Hutanu, Constantin
2009-01-01
This paper is an overview of a next-generation web approach that allows students to carry out virtual experiments on nanoscience, physics devices, processes, and processing equipment. Virtual reality is used to support a real university lab in which a student can carry out real lab sessions. The web material is presented in an intuitive and highly visual 3D form that is accessible to a diverse group of students. This type of laboratory provides opportunities for professional and practical education for a wide range of users. The expensive equipment and apparatus that make up the experimental stage in a standard laboratory are used to create virtual educational research laboratories. Students learn how to prepare the apparatus and facilities for an experiment. The online-experiments metadata schema is the format for describing online experiments, much like the schema behind a library catalogue used to describe the books in a library. As an online experiment is a special kind of learning object, its schema is specified as an extension to an established metadata schema for learning objects. The content of the courses, meta-information, as well as readings and user data, are saved on the server in a database as XML objects.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud Computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud Computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
A telemedicine system for enabling teaching activities.
Masero, V; Sanchez, F M; Uson, J
2000-01-01
In order to improve the distance teaching of minimally invasive surgery techniques, an integrated system has been developed. It comprises a telecommunications system, a server, a workstation, some medical peripherals and several computer applications developed in the Minimally Invasive Surgery Centre. The latest peripherals, such as robotized teleoperating systems for telesurgery and virtual reality peripherals, have been added. The visualization of the zone to be treated, along with the teacher's explanations, enables the student to understand the procedures of the operation much better.
2010-09-01
A cost per seat (CPS) model developed by Naval Network Warfare Command (NNWC) was used to calculate the major cost components (labor, hardware, software, and transport), while a VMware tool was used to calculate power and cooling costs for both solutions. In addition, VMware provided a cost estimate for the upfront hardware and software licensing costs needed to support the virtualized solution.
EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models
NASA Astrophysics Data System (ADS)
Laxton, John; Sen, Marcus; Passmore, James
2013-04-01
EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML descriptions. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis of coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed to serve the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the available datasets to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area or data within a particular range of pixel values can be selected. Queries on multiple coverages can be constructed to calculate, for example, the thickness between two surfaces in a 3D model or the depth from the ground surface to the top of a particular geologic unit. In the first version of the service, a simple interface showing some example queries has been implemented to show the potential of the technologies. The project aims to develop the services in light of user feedback, both in terms of the data available and of the functionality and interface; this feedback also guides the software and standards development aspects of the project, leading to enhanced versions of the software being implemented in upgraded versions of the services during the lifetime of the project.
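A thickness query of the kind described above can be expressed in WCPS and submitted over HTTP; in the sketch below, the endpoint path and coverage names are hypothetical stand-ins for the BGS service's actual identifiers.

```python
# Hedged sketch of POSTing a WCPS query to a rasdaman/petascope-style
# endpoint; the URL and coverage names are placeholders.
import requests

WCPS_ENDPOINT = "http://earthserver.bgs.ac.uk/rasdaman/ows"  # placeholder path

query = """
for top in (glasgow_top_surface),
    base in (glasgow_base_surface)
return encode(top - base, "csv")
"""

resp = requests.post(WCPS_ENDPOINT, data={"query": query}, timeout=60)
resp.raise_for_status()
print(resp.text[:200])   # thickness values between the two model surfaces
```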
Internet-Based Solutions for a Secure and Efficient Seismic Network
NASA Astrophysics Data System (ADS)
Bhadha, R.; Black, M.; Bruton, C.; Hauksson, E.; Stubailo, I.; Watkins, M.; Alvarez, M.; Thomas, V.
2017-12-01
The Southern California Seismic Network (SCSN), operated by Caltech and USGS, leverages modern Internet-based computing technologies to provide timely earthquake early warning for damage reduction, event notification, ShakeMap, and other data products. Here we present recent and ongoing innovations in telemetry, security, cloud computing, virtualization, and data analysis that have allowed us to develop a network that runs securely and efficiently. Earthquake early warning systems must process seismic data within seconds of being recorded, and SCSN maintains a robust and resilient network of more than 350 digital strong motion and broadband seismic stations to achieve this goal. We have continued to improve the path diversity and fault tolerance within our network, and have also developed new tools for latency monitoring and archiving. Cyberattacks are in the news almost daily, and with most of our seismic data streams running over the Internet, it is only a matter of time before SCSN is targeted. To ensure system integrity and availability across our network, we have implemented strong security, including encryption and Virtual Private Networks (VPNs). SCSN operates its own data center at Caltech, but we have also installed real-time servers on Amazon Web Services (AWS) to provide an additional level of redundancy, and eventually to allow full off-site operations continuity for our network. Our AWS systems receive data from Caltech-based import servers and directly from field locations, and are able to process the seismic data, calculate earthquake locations and magnitudes, and distribute earthquake alerts directly from the cloud. We have also begun a virtualization project at our Caltech data center, allowing us to serve data from Virtual Machines (VMs), making efficient use of high-performance hardware and increasing the flexibility and scalability of our data processing systems. Finally, we have developed new monitoring of station average noise levels at most stations. Noise monitoring is effective at identifying anthropogenic noise sources and malfunctioning acquisition equipment. We have built a dynamic display of results with sorting and mapping capabilities that allows us to quickly identify problematic sites and areas with elevated noise.
Web-Based Virtual Microscopy for Parasitology: A Novel Tool for Education and Quality Assurance
Linder, Ewert; Lundin, Mikael; Thors, Cecilia; Lebbad, Marianne; Winiecka-Krusnell, Jadwiga; Helin, Heikki; Leiva, Byron; Isola, Jorma; Lundin, Johan
2008-01-01
Background: The basis for correctly assessing the burden of parasitic infections and the effects of interventions relies on a somewhat shaky foundation as long as we do not know how reliable the reported laboratory findings are. Thus virtual microscopy, successfully introduced as a histopathology tool, has been adapted for medical parasitology. Methodology/Principal Findings: Specimens containing parasites in tissues, stools, and blood have been digitized and made accessible as a "webmicroscope for parasitology" (WMP) on the Internet (http://www.webmicroscope.net/parasitology). These digitized specimens can be viewed ("navigated" in both the x-axis and the y-axis) at the desired magnification by an unrestricted number of individuals simultaneously. For virtual microscopy of specimens containing stool parasites, it was necessary to develop the technique further in order to enable navigation in the z plane (i.e., "focusing"). Specimens were therefore scanned and photographed in two or more focal planes. The resulting digitized specimens consist of stacks of laterally "stitched" individual images covering the entire area of the sample photographed at high magnification. The digitized image information (∼10 GB of uncompressed data per specimen) is accessible at data transfer speeds from 2 to 10 Mb/s via a network of five image servers located in different parts of Europe. Image streaming and rapid data transfer to an ordinary personal computer make web-based virtual microscopy similar to conventional microscopy. Conclusion/Significance: The potential of this novel technique in the field of medical parasitology to share identical parasitological specimens means that we can provide a "gold standard", which can overcome several problems encountered in the quality control of diagnostic parasitology. Thus, the WMP may have an impact on the reliability of data which constitute the basis for our understanding of the vast problem of neglected tropical diseases. The WMP can be used also in the absence of fast Internet communication: an ordinary PC, or even a laptop, may function as a local image server, e.g., in health centers in tropical endemic areas. PMID:18941514
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make cloud resources simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
VRML and Collaborative Environments: New Tools for Networked Visualization
NASA Astrophysics Data System (ADS)
Crutcher, R. M.; Plante, R. L.; Rajlich, P.
We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.
Reducing Time to Science: Unidata and JupyterHub Technology Using the Jetstream Cloud
NASA Astrophysics Data System (ADS)
Chastang, J.; Signell, R. P.; Fischer, J. L.
2017-12-01
Cloud computing can accelerate scientific workflows, discovery, and collaborations by reducing research and data friction. We describe the deployment of Unidata and JupyterHub technologies on the NSF-funded XSEDE Jetstream cloud. With the aid of virtual machines and Docker technology, we deploy a Unidata JupyterHub server co-located with a Local Data Manager (LDM), THREDDS data server (TDS), and RAMADDA geoscience content management system. We provide Jupyter Notebooks and the pre-built Python environments needed to run them. The notebooks can be used for instruction and as templates for scientific experimentation and discovery. We also supply a large quantity of NCEP forecast model results to allow data-proximate analysis and visualization. In addition, users can transfer data using Globus command line tools, and perform their own data-proximate analysis and visualization with Notebook technology. These data can be shared with others via a dedicated TDS server for scientific distribution and collaboration. There are many benefits of this approach. Not only is the cloud computing environment fast, reliable and scalable, but scientists can analyze, visualize, and share data using only their web browser. No local specialized desktop software or a fast internet connection is required. This environment will enable scientists to spend less time managing their software and more time doing science.
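The data-proximate access described here can be illustrated with Unidata's siphon library; the catalog URL below is a hypothetical stand-in for a real THREDDS Data Server endpoint.

```python
# Minimal sketch of notebook-side access to a TDS catalog with siphon.
from siphon.catalog import TDSCatalog

# Hypothetical catalog URL; a real deployment would point at its own TDS.
catalog = TDSCatalog("https://tds.example.edu/thredds/catalog/grib/NCEP/GFS/catalog.xml")
dataset = list(catalog.datasets.values())[0]            # first dataset in the catalog
print(dataset.name, dataset.access_urls.get("OPENDAP")) # remote-access URL, if offered
```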
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP use only a limited number of operating system flavours. Their software might only be validated on a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another hypervisor is readily envisageable. To better handle different virtual machines on the physical host, the management solution VmImageManager was developed. We present first experience from running the two prototype implementations. In a last part, we show the potential future use of this lightweight concept when integrated into high-level (i.e., Grid) workflows.
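The "start the VM just before the job" idea can be sketched as a batch-system prolog. This is a conceptual illustration, not the paper's VmImageManager; the domain name "worker-vm" is hypothetical, and only standard virsh commands are used.

```python
# Conceptual prolog sketch: boot a per-job KVM guest and wait for it.
import subprocess
import time

def start_worker_vm(domain: str = "worker-vm", timeout: int = 120) -> None:
    """Start a libvirt domain via virsh and block until it reports 'running'."""
    subprocess.run(["virsh", "start", domain], check=True)
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = subprocess.run(["virsh", "domstate", domain],
                               capture_output=True, text=True).stdout.strip()
        if state == "running":
            return
        time.sleep(2)
    raise TimeoutError(f"{domain} did not reach 'running' within {timeout}s")
```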
Experience with PACS in an ATM/Ethernet switched network environment.
Pelikan, E; Ganser, A; Kotter, E; Schrader, U; Timmermann, U
1998-03-01
Legacy local area network (LAN) technologies based on shared-media concepts are not adequate for the growth of a large-scale picture archiving and communication system (PACS) in a client-server architecture. First, an asymmetric network load, due to the requests of a large number of PACS clients for only a few main servers, should be compensated by communication links to the servers with a higher bandwidth compared to the clients. Secondly, as the number of PACS nodes increases, network throughput should not measurably slow production. These requirements can easily be fulfilled using switching technologies. Here, asynchronous transfer mode (ATM) is clearly one of the hottest topics in networking, because the ATM architecture provides integrated support for a variety of communication services and supports virtual networking. On the other hand, most imaging modalities are not yet ready for integration into a native ATM network. For the many nodes already joined to an Ethernet, a cost-effective and pragmatic way to benefit from the switching concept is a combined ATM/Ethernet switching environment. This incorporates an incremental migration strategy with the immediate benefits of high-speed, high-capacity ATM (for servers and highly sophisticated display workstations), while preserving elements of the existing network technologies. In addition, Ethernet switching instead of shared-media Ethernet improves performance considerably. The LAN emulation (LANE) specification by the ATM Forum defines mechanisms that allow ATM networks to coexist with legacy systems using any data networking protocol. This paper points out the suitability of this network architecture in accordance with an appropriate system design.
deepTools2: a next generation web server for deep-sequencing data analysis.
Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas
2016-07-08
We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction
Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng
2015-01-01
The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We conduct a larger scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieves over 25-fold speedup compared to sequential execution. PMID:26958172
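A single-machine sketch of the scatter/gather pattern that the system distributes across EC2 instances follows; the toy correlation scorer is illustrative and not the paper's feature-selection algorithm.

```python
# Sketch of parallel feature scoring: scatter columns to workers, gather scores.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def score_feature(args):
    """Toy scorer: absolute correlation between one feature column and labels."""
    column, labels = args
    return float(abs(np.corrcoef(column, labels)[0, 1]))

def select_top_features(X: np.ndarray, y: np.ndarray, k: int = 10) -> list:
    """Score all columns in parallel (scatter), then rank and keep k (gather)."""
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_feature,
                               ((X[:, j], y) for j in range(X.shape[1]))))
    return sorted(range(X.shape[1]), key=lambda j: scores[j], reverse=True)[:k]
```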
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
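The view-dependent LOD strategy can be sketched as a simple octree walk: refine a node only while its size-to-distance ratio exceeds an error threshold. This is a schematic reading of the approach, not the paper's implementation; the threshold value is illustrative.

```python
# Schematic view-dependent LOD selection over an octree.
from dataclasses import dataclass, field
import math

@dataclass
class OctreeNode:
    center: tuple                     # (x, y, z) in world units
    size: float                       # edge length of the cube
    children: list = field(default_factory=list)

def select_lod(node, camera, threshold=0.05, out=None):
    """Collect the coarsest nodes whose projected error is acceptable."""
    out = [] if out is None else out
    dist = math.dist(node.center, camera) or 1e-9
    if node.size / dist <= threshold or not node.children:
        out.append(node)              # render this resolution level
    else:
        for child in node.children:   # otherwise descend to finer levels
            select_lod(child, camera, threshold, out)
    return out
```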
Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki
2007-07-01
We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a PDB format file as the output. In addition, the predicted interacting interface is displayed to facilitate the examination of the virtual complex structure on our own applet viewer with the web browser (URL: http://eF-site.hgc.jp/eF-seek).
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1995-12-31
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for the purpose of exchanging ideas, data, and technology. The personal computer-based system fosters communication and discussion by linking oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers are provided access to the GO-TECH World Wide Web home page via modem links, as well as Internet. Future GO-TECH applications will include the establishment of "virtual corporations" consisting of consortiums of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1996-10-01
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for exchanging ideas, data, and technology. The PC-based system fosters communication and discussion by linking the oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers can access the GO-TECH World Wide Web (WWW) home page through modem links, as well as through the Internet. Future GO-TECH applications will include the establishment of virtual corporations consisting of consortia of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
BAGEL4: a user-friendly web server to thoroughly mine RiPPs and bacteriocins.
van Heel, Auke J; de Jong, Anne; Song, Chunxu; Viel, Jakob H; Kok, Jan; Kuipers, Oscar P
2018-05-21
Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate the research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent from ORF-calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast blast against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall BAGEL4 provides the user with more information through a user-friendly web-interface which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.
Troshin, Peter V; Procter, James B; Sherstnev, Alexander; Barton, Daniel L; Madeira, Fábio; Barton, Geoffrey J
2018-06-01
JABAWS 2.2 is a computational framework that simplifies the deployment of web services for Bioinformatics. In addition to the five multiple sequence alignment (MSA) algorithms in JABAWS 1.0, JABAWS 2.2 includes three additional MSA programs (Clustal Omega, MSAprobs, GLprobs), four protein disorder prediction methods (DisEMBL, IUPred, Ronn, GlobPlot), 18 measures of protein conservation as implemented in AACon, and RNA secondary structure prediction by the RNAalifold program. JABAWS 2.2 can be deployed on a variety of in-house or hosted systems. JABAWS 2.2 web services may be accessed from the Jalview multiple sequence analysis workbench (Version 2.8 and later), as well as directly via the JABAWS command line interface (CLI) client. JABAWS 2.2 can be deployed on a local virtual server as a Virtual Appliance (VA) or simply as a Web Application Archive (WAR) for private use. Improvements in JABAWS 2.2 also include simplified installation and a range of utility tools for usage statistics collection, and web services querying and monitoring. The JABAWS CLI client has been updated to support all the new services and allow integration of JABAWS 2.2 services into conventional scripts. A public JABAWS 2 server has been in production since December 2011 and served over 800 000 analyses for users worldwide. JABAWS 2.2 is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws. g.j.barton@dundee.ac.uk.
Applications and challenges of digital pathology and whole slide imaging.
Higgins, C
2015-07-01
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100×, or a combination of magnifications, to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
Astronomical database and VO-tools of Nikolaev Astronomical Observatory
NASA Astrophysics Data System (ADS)
Mazhaev, A. E.; Protsyuk, Yu. I.
2010-05-01
Results of work in 2006-2009 on the creation of astronomical databases aiming at the development of the Nikolaev Virtual Observatory (NVO) are presented. Results of observations and their reduction, obtained during the whole history of the Nikolaev Astronomical Observatory (NAO), are included in the databases. The databases may be considered as a basis for the construction of a data centre. Images of different regions of the celestial sphere have been stored in NAO since 1929. About 8000 photo plates were obtained during observations in the 20th century. Observations with CCD have been carried out since 1996. Annually, the telescopes of NAO, using CCD cameras, create a data volume of several tens of gigabytes (GB) in the form of CCD images and up to 100 GB of video records. At the end of 2008, the volume of accumulated data in the form of CCD images was about 300 GB. Problems of data volume growth are common to astronomy, nuclear physics and bioinformatics. Therefore, the astronomical community needs to use archives, databases and distributed grid computing to cope with this problem in astronomy. The International Virtual Observatory Alliance (IVOA) was formed in June 2002 with a mission to "enable the international utilization of astronomical archives..." The NVO was created at the NAO website in 2008 and consists of three main parts. The first part contains 27 astrometric stellar catalogues with short descriptions. The files of the catalogues were compiled in the standard VOTable format using eXtensible Markup Language (XML), and they are available for downloading. This is an example of a so-called science-ready product. The VOTable format was developed by the IVOA for the exchange of tabular data. A user may download these catalogues and open them using any standalone application that supports the IVOA standards. There are several directions of development for such applications, for example: search of catalogues and images, search and visualisation of spectra, spectral energy distribution (SED) building, search of cross-correlations between objects in different catalogues, statistical processing of large data volumes, etc. The second part includes a database of observations accumulated in NAO, with access via a browser. The database has a common interface for searching textual and graphical information concerning photographic and CCD observations. The database contains textual information about 7437 plates, as well as 2700 preview images in JPEG format with a resolution of 300 DPI (dots per inch), and textual information about 16660 CCD frames, as well as 1100 preview images in JPEG format. Absent preview images will be added to the database as soon as they are ready after plate scanning and CCD frame processing. The user has to define the equatorial coordinates of the search centre, a search radius and a period of observations. He or she may also specify additional filters, such as any combination of objects given separately for plates and CCD frames, output parameters for plates, and telescope names for CCD observations. Results of the search are generated in the form of two tables, for photographic and CCD observations. To obtain access to the source images in FITS format with support of the World Coordinate System (WCS), the user has to fill in and submit the electronic form given after the tables.
The third part includes a database of observations with access via a standalone application such as Aladin, which has been developed by the Strasbourg Astronomical Data Centre. To obtain access to the database, the user has to perform a series of simple actions, which are described on a corresponding site page. He or she may then access the database via the server selector of Aladin, which has a menu with a wide range of image and catalogue servers located worldwide, including two menu items for the photographic and CCD observations of the NVO image server. The user has to define the equatorial coordinates of the search centre and a search radius. The search results are output to the main window of Aladin in textual and graphical forms using XML and the Simple Object Access Protocol (SOAP). In this way, the NVO image server is integrated with other astronomical servers, using a special configuration file. The user may conveniently request information from many servers using the same server selector of Aladin, although the servers are located in different countries. Aladin has a wide range of special tools for data analysis and handling, including connection with other standalone applications. In conclusion, we should note that a research team of a data centre, which provides the infrastructure for data output to the Internet, is responsible for the creation of the corresponding archives. Therefore, each observatory or data centre has to provide access to its archives in accordance with the IVOA standards and the resolution adopted by the IAU XXV General Assembly (#B.1), titled "Public Access to Astronomical Archives". The research team of NAO copes successfully with this task and continues to develop the NVO. Using our databases and VO-tools, we also take part in the development of the Ukrainian Virtual Observatory (UkrVO). All three main parts of the NVO are used as prototypes for the UkrVO. Informational resources provided by other astronomical institutions in Ukraine will be included in corresponding databases and VO interfaces.
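For readers who retrieve the science-ready catalogues, a VOTable file can be opened with Astropy; the file name below is a hypothetical example.

```python
# Minimal sketch: load a downloaded VOTable catalogue into an Astropy Table.
from astropy.io.votable import parse_single_table

table = parse_single_table("nao_catalogue.vot").to_table()  # astropy.table.Table
print(table.colnames)   # column names of the catalogue
print(table[:5])        # first five rows
```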
NASA Astrophysics Data System (ADS)
Alpert, J. C.; Wang, J.
2009-12-01
To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services and represent a critical national resource for operational and research communities affected by climate, weather and water. NOMADS is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, while the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions that enable collections of data in disparate places to be simultaneously accessed, with results processed on servers and clients to produce a needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit and receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high-availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. We demonstrate how users can use NOMADS services to select the values of Ensemble model runs over the ith Ensemble component, (forecast) time, vertical levels, global horizontal location, and variable: virtually a 6-dimensional data cube accessed across the Internet. The example application, called the "Ensemble Probability Tool", makes probability predictions of user-defined weather events that can be used in remote areas under weather-vulnerable circumstances. An application to access data for a verification pilot study, a collaboration with the World Bank, is shown in detail in a companion paper (U06) and is an example of the high value, usability and relevance of NCEP products and service capability over a wide spectrum of user and partner needs.
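The subsetting model described here is what OPeNDAP clients do routinely: only the requested hyperslab crosses the network. In the sketch below, the URL and variable name are hypothetical stand-ins for a real GDS endpoint, and the netCDF4 library must be built with DAP support.

```python
# Minimal sketch of server-side subsetting over OPeNDAP with netCDF4.
from netCDF4 import Dataset

# Hypothetical endpoint; a real request would target an actual NOMADS URL.
ds = Dataset("https://nomads.example.gov/dods/gfs/gfs20240101/gfs_00z")
t2m = ds.variables["tmp2m"][0, 100:140, 200:260]  # one time step, small region
print(t2m.shape)                                  # only this slab was transferred
ds.close()
```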
FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1 and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
COGcollator: a web server for analysis of distant relationships between homologous protein families.
Dibrova, Daria V; Konovalov, Kirill A; Perekhvatov, Vadim V; Skulachev, Konstantin V; Mulkidjanian, Armen Y
2017-11-29
The Clusters of Orthologous Groups (COGs) of proteins systematize evolutionary related proteins into specific groups with similar functions. However, the available databases do not provide means to assess the extent of similarity between the COGs. We intended to provide a method for identification and visualization of evolutionary relationships between the COGs, as well as a respective web server. Here we introduce the COGcollator, a web tool for identification of evolutionarily related COGs and their further analysis. We demonstrate the utility of this tool by identifying the COGs that contain distant homologs of (i) the catalytic subunit of bacterial rotary membrane ATP synthases and (ii) the DNA/RNA helicases of the superfamily 1. This article was reviewed by Drs. Igor N. Berezovsky, Igor Zhulin and Yuri Wolf.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
A Multi-Discipline, Multi-Genre Digital Library for Research and Education
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Maly, Kurt; Shen, Stewart N. T.
2004-01-01
We describe NCSTRL+, a unified, canonical digital library for educational and scientific and technical information (STI). NCSTRL+ is based on the Networked Computer Science Technical Report Library (NCSTRL), a World Wide Web (WWW) accessible digital library (DL) that provides access to over 100 university departments and laboratories. NCSTRL+ implements two new technologies: cluster functionality and publishing "buckets". We have extended the Dienst protocol, the protocol underlying NCSTRL, to provide the ability to "cluster" independent collections into a logically centralized digital library based upon subject category classification, type of organization, and genres of material. The concept of "buckets" provides a mechanism for publishing and managing logically linked entities with multiple data formats. The NCSTRL+ prototype DL contains the holdings of NCSTRL and the NASA Technical Report Server (NTRS). The prototype demonstrates the feasibility of publishing into a multi-cluster DL, searching across clusters, and storing and presenting buckets of information.
Kozakov, Dima; Grove, Laurie E.; Hall, David R.; Bohnuud, Tanggis; Mottarella, Scott; Luo, Lingqi; Xia, Bing; Beglov, Dmitri; Vajda, Sandor
2016-01-01
FTMap is a computational mapping server that identifies binding hot spots of macromolecules, i.e., regions of the surface with major contributions to the ligand binding free energy. To use FTMap, users submit a protein, DNA, or RNA structure in PDB format. FTMap samples billions of positions of small organic molecules used as probes and scores the probe poses using a detailed energy expression. Regions that bind clusters of multiple probe types identify the binding hot spots, in good agreement with experimental data. FTMap serves as basis for other servers, namely FTSite to predict ligand binding sites, FTFlex to account for side chain flexibility, FTMap/param to parameterize additional probes, and FTDyn to map ensembles of protein structures. Applications include determining druggability of proteins, identifying ligand moieties that are most important for binding, finding the most bound-like conformation in ensembles of unliganded protein structures, and providing input for fragment based drug design. FTMap is more accurate than classical mapping methods such as GRID and MCSS, and is much faster than the more recent approaches to protein mapping based on mixed molecular dynamics. Using 16 probe molecules, the FTMap server finds the hot spots of an average size protein in less than an hour. Since FTFlex performs mapping for all low energy conformers of side chains in the binding site, its completion time is proportionately longer. PMID:25855957
New additions to the ClusPro server motivated by CAPRI.
Vajda, Sandor; Yueh, Christine; Beglov, Dmitri; Bohnuud, Tanggis; Mottarella, Scott E; Xia, Bing; Hall, David R; Kozakov, Dima
2017-03-01
The heavily used protein-protein docking server ClusPro performs three computational steps as follows: (1) rigid body docking, (2) RMSD based clustering of the 1000 lowest energy structures, and (3) the removal of steric clashes by energy minimization. In response to challenges encountered in recent CAPRI targets, we added three new options to ClusPro. These are (1) accounting for small angle X-ray scattering data in docking; (2) considering pairwise interaction data as restraints; and (3) enabling discrimination between biological and crystallographic dimers. In addition, we have developed an extremely fast docking algorithm based on 5D rotational manifold FFT, and an algorithm for docking flexible peptides that include known sequence motifs. We feel that these developments will further improve the utility of ClusPro. However, CAPRI emphasized several shortcomings of the current server, including the problem of selecting the right energy parameters among the five options provided, and the problem of selecting the best models among the 10 generated for each parameter set. In addition, results convinced us that further development is needed for docking homology models. Finally, we discuss the difficulties we have encountered when attempting to develop a refinement algorithm that would be computationally efficient enough for inclusion in a heavily used server. Proteins 2017; 85:435-444. © 2016 Wiley Periodicals, Inc.
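Step (2) can be sketched as greedy neighbourhood clustering of pre-aligned poses by pairwise RMSD; the 9 Å radius below is illustrative, not ClusPro's exact parameterization.

```python
# Schematic greedy clustering of docked poses by pairwise RMSD.
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """RMSD between two (N, 3) coordinate arrays, assumed pre-aligned."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def greedy_cluster(poses, radius=9.0):
    """Repeatedly pick the pose with the most neighbours as a cluster centre."""
    remaining = list(range(len(poses)))
    clusters = []
    while remaining:
        neigh = {i: [j for j in remaining if rmsd(poses[i], poses[j]) <= radius]
                 for i in remaining}
        centre = max(neigh, key=lambda i: len(neigh[i]))
        clusters.append((centre, neigh[centre]))
        remaining = [i for i in remaining if i not in neigh[centre]]
    return clusters  # list of (centre index, member indices)
```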
CVTree3 Web Server for Whole-genome-based and Alignment-free Prokaryotic Phylogeny and Taxonomy.
Zuo, Guanghong; Hao, Bailin
2015-10-01
A faithful phylogeny and an objective taxonomy for prokaryotes should agree with each other and ultimately follow the genome data. With the number of sequenced genomes reaching tens of thousands, both tree inference and detailed comparison with taxonomy are great challenges. We now provide one solution in the latest Release 3.0 of the alignment-free and whole-genome-based web server CVTree3. The server resides in a cluster of 64 cores and is equipped with an interactive, collapsible, and expandable tree display. It is capable of comparing the tree branching order with prokaryotic classification at all taxonomic ranks from domains down to species and strains. CVTree3 allows for inquiry by taxon names and trial on lineage modifications. In addition, it reports a summary of monophyletic and non-monophyletic taxa at all ranks as well as produces print-quality subtree figures. After giving an overview of retrospective verification of the CVTree approach, the power of the new server is described for the mega-classification of prokaryotes and determination of taxonomic placement of some newly-sequenced genomes. A few discrepancies between CVTree and 16S rRNA analyses are also summarized with regard to possible taxonomic revisions. CVTree3 is freely accessible to all users at http://tlife.fudan.edu.cn/cvtree3/ without login requirements. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd.. All rights reserved.
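A greatly simplified sketch of the alignment-free idea follows: k-mer composition vectors compared by cosine distance. The actual CVTree method additionally subtracts a Markov-model background from the raw counts.

```python
# Simplified composition-vector comparison (background subtraction omitted).
from collections import Counter
import math

def kmer_vector(seq: str, k: int = 5) -> Counter:
    """Count all overlapping k-mers in a genome sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(u: Counter, v: Counter) -> float:
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return 1.0 - dot / norm if norm else 1.0

d = cosine_distance(kmer_vector("ACGTACGTGACC" * 50),
                    kmer_vector("ACGTTCGTGACC" * 50))
print(round(d, 4))  # small distance: the two toy "genomes" are near-identical
```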
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
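The consensus idea reduces to a few lines: rank the models under each QA method, then order them by mean rank. This is a minimal sketch of the principle, not the MULTICOM code.

```python
# Minimal consensus ranking across several QA methods.
import numpy as np

def consensus_rank(scores: np.ndarray) -> list:
    """scores: (n_methods, n_models), higher = better for every method."""
    # Per-method ranks (0 = best), then average the ranks per model.
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    return list(np.argsort(ranks.mean(axis=0)))  # best model first

qa = np.array([[0.71, 0.65, 0.80],   # method 1 scores for 3 models
               [0.60, 0.66, 0.75],   # method 2
               [0.68, 0.59, 0.77]])  # method 3
print(consensus_rank(qa))            # [2, 0, 1]
```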
WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web
NASA Astrophysics Data System (ADS)
Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio
1999-03-01
We present here the key design concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR". The system is based on the World Wide Web, and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI-Bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that it is built with off-the-shelf products: it does not require a specifically designed program; any Netscape browser, one of the most popular browsers on the market, is sufficient. Another possible use of our system is to implement "guided automatic tours" through several pages, or internal news bulletins on Intranets: the company web server can broadcast relevant information to all employees' browsers.
Collaborative learning using Internet2 and remote collections of stereo dissection images.
Dev, Parvati; Srivastava, Sakti; Senger, Steven
2006-04-01
We have investigated collaborative learning of anatomy over Internet2, using an application called remote stereo viewer (RSV). This application offers a unique method of teaching anatomy, using high-resolution stereoscopic images, in a client-server architecture. Rotated sequences of stereo image pairs were produced by volumetric rendering of the Visible female and by dissecting and photographing a cadaveric hand. A client-server application (RSV) was created to provide access to these image sets, using a highly interactive interface. The RSV system was used to provide a "virtual anatomy" session for students in the Stanford Medical School Gross Anatomy course. The RSV application allows both independent and collaborative modes of viewing. The most appealing aspects of the RSV application were the capacity for stereoscopic viewing and the potential to access the content remotely within a flexible temporal framework. The RSV technology, used over Internet2, thus serves as an effective complement to traditional methods of teaching gross anatomy. (c) 2006 Wiley-Liss, Inc.
Data Integration Using SOAP in the VSO
NASA Astrophysics Data System (ADS)
Tian, K. Q.; Bogart, R. S.; Davey, A.; Dimitoglou, G.; Gurman, J. B.; Hill, F.; Martens, P. C.; Wampler, S.
2003-05-01
The Virtual Solar Observatory (VSO) project has implemented a time interval search for all four participating data archives. The back-end query services are implemented as web services, and are accessible via SOAP. SOAP (Simple Object Access Protocol) defines an RPC (Remote Procedure Call) mechanism that employs HTTP as its transport and encodes the client-server interactions (request and response messages) in XML (eXtensible Markup Language) documents. In addition to its core function of identifying relevant datasets in the local archive, the SOAP server at each data provider acts as a "wrapper" that maps descriptions in an abstract data model to those in the provider-specific data model, and vice versa. It is in this way that VSO integrates heterogeneous data services and allows access to them using a common interface. Our experience with SOAP has been fruitful. It has proven to be a better alternative to traditional web access methods, namely POST and GET, because of its flexibility and interoperability.
Nakagawa, Yumiko; Sawada, Sachiko; Tomiyama, Takashi; Ueda, Yuki; Fujii, Kou; Takeshita, Kiyotaka; Kobayashi, Mitsuru; Isono, Osamu
2014-12-01
Electronic medical records (EMR) for home visits were introduced in October 2013 at our institution in order to ensure smooth cooperation between the hospital and clinic by sharing the details of a patient's medical record. A system was developed for remote desktop connections to the EMR terminal server (a virtual server) with the use of an SSL-VPN. Mobile terminals and mobile printers were used. Four months after the start of this system, a survey was conducted of 41 home care professionals and other staff (physicians, nurses, and office staff). Home care staff indicated that they had problems with the system, including bad connections and operating conditions, and difficulties responding to problems when they arose. Other staff indicated that they were able to acquire patient information faster than with paper-based records. Future issues include improvements to the user-friendliness of the terminals and improved responses to problems when they occur.
NASA Astrophysics Data System (ADS)
Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.
2010-12-01
In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher because of the tremendous advancement of supercomputers. A more advanced technology is Grid computing, which integrates distributed computational resources to provide scalable computing resources. In simulation research, it is effective for researchers to design their own physical models, perform calculations with a supercomputer, and analyze and visualize the results by familiar methods. A supercomputer, however, is far from an analysis and visualization environment. In general, a researcher analyzes and visualizes on a workstation (WS) managed at hand, because the installation and operation of software on a WS are easy. Therefore, it is necessary to copy the data from the supercomputer to the WS manually. In practice, the time needed for data transfer over a long-delay network hampers high-accuracy simulations. In terms of usefulness, it is important to seamlessly integrate a supercomputer with an analysis and visualization environment using methods familiar to the researcher. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. A distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data outputs from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can concentrate on research with familiar methods without regard to the distance between the supercomputer and the analysis and visualization environment. Currently, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus, and they constitute 1 PB (physical size) of virtual storage under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University. A system in which data output from the supercomputers is automatically transferred to the virtual storage has been built. The measured transfer rate is about 50 GB/h. This performance is estimated to be reasonable for a certain simulation and analysis for the reconstruction of coronal magnetic fields. This research serves as an experiment with the system, and verification of its practicality is advanced at the same time. Herein we introduce an overview of the space weather cloud system we have developed so far. We also demonstrate several scientific results using the space weather cloud system, and introduce several web applications of the cloud, offered as a service named "e-SpaceWeather" (e-SW). The e-SW provides a variety of online space weather services from many aspects.
Neighbourhood typology based on virtual audit of environmental obesogenic characteristics.
Feuillet, T; Charreire, H; Roda, C; Ben Rebah, M; Mackenbach, J D; Compernolle, S; Glonti, K; Bárdos, H; Rutter, H; De Bourdeaudhuij, I; McKee, M; Brug, J; Lakerveld, J; Oppert, J-M
2016-01-01
Virtual audit (using tools such as Google Street View) can help assess multiple characteristics of the physical environment. This exposure assessment can then be associated with health outcomes such as obesity. Strengths of virtual audit include collection of large amount of data, from various geographical contexts, following standard protocols. Using data from a virtual audit of obesity-related features carried out in five urban European regions, the current study aimed to (i) describe this international virtual audit dataset and (ii) identify neighbourhood patterns that can synthesize the complexity of such data and compare patterns across regions. Data were obtained from 4,486 street segments across urban regions in Belgium, France, Hungary, the Netherlands and the UK. We used multiple factor analysis and hierarchical clustering on principal components to build a typology of neighbourhoods and to identify similar/dissimilar neighbourhoods, regardless of region. Four neighbourhood clusters emerged, which differed in terms of food environment, recreational facilities and active mobility features, i.e. the three indicators derived from factor analysis. Clusters were unequally distributed across urban regions. Neighbourhoods mostly characterized by a high level of outdoor recreational facilities were predominantly located in Greater London, whereas neighbourhoods characterized by high urban density and large amounts of food outlets were mostly located in Paris. Neighbourhoods in the Randstad conurbation, Ghent and Budapest appeared to be very similar, characterized by relatively lower residential densities, greener areas and a very low percentage of streets offering food and recreational facility items. These results provide multidimensional constructs of obesogenic characteristics that may help target at-risk neighbourhoods more effectively than isolated features. © 2016 World Obesity.
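An analogous pipeline can be expressed in Python with scikit-learn. The authors used multiple factor analysis and hierarchical clustering on principal components; PCA plus Ward-linkage agglomerative clustering stands in here, with random data in place of the audited features.

```python
# Analogous (not identical) pipeline: dimensionality reduction + clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(4486, 12))          # stand-in for audited segment features

components = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(components)
print(np.bincount(labels))               # segments per neighbourhood cluster
```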
A web system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Garov, A. S.; Karachevtseva, I. P.
2018-09-01
We developed a web system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15-arc-minute gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. We derived global digital models of sixteen morphometric variables including horizontal, vertical, minimal, and maximal curvatures, as well as catchment area and topographic index. The morphometric models were integrated into the web system, developed as a distributed application consisting of a client front-end and a server back-end. The following main functions are implemented in the system: (1) selection of a morphometric variable; (2) two-dimensional visualization of a calculated global morphometric model; (3) 3D visualization of a calculated global morphometric model on the sphere surface; (4) change of the globe scale; and (5) globe rotation by an arbitrary angle. Free, real-time web access to the system is provided. The web system of virtual morphometric globes can be used for geological and geomorphological studies of Mars and the Moon at the global, continental, and regional scales.
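As an illustration of the derivation step, the sketch below computes slope and a Laplacian-style curvature proxy from a DEM grid by finite differences; the system's actual formulas operate on a spheroid rather than the planar grid assumed here, and the cell size is illustrative.

```python
# Illustrative finite-difference morphometry on a planar DEM grid.
import numpy as np

def slope_and_curvature(dem: np.ndarray, cell: float):
    """Return slope (radians) and a Laplacian-style curvature proxy."""
    dz_dy, dz_dx = np.gradient(dem, cell)        # first-order derivatives
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    d2z_dy2 = np.gradient(dz_dy, cell, axis=0)   # second-order derivatives
    d2z_dx2 = np.gradient(dz_dx, cell, axis=1)
    return slope, d2z_dx2 + d2z_dy2

# Synthetic 180x360 "global" grid; cell size in metres is an assumption.
dem = np.fromfunction(lambda i, j: 50.0 * np.sin(i / 20.0) + 0.1 * j, (180, 360))
slope, curv = slope_and_curvature(dem, cell=1000.0)
print(float(slope.max()), float(curv.max()))
```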
Virtual microscopy in virtual tumor banking.
Isabelle, M; Teodorovic, I; Oosterhuis, J W; Riegman, P H J; Passioukov, A; Lejeune, S; Therasse, P; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; Van Damme, B; Van de Vijver, M; Van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; López-Guerrero, J A; Llombart-Bosch, A; Carbone, A; Gloghini, A; Van Veen, E B
2006-01-01
Many systems have already been designed and successfully used for sharing histology images over large distances, without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated virtual microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and compress and store the large images on disc, which subsequently can be consulted through the Internet. The images are stored on an image server, which can give simple, easy to transfer pictures to the user specifying a certain magnification on any position in the scan. This offers new opportunities in histology review, overcoming the necessity of the dynamic telepathology systems to have compatible software systems and microscopes and in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high quality monitor. A system of complete pathology review supporting biorepositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
TuBaFrost 6: virtual microscopy in virtual tumour banking.
Teodorovic, I; Isabelle, M; Carbone, A; Passioukov, A; Lejeune, S; Jaminé, D; Therasse, P; Gloghini, A; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; van Damme, B; van de Vijver, M; van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; Lopez-Guerrero, J A; Llombart Bosch, A; van Veen, E-B; Oosterhuis, J W; Riegman, P H J
2006-12-01
Many systems have already been designed and successfully used for sharing histology images over large distances, without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated Virtual Microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and to compress and store the large images on disc, from which they can subsequently be consulted through the Internet. The images are stored on an image server, which can serve simple, easily transferred pictures of any position in the scan at a user-specified magnification. This offers new opportunities in histology review, overcoming the need of dynamic telepathology systems for compatible software systems and microscopes and, in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high-quality monitor. A system of complete pathology review supporting bio-repositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
Research on elastic resource management for multi-queue under cloud computing environment
NASA Astrophysics Data System (ADS)
CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang
2017-10-01
As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method adapts poorly to volatile computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of the virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. Practical runs show that virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of the computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
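The dual-threshold decision at the heart of such a system can be sketched in a few lines; the helper functions and threshold values are illustrative assumptions, since the real system drives OpenStack from HTCondor queue states under quota constraints.

```python
# Minimal sketch of a dual-threshold elastic scaling decision.
SCALE_UP = 0.8    # busy-slot fraction above which the pool grows
SCALE_DOWN = 0.2  # busy-slot fraction below which the pool shrinks

def provision_vm(queue_name):          # stand-in for an OpenStack boot call
    print(f"boot worker VM for {queue_name}")

def retire_vm(queue_name):             # stand-in for drain-and-delete
    print(f"retire idle VM from {queue_name}")

def rebalance(queue_name, busy_slots, total_slots, idle_vms, quota):
    """Decide one scaling action for a single job queue."""
    busy = busy_slots / max(total_slots, 1)
    if busy > SCALE_UP and total_slots < quota:
        provision_vm(queue_name)
    elif busy < SCALE_DOWN and idle_vms > 0:
        retire_vm(queue_name)

rebalance("juno", busy_slots=95, total_slots=100, idle_vms=0, quota=200)
```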
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Maly, Kurt; Shen, Stewart N. T.
1997-01-01
In this paper we describe NCSTRL+, a unified, canonical digital library for scientific and technical information (STI). NCSTRL+ is based on the Networked Computer Science Technical Report Library (NCSTRL), a World Wide Web (WWW) accessible digital library (DL) that provides access to over 80 university departments and laboratories. NCSTRL+ implements two new technologies: cluster functionality and publishing "buckets." We have extended the Dienst protocol, the protocol underlying NCSTRL, to provide the ability to "cluster" independent collections into a logically centralized digital library based upon subject category classification, type of organization, and genres of material. The concept of "buckets" provides a mechanism for publishing and managing logically linked entities with multiple data formats. The NCSTRL+ prototype DL contains the holdings of NCSTRL and the NASA Technical Report Server (NTRS). The prototype demonstrates the feasibility of publishing into a multi-cluster DL, searching across clusters, and storing and presenting buckets of information. We show that the overhead for these additional capabilities is minimal to both the author and the user when compared to the equivalent process within NCSTRL.
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P.; Dimitriou, D.; Hankin, S.
2005-12-01
The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available Delayed-Mode (scientific quality controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. 
Also, by promoting community data-serving projects, USGODAE strengthens the community and helps advance data-serving standards.
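As a concrete illustration of the standards-based access described above, the sketch below opens a served dataset over OPeNDAP with xarray; the dataset URL and the variable and dimension names are hypothetical placeholders, not actual USGODAE paths.

```python
import xarray as xr

# Illustrative OPeNDAP endpoint; not a real USGODAE dataset path.
url = "https://usgodae.org/dods/argo/profiles"
ds = xr.open_dataset(url)                            # lazy: fetches metadata only
top100 = ds["temperature"].sel(depth=slice(0, 100))  # hypothetical names
print(float(top100.mean()))                          # data moves only on compute
```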
Rural telemedicine project in northern New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zink, S.; Hahn, H.; Rudnick, J.
A virtual electronic medical record system is being securely deployed over the Internet in northern New Mexico using TeleMed, a multimedia medical records management system that uses CORBA-based client-server technology and a distributed database architecture. The goal of the NNM Rural Telemedicine Project is to implement TeleMed in fifteen rural clinics and two hospitals within a 25,000-square-mile area of northern New Mexico. Evaluation of the project consists of three components: job task analysis, an audit of immunized children, and time-motion studies. Preliminary results of the evaluation components are presented.
First experience with the new .cern Top Level Domain
NASA Astrophysics Data System (ADS)
Alvarez, E.; Malo de Molina, M.; Salwerowicz, M.; Silva De Sousa, B.; Smith, T.; Wagner, A.
2017-10-01
In October 2015, CERN's core website was moved to a new address, http://home.cern, marking the launch of the brand new top-level domain .cern. In combination with a formal governance and registration policy, the IT infrastructure needed to be extended to accommodate the hosting of web sites in this new top-level domain. We present the technical implementation in the framework of the CERN Web Services, which provides virtual hosting and a reverse proxy solution, and also includes the provisioning of SSL server certificates for secure communications.
A multipurpose computing center with distributed resources
NASA Astrophysics Data System (ADS)
Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.
2017-10-01
The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated at the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on TORQUE with a custom scheduler. Clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to a part of these two clusters and take advantage of higher priorities on the rest (1,500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12,000 cores in total.
Privacy-preserving public auditing for data integrity in cloud
NASA Astrophysics Data System (ADS)
Shaik Saleem, M.; Murali, M.
2018-04-01
Cloud computing, which has attracted extensive attention from research communities and industry, offers a large pool of computing resources, such as storage, processing power, applications and services, through virtualized sharing. Cloud users are vended resources on demand. A file outsourced by a cloud user can easily be tampered with, because it is stored in a third-party service provider's databases and the user retains no control over it; there is thus no inherent integrity guarantee, and providing security assurance for user data has become a primary concern for cloud service providers. Cloud servers accept no responsibility for data loss and do not themselves provide security assurance for user data. Remote data integrity checking (RDIC) allows a client to verify that a data storage server is really storing an owner's data truthfully. RDIC comprises a security model and ID-based RDIC, which is responsible for the security of each server and ensures the data privacy of cloud users against the third-party verifier. In general, by running a two-party RDIC protocol, clients can check the integrity of their cloud data themselves; however, in the two-party scenario a verification result produced by either the data holder or the cloud server may be considered biased. The public verifiability feature of RDIC gives all users the privilege to verify whether the original data have been modified. To ensure the transparency of publicly verifiable RDIC protocols, we assume there exists a third-party auditor (TPA) with the knowledge and efficiency to carry out the verification work.
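The basic challenge-response idea behind integrity checking can be illustrated with a deliberately naive hash-based sketch; real RDIC protocols instead use homomorphic authenticators so that neither the owner nor the verifier needs a full copy of the file.

```python
import hashlib, os

def fingerprint(data: bytes, nonce: bytes) -> bytes:
    """Digest binding a random nonce to the stored data."""
    return hashlib.sha256(nonce + data).digest()

# Owner side: before outsourcing, precompute the response for a secret nonce.
data = b"outsourced file contents"
nonce = os.urandom(16)
expected = fingerprint(data, nonce)

# Later: challenge the server with the nonce; an honest server recomputes
# the digest over the data it actually stores.
server_response = fingerprint(data, nonce)   # server's recomputation
assert server_response == expected           # integrity holds for this nonce
```

Note the limitation this sketch exposes: with a plain hash, the challenger must either keep the data or precompute one response per nonce, which is exactly what the homomorphic authenticators used in real RDIC schemes avoid.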
A Functional Approach to Hyperspectral Image Analysis in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, A.; Lindholm, D. M.; Coddington, O.; Pilewskie, P.
2017-12-01
Hyperspectral image volumes are very large. A hyperspectral image analysis (HIA) may use 100 TB of data, a huge barrier to its use. Hylatis is a new NASA project to create a toolset for HIA. Through web notebooks and cloud technology, Hylatis will provide a more interactive experience for HIA by defining and implementing concepts and operations for HIA, identified and vetted by subject matter experts, and callable within a general-purpose language, particularly Python. Hylatis leverages LaTiS, a data access framework developed at LASP. With an OPeNDAP-compliant interface plus additional server-side capabilities, the LaTiS API provides a uniform interface to virtually any data source, and has been applied to various storage systems (file systems, databases, remote servers) and in various domains, including space science, systems administration and stock quotes. In the LaTiS architecture, data "adapters" read data into a data model, where server-side computations occur. Data "writers" write data from the data model into the desired format. The Hylatis difference is the data model. In LaTiS, data are represented as mathematical functions of independent and dependent variables. Domain semantics are not present at this level, but are instead present in higher software layers. The benefit of a domain-agnostic, mathematical representation is having the power of math, particularly functional algebra, unconstrained by domain semantics. This agnosticism supports reusable server-side functionality applicable in any domain, such as statistical, filtering, or projection operations. Algorithms to aggregate or fuse data can be simpler because domain semantics are separated from the math. Hylatis will map the functional model onto the Spark relational interface, thereby adding a functional interface to that big data engine. This presentation will discuss Hylatis goals, strategies, and current state.
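The function-valued data model lends itself to a compact sketch: a dataset is a mapping from independent to dependent variables, and operations compose rather than materialize. The names below are illustrative, not the actual LaTiS API.

```python
from typing import Callable, Iterable, Tuple

Sample = Tuple[float, float]                 # (wavelength_nm, radiance)

def dataset(samples: Iterable[Sample]) -> Callable[[float], float]:
    """Represent a dataset as a function from domain to range."""
    table = dict(samples)
    return lambda wavelength: table[wavelength]

def scale(f: Callable[[float], float], factor: float) -> Callable[[float], float]:
    """Server-side style operation: compose, don't materialize."""
    return lambda x: factor * f(x)

spectrum = dataset([(400.0, 1.2), (500.0, 3.4), (600.0, 2.8)])
calibrated = scale(spectrum, 0.97)
print(calibrated(500.0))  # 3.298
```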
NASA Astrophysics Data System (ADS)
Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.
2010-05-01
In our project "Extending the Virtual Solar Observatory (VSO)" we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.
A client–server framework for 3D remote visualization of radiotherapy treatment space
Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.
2013-01-01
Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D-camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-GPU (graphics processing unit) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high-quality radiation therapy into challenging environments. PMID:23440605
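A back-of-the-envelope calculation shows why GPU-side H.264 encoding is essential here: the frame size and rate come from the reported results, while the pixel depth is an assumption.

```python
# Raw (uncompressed) bandwidth of the reported stereo stream.
width, height = 1280, 960
bytes_per_pixel = 3            # 24-bit RGB (assumption)
eyes = 2                       # stereo pair
fps = 81                       # reported rate on gigabit Ethernet

raw_rate = width * height * bytes_per_pixel * eyes * fps   # bytes/second
print(f"{raw_rate * 8 / 1e9:.1f} Gbit/s raw")  # ~4.8 Gbit/s, so compression is essential
```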
psRNATarget: a plant small RNA target analysis server
Dai, Xinbin; Zhao, Patrick Xuechun
2011-01-01
Plant endogenous non-coding short small RNAs (20–24 nt), including microRNAs (miRNAs) and a subset of small interfering RNAs (ta-siRNAs), play an important role in gene expression regulatory networks (GRNs). For example, many transcription factors and development-related genes have been reported as targets of these regulatory small RNAs. Although a number of miRNA target prediction algorithms and programs have been developed, most of them were designed for animal miRNAs, which differ significantly from plant miRNAs in the target recognition process. These differences demand the development of separate plant miRNA (and ta-siRNA) target analysis tool(s). We present psRNATarget, a plant small RNA target analysis server, which features two important analysis functions: (i) reverse complementary matching between small RNA and target transcript using a proven scoring schema, and (ii) target-site accessibility evaluation by calculating unpaired energy (UPE) required to 'open' secondary structure around the small RNA's target site on the mRNA. psRNATarget incorporates recent discoveries in plant miRNA target recognition, e.g. it distinguishes translational and post-transcriptional inhibition, and it reports the number of small RNA/target site pairs that may affect small RNA binding activity to the target transcript. The psRNATarget server is designed for high-throughput analysis of next-generation data with an efficient distributed computing back-end pipeline that runs on a Linux cluster. The server front-end integrates three simplified user-friendly interfaces to accept user-submitted or preloaded small RNAs and transcript sequences, and outputs a comprehensive list of small RNA/target pairs along with online tools for batch downloading, keyword searching and results sorting. The psRNATarget server is freely available at http://plantgrn.noble.org/psRNATarget/. PMID:21622958
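The first analysis function, reverse-complementary matching under a scoring schema, can be sketched as below; the penalty weights are toy values, whereas psRNATarget's actual schema weights seed positions, G:U wobbles and bulges with calibrated parameters.

```python
# Toy reverse-complement scoring between a small RNA and a candidate site.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pairing_penalty(srna: str, site: str) -> float:
    """Lower is better; 0 means perfect reverse-complementarity."""
    penalty = 0.0
    for s, t in zip(srna, reversed(site)):
        if COMPLEMENT[s] == t:
            continue                       # Watson-Crick pair, no penalty
        elif (s, t) in {("G", "U"), ("U", "G")}:
            penalty += 0.5                 # G:U wobble, half penalty
        else:
            penalty += 1.0                 # mismatch
    return penalty

# A perfectly complementary toy pair scores 0.0.
print(pairing_penalty("UGACAGAAGAGAGUGAGCAC", "GUGCUCACUCUCUUCUGUCA"))
```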
Spatial Information Processing: Standards-Based Open Source Visualization Technology
NASA Astrophysics Data System (ADS)
Hogan, P.
2009-12-01
Spatial information intelligence is a global issue that will increasingly affect our ability to survive as a species. Collectively we must better appreciate the complex relationships that make life on Earth possible. Providing spatial information in its native context can accelerate our ability to process that information. To maximize this ability to process information, three basic elements are required: data delivery (server technology), data access (client technology), and data processing (information intelligence). NASA World Wind provides open source client and server technologies based on open standards. The possibilities for data processing and data sharing are enhanced by this inclusive infrastructure for geographic information. It is interesting that this open source and open standards approach, unfettered by proprietary constraints, simultaneously provides for entirely proprietary use of this same technology. 1. WHY WORLD WIND? NASA World Wind began as a single program with specific functionality, to deliver NASA content. But as the possibilities for virtual globe technology became more apparent, we found that while enabling a new class of information technology, we were also getting in the way. Researchers, developers and even users expressed their desire for World Wind functionality in ways that would service their specific needs. They want it in their web pages. They want to add their own features. They want to manage their own data. They told us that only with this kind of flexibility could their objectives and the potential for this technology be truly realized. World Wind client technology is a set of development tools, a software development kit (SDK) that allows a software engineer to create applications requiring geographic visualization technology. 2. MODULAR COMPONENTRY Accelerated evolution of a technology requires that the essential elements of that technology be modular components such that each can advance independent of the others. World Wind therefore changed its mission from providing a single information browser to enabling a whole class of 3D geographic applications. Instead of creating a single program, World Wind is a suite of components that can be selectively used in any number of programs. World Wind technology can be a part of any application, or it can be a window in a web page. Or it can be extended with additional functionalities by application and web developers. World Wind makes it possible to include virtual globe visualization and server technology in support of any objective. The world community can continually benefit from advances made in the technology by NASA in concert with the world community. 3. OPEN SOURCE AND OPEN STANDARDS NASA World Wind is NASA Open Source software. This means that the source code is fully accessible for anyone to freely use, even in association with proprietary technology. Imagery and other data provided by the World Wind servers reside in the public domain, including the data server technology itself. This allows others to deliver their own geospatial data and to provide custom solutions based on users' specific needs.
Accessible high-throughput virtual screening molecular docking software for students and educators.
Jacob, Reed B; Andersen, Tim; McDougal, Owen M
2012-05-01
We survey low-cost high-throughput virtual screening (HTVS) computer programs for instructors who wish to demonstrate molecular docking in their courses. Since HTVS programs are a useful adjunct to the time-consuming and expensive wet bench experiments necessary to discover new drug therapies, the topic of molecular docking is core to the instruction of biochemistry and molecular biology. The availability of HTVS programs, coupled with decreasing costs and advances in computer hardware, has made computational approaches to drug discovery possible at institutional and non-profit budgets. This paper focuses on HTVS programs with graphical user interfaces (GUIs) that use either DOCK or AutoDock for the prediction of ligand binding: DockoMatic, PyRx, DockingServer, and MOLA, since their utility has been proven by the research community, they are free or affordable, and the programs operate on a range of computer platforms.
Virtual microscopy and digital cytology: state of the art.
Giansanti, Daniele; Grigioni, Mauro; D'Avenio, Giuseppe; Morelli, Sandra; Maccioni, Giovanni; Bondi, Arrigo; Giovagnoli, Maria Rosaria
2010-01-01
The paper examines a new technological scenario relevant to the introduction of digital cytology (D-CYT) in the health service. A detailed analysis of the state of the art regarding the introduction of D-CYT in the hospital and, more generally, across dispersed territory has been conducted. The analysis was conducted in the form of a review and was arranged in two parts: the first part focused on the technological tools needed to carry out a successful service (client-server architectures, e-learning, quality assurance issues); the second part focused on issues intended to help the introduction and evaluation of the technology (specific training in D-CYT, health technology assessment in routine application, data format standards and picture archiving and communication systems (PACS) implementation, image quality assessment, strategies of navigation, 3D virtual-reality potentialities). The work highlights future scenarios of action relevant to the introduction of the technology.
Litfin, Thomas; Zhou, Yaoqi; Yang, Yuedong
2017-04-15
The high cost of drug discovery motivates the development of accurate virtual screening tools. Binding-homology, which takes advantage of known protein-ligand binding pairs, has emerged as a powerful discrimination technique. In order to exploit all available binding data, modelled structures of ligand-binding sequences may be used to create an expanded structural binding template library. SPOT-Ligand 2 has demonstrated significantly improved screening performance by expanding the template library 15 times over the previous version. It also performed better than, or comparably to, other binding-homology approaches on the DUD and DUD-E benchmarks. The server is available online at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
The EarthServer Federation: State, Role, and Contribution to GEOSS
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Baumann, Peter
2016-04-01
The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. With its service interface rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from open-source OpenLayers and QGIS over open-source NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of the GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube/metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO s.r.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecasts (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.
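A WCPS query can be issued with a plain HTTP request, letting the server evaluate the expression and return only the small result; the endpoint and coverage name below are illustrative placeholders.

```python
import requests

endpoint = "https://earthserver.example.org/rasdaman/ows"  # hypothetical URL

# WCPS expression evaluated server-side: the average of a whole coverage
# comes back as one scalar instead of a bulk download.
query = """
for $c in (mean_summer_airtemp)
return avg($c)
"""
resp = requests.get(endpoint, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": query,
})
print(resp.text)
```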
Angiuoli, Samuel V.; White, James R.; Matalka, Malcolm; White, Owen; Fricke, W. Florian
2011-01-01
Background The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. Results We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Conclusions Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers. PMID:22028928
Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters
ERIC Educational Resources Information Center
Younge, Andrew J.
2016-01-01
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…
Primary and Secondary Virtual Learning in New Zealand: Examining Barriers to Achieving Maturity
ERIC Educational Resources Information Center
Barbour, Michael; Davis, Niki; Wenmoth, Derek
2016-01-01
This paper describes the organisational development of virtual learning in networked rural schools in New Zealand, specifically the obstacles that e-learning clusters of rural schools face in their journey to sustainability and maturity through the lens of the Ministry's Learning Communities Online Handbook. Analysis of a nationwide purposeful…
EarthServer - 3D Visualization on the Web
NASA Astrophysics Data System (ADS)
Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes
2013-04-01
EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware requirements, such as smartphones (e.g. iOS, Android) and different desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client, and on top of HTML5, WebGL and JavaScript, we have developed the X3DOM framework (www.x3dom.org), which makes it possible to embed declarative X3D scene graphs, an ISO-standard XML-based file format for representing 3D computer graphics, directly within HTML, thus enabling developers to rapidly design 3D content that blends seamlessly into HTML interfaces using JavaScript. This approach (commonly referred to as a polyfill layer) is used to mimic native web browser support for declarative 3D content and is an important component in our web client architecture.
NASA World Wind: A New Mission
NASA Astrophysics Data System (ADS)
Hogan, P.; Gaskins, T.; Bailey, J. E.
2008-12-01
Virtual Globes are well into their first generation, providing increasingly rich and beautiful visualization of more types and quantities of information. However, they are still mostly single and proprietary programs, akin to a web browser whose content and functionality are controlled and constrained largely by the browser's manufacturer. Today Google and Microsoft determine what we can and cannot see and do in these programs. NASA World Wind started out in nearly the same mode, a single program with limited functionality and information content. But as the possibilities of virtual globes became more apparent, we found that while enabling a new class of information visualization, we were also getting in the way. Many users want to provide World Wind functionality and information in their programs, not ours. They want it in their web pages. They want to include their own features. They told us that only with this kind of flexibility, could their objectives and the potential of the technology be truly realized. World Wind therefore changed its mission: from providing a single information browser to enabling a whole class of 3D geographic applications. Instead of creating one program, we create components to be used in any number of programs. World Wind is NASA open source software. With the source code being fully visible, anyone can readily use it and freely extend it to serve any use. Imagery and other information provided by the World Wind servers is also free and unencumbered, including the server technology to deliver geospatial data. World Wind developers can therefore provide exclusive and custom solutions based on user needs.
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.
2004-12-01
With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.
A convergent model for distributed processing of Big Sensor Data in urban engineering networks
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.
2017-01-01
The development and study of a convergent model of grid, cloud, fog and mobile computing for analytical processing of Big Sensor Data are reviewed. The model is intended for building monitoring systems for spatially distributed objects and processes in urban engineering networks. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for the processing and aggregation of sensor data at the network nodes and/or industrial controllers. Program agents are loaded onto these nodes to perform the computing tasks of primary processing and data aggregation. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU video cards supporting the Compute Unified Device Architecture (CUDA) at the third level. The mobile computing model is applied to visualize the results of intelligent analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional storage for the purpose of data mining and knowledge gaining.
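The fog tier's role, aggregating raw readings into integral indicators at the node before anything crosses the network, can be sketched as follows; the class and field names are illustrative assumptions, not the paper's implementation.

```python
from statistics import mean

class AggregationAgent:
    """Fog-node agent: buffer raw sensor readings, emit compact aggregates."""

    def __init__(self, window_size=60):
        self.window_size = window_size   # readings per aggregate
        self.buffer = []

    def ingest(self, reading: float):
        """Collect one raw reading; return an aggregate when the window fills."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.window_size:
            aggregate = {"mean": mean(self.buffer),
                         "min": min(self.buffer), "max": max(self.buffer)}
            self.buffer.clear()
            return aggregate             # forwarded upstream to the data center
        return None

agent = AggregationAgent(window_size=3)
for r in (20.1, 20.4, 20.2):
    out = agent.ingest(r)
print(out)  # one small record instead of three raw readings
```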
The Ever-Present Demand for Public Computing Resources. CDS Spotlight
ERIC Educational Resources Information Center
Pirani, Judith A.
2014-01-01
This Core Data Service (CDS) Spotlight focuses on public computing resources, including lab/cluster workstations in buildings, virtual lab/cluster workstations, kiosks, laptop and tablet checkout programs, and workstation access in unscheduled classrooms. The findings are derived from 758 CDS 2012 participating institutions. A dataset of 529…
NASA Astrophysics Data System (ADS)
Sieradzan, Adam K.; Makowski, Mariusz; Augustynowicz, Antoni; Liwo, Adam
2017-03-01
A general and systematic method for the derivation of the functional expressions for the effective energy terms in coarse-grained force fields of polymer chains is proposed. The method is based on the expansion of the potential of mean force of the system studied in the cluster-cumulant series and expanding the all-atom energy in the Taylor series in the squares of interatomic distances about the squares of the distances between coarse-grained centers, to obtain approximate analytical expressions for the cluster cumulants. The primary degrees of freedom to average about are the angles for collective rotation of the atoms contained in the coarse-grained interaction sites about the respective virtual-bond axes. The approach has been applied to the revision of the virtual-bond-angle, virtual-bond-torsional, and backbone-local-and-electrostatic correlation potentials for the UNited RESidue (UNRES) model of polypeptide chains, demonstrating the strong dependence of the torsional and correlation potentials on virtual-bond angles, not considered in the current UNRES. The theoretical considerations are illustrated with the potentials calculated from the ab initio potential-energy surface of terminally blocked alanine by numerical integration and with the statistical potentials derived from known protein structures. The revised torsional potentials correctly indicate that virtual-bond angles close to 90° result in the preference for the turn and helical structures, while large virtual-bond angles result in the preference for polyproline II and extended backbone geometry. The revised correlation potentials correctly reproduce the preference for the formation of β-sheet structures for large values of virtual-bond angles and for the formation of α-helical structures for virtual-bond angles close to 90°.
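For orientation, the generic cumulant expansion on which such derivations rest can be written schematically as below; this is the standard identity for a restricted free energy, in generic notation rather than the paper's exact UNRES cluster-cumulant factorization.

```latex
% Restricted free energy (potential of mean force) over the secondary
% degrees of freedom Y, expanded in cumulants \kappa_n of the all-atom
% energy E; schematic notation only.
W(\mathbf{X}) = -\frac{1}{\beta}\,\ln\big\langle e^{-\beta E(\mathbf{X},\mathbf{Y})}\big\rangle_{\mathbf{Y}}
              = \kappa_1 - \frac{\beta}{2}\,\kappa_2 + \frac{\beta^2}{6}\,\kappa_3 - \cdots
```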
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to take several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
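The capture-then-fan-out pattern described here can be sketched with Python's standard multiprocessing module standing in for the networked 4Sight instances; the per-frame analysis is a toy placeholder.

```python
from multiprocessing import Pool

def process_frame(frame):
    """Stand-in for the per-image phase-analysis step."""
    return sum(frame) / len(frame)       # toy reduction per frame

if __name__ == "__main__":
    # Capture first (fast), then distribute analysis (slow) across workers.
    frames = [[i, i + 1, i + 2] for i in range(100)]  # ~100 captured images
    with Pool(processes=8) as pool:
        results = pool.map(process_frame, frames)     # fan out, then collate
    measurement = sum(results) / len(results)         # combine into one result
    print(measurement)
```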
NASA Astrophysics Data System (ADS)
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
2007-12-01
The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER imagery or glacier outlines from 2002 only, or from autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
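Because the map interface is a standards-compliant WMS, any OGC client can request glacier layers programmatically; the sketch below uses OWSLib with an illustrative service URL and layer name rather than the actual GLIMS endpoint.

```python
from owslib.wms import WebMapService

# Illustrative WMS endpoint and layer name, not the real GLIMS service.
wms = WebMapService("https://glims.example.org/mapserv", version="1.1.1")
img = wms.getmap(layers=["glacier_outlines"],   # hypothetical layer name
                 srs="EPSG:4326",
                 bbox=(-180, -90, 180, 90),     # whole-world request
                 size=(1024, 512),
                 format="image/png",
                 transparent=True)
with open("glaciers.png", "wb") as f:
    f.write(img.read())                          # rendered map tile as PNG
```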
NASA Astrophysics Data System (ADS)
Seamon, E.; Gessler, P. E.; Flathers, E.; Sheneman, L.; Gollberg, G.
2013-12-01
The Regional Approaches to Climate Change for Pacific Northwest Agriculture (REACCH PNA) is a five-year USDA/NIFA-funded coordinated agriculture project to examine the sustainability of cereal crop production systems in the Pacific Northwest in relation to ongoing climate change. As part of this effort, an extensive data management system has been developed to enable researchers, students, and the public to upload, manage, and analyze various data. The REACCH PNA data management team has developed three core systems to encompass cyberinfrastructure and data management needs: 1) the reacchpna.org portal (https://www.reacchpna.org) is the entry point for all public and secure information, with secure access by REACCH PNA members for data analysis, uploading, and informational review; 2) the REACCH PNA Data Repository is a replicated, redundant database server environment that allows for file and database storage and access to all core data; and 3) the REACCH PNA Libraries are functional groupings of data for REACCH PNA members and the public, based on their access level. These libraries are accessible through the https://www.reacchpna.org portal. The developed system is structured in a virtual server environment (data, applications, web) that includes a geospatial database/geospatial web server for web mapping services (ArcGIS Server), use of ESRI's Geoportal Server for data discovery and metadata management (under the ISO 19115-2 standard), Thematic Realtime Environmental Distributed Data Services (THREDDS) for data cataloging, and Interactive Python notebook server (IPython) technology for data analysis. REACCH systems are housed and maintained by the Northwest Knowledge Network project (www.northwestknowledge.net), which provides data management services to support research. Initial project data harvesting and meta-tagging efforts have resulted in the interrogation and loading of over 10 terabytes of climate model output, regional entomological data, agricultural and atmospheric information, as well as imagery, publications, videos, and other soft content. In addition, the outlined data management approach has focused on the integration and interconnection of hard data (raw data output) with associated publications, presentations, or other narrative documentation through metadata lineage associations. This harvest-and-consume data management methodology could also be applied to other research team environments that involve large and divergent data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R R; Brugger, E; Cook, R
The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool, providing classes on how to use the tool, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls. The visualization production systems include NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk usage view which works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.
ALICE HLT Cluster operation during ALICE Run 2
NASA Astrophysics Data System (ADS)
Lehrbach, J.; Krzewicki, M.; Rohr, D.; Engel, H.; Gomez Ramirez, A.; Lindenstruth, V.; Berzano, D.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs events and compresses the data in real time. The data compression performed by the HLT is a vital part of data taking, especially during heavy-ion runs, as it makes it possible to store the recorded data; the reliability of the whole cluster is therefore an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster, we have automated the operation of the cluster as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, such as temperatures, network traffic and CPU load, are monitored with Zabbix. During periods without beam, the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs in order to maximize the usage of the cluster. To prevent interference with normal HLT operations, we separate the virtual machines running the Grid jobs from normal HLT operation via virtual networks (VLANs). In this paper we give an overview of ALICE HLT operation in 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Fritz, John Floren
2013-08-27
Minimega is a simple emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. Minimega attempts to allow experiments to be brought up quickly with nearly no configuration. Minimega also includes tools for simple cluster management, as well as tools for creating Linux based virtual machine images.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
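The automatic creation of virtual clusters can be pictured with a hedged sketch like the one below, which launches a small cluster of EC2 instances using the boto3 library; this is not the authors' SCC toolset, and the AMI ID, instance type, and key pair name are placeholders.

```python
# Illustrative sketch: start a 4-node "virtual cluster" on EC2 with boto3.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
nodes = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder scientific VM image
    InstanceType="c5.xlarge",         # placeholder instance type
    MinCount=4, MaxCount=4,           # four worker nodes
    KeyName="my-keypair",             # placeholder SSH key pair
)
for node in nodes:
    node.wait_until_running()
    node.reload()                     # refresh to pick up the public DNS name
    print(node.id, node.public_dns_name)
```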
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
Text grouping in patent analysis using adaptive K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Shanie, Tiara; Suprijadi, Jadi; Zulhanif
2017-03-01
Patents are a form of intellectual property. Patent analysis is essential for understanding technological development in individual countries and worldwide. This study uses patent documents about green tea retrieved from the Espacenet server. Patent documents related to tea technology are numerous and widespread, which makes information retrieval (IR) difficult for users. It is therefore necessary to categorize the documents into specific groups based on the related terms they contain. This study applies statistical text mining to green tea patent title data in two phases: data preparation and data analysis. The data preparation phase uses text mining methods, and the data analysis phase is carried out statistically using a cluster analysis algorithm, the adaptive K-means clustering algorithm. Based on the maximum silhouette value, the analysis produced 87 clusters associated with fifteen terms that can be used to support information retrieval.
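To make the silhouette-guided choice of K concrete, here is a minimal Python sketch using scikit-learn; it is illustrative only (the paper's adaptive algorithm may differ in detail) and the toy patent titles are invented.

```python
# Sketch: pick the K-means clustering of TF-IDF'd titles that maximizes
# the silhouette score over a range of candidate K values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

titles = ["green tea extraction process", "tea polyphenol composition",
          "apparatus for brewing tea", "green tea beverage packaging"]
X = TfidfVectorizer().fit_transform(titles)

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, X.shape[0]):                  # candidate cluster counts
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    score = silhouette_score(X, model.labels_)
    if score > best_score:
        best_k, best_score, best_labels = k, score, model.labels_

print(best_k, round(best_score, 3), best_labels)
```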
Graphics interfaces and numerical simulations: Mexican Virtual Solar Observatory
NASA Astrophysics Data System (ADS)
Hernández, L.; González, A.; Salas, G.; Santillán, A.
2007-08-01
Preliminary results associated with the computational development and creation of the Mexican Virtual Solar Observatory (MVSO) are presented. Basically, the MVSO prototype consists of two parts: the first is related to observations that have been made during the past ten years at the Solar Observation Station (EOS) and at the Carl Sagan Observatory (OCS) of the Universidad de Sonora in Mexico. The second part is associated with the creation and manipulation of a database produced by numerical simulations of solar phenomena performed with the ZEUS-3D MHD code. The development of this prototype was made using MySQL, Apache, Java and VSO 1.2, based on GNU and the 'open source' philosophy. A graphic user interface (GUI) was created in order to run web-based, remote numerical simulations. For this purpose, Mono was used, because it provides the necessary software to develop and run .NET client and server applications on Linux. Although this project is still under development, we hope to have access, by means of this portal, to other virtual solar observatories, to be able to count on a database created through numerical simulations and, where appropriate, to perform simulations associated with solar phenomena.
FROG: Time Series Analysis for the Web Service Era
NASA Astrophysics Data System (ADS)
Allan, A.
2005-12-01
The FROG application is part of the next generation Starlink{http://www.starlink.ac.uk} software work (Draper et al. 2005) and is released under the GNU Public License{http://www.gnu.org/copyleft/gpl.html} (GPL). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to the user for use in their own code, and can be used remotely over the Grid as part of the Virtual Observatory (VO).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berz, Martin; Makino, Kyoko
The ARRA funds were utilized to acquire a cluster of high performance computers, consisting of one Altus 2804 Server based on a Quad AMD Opteron 6174 12C with 4 2.2 GHz nodes of 12 cores each, resulting in 48 directly usable cores, as well as a Relion 1751 Server using an Intel Xeon X5677 consisting of 4 3.46 GHz cores supporting 8 threads. Both systems run the Unix flavor CentOS, which is designed for use without the need for frequent updates, greatly enhancing their reliability. The systems are used to operate our COSY INFINITY environment, which supports MPI parallelization. The units arrived at MSU in September 2010 and were taken into operation shortly thereafter.
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyberinfrastructure that help researchers advance science. With its maturity and significant advances, cloud computing has recently emerged as an alternative paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker images for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect interactively data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
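As a hedged illustration of the containerized-deployment idea, the sketch below starts a THREDDS Data Server container with the Docker SDK for Python; the image tag is an assumption, so consult Unidata's repositories for the currently supported image name.

```python
# Sketch: run a THREDDS container and expose Tomcat's port on the host.
import docker

client = docker.from_env()
tds = client.containers.run(
    "unidata/thredds-docker:latest",  # assumed image name and tag
    detach=True,                      # return immediately, run in background
    ports={"8080/tcp": 8080},         # host port 8080 -> container port 8080
    name="thredds",
)
print(tds.name, tds.status)
```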
A secure and efficiently searchable health information architecture.
Yasnoff, William A
2016-06-01
Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be sequentially searched, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed. Copyright © 2016 Elsevier Inc. All rights reserved.
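The decrypt-and-scan search pattern can be sketched in a few lines of Python; this is a toy stand-in in which a thread pool substitutes for the paper's on-demand virtual servers and Fernet substitutes for whatever encryption a real personal grid would use.

```python
# Sketch: each record is individually encrypted, so a query must decrypt
# and scan records one by one; workers search shards in parallel.
from concurrent.futures import ThreadPoolExecutor
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # a real grid would use per-person keys
f = Fernet(key)
records = [f.encrypt(b"diagnosis: asthma"),
           f.encrypt(b"diagnosis: diabetes")]

def matches(token, term=b"asthma"):
    return term in f.decrypt(token)    # decrypt, then scan sequentially

with ThreadPoolExecutor(max_workers=4) as pool:   # workers ~ virtual servers
    hits = list(pool.map(matches, records))
print(hits)                            # [True, False]
```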
NASA Astrophysics Data System (ADS)
Schaaf, Kjeld; Overeem, Ruud
2004-06-01
Moore’s law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limits of processor performance, reducing the cost per raw flop even faster than Moore’s law predicts. Next to the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. Typical Beowulf clusters of PCs are well known. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The processing power required for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the use of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an InfiniBand host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
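To show what the streaming workload looks like, here is a toy FX correlator in NumPy under assumed parameters (4 antennas, 256-sample spectra, simulated noise); in the platform described above, the X stage would run on the graphics processors.

```python
# Toy FX correlator: channelize each antenna stream with an FFT (F stage),
# then cross-multiply antenna pairs and average over spectra (X stage).
import numpy as np

n_ant, n_chan, n_spectra = 4, 256, 100
rng = np.random.default_rng(0)
streams = rng.normal(size=(n_ant, n_spectra, n_chan))  # simulated voltages

spectra = np.fft.rfft(streams, axis=-1)                # F stage: channelize
vis = np.einsum("ats,bts->abs",                        # X stage: correlate,
                spectra, spectra.conj()) / n_spectra   # averaging over spectra
print(vis.shape)   # (4, 4, 129): antenna x antenna x frequency channel
```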
Clemente, Miriam; Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar
2014-06-27
To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small-animal phobia. The objective of our study was to evaluate these brain activations using virtual environments, which have the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. We analyzed brain activation in a group of phobic people while they navigated a virtual environment that included the small animals that were the object of their phobia. We found brain activation mainly in the left inferior occipital lobe (P<.05 corrected, cluster size=36), related to enhanced visual attention to the phobic stimuli, and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), an area previously related to the feeling of self-awareness. In our opinion, these results demonstrate that virtual stimuli can elicit brain activations consistent with previous studies using still images, but in an environment closer to the real situations subjects face in their daily lives.
Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosink, Luke; Wu, Kesheng; Bethel, E. Wes
2009-06-02
The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU- and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
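A much-simplified, single-threaded sketch of the bin-based idea follows; the real DP-BIS evaluates each record with its own hardware thread, while this Python version only shows how binning restricts a range query to candidate rows.

```python
# Sketch: bin the base data, cluster row IDs by bin, and answer a range
# query by pulling whole bins and re-checking candidates against the range.
import numpy as np

data = np.random.default_rng(1).uniform(0, 100, size=1_000_000)
edges = np.linspace(0, 100, 65)                # 64 equal-width bins
bin_of = np.digitize(data, edges) - 1          # bin number of each record
order = np.argsort(bin_of, kind="stable")      # row IDs clustered by bin
starts = np.searchsorted(bin_of[order], np.arange(65))

def range_query(lo, hi):
    b_lo, b_hi = np.digitize([lo, hi], edges) - 1
    ids = order[starts[b_lo]:starts[b_hi + 1]]           # candidate rows
    return ids[(data[ids] >= lo) & (data[ids] < hi)]     # exact re-check

print(len(range_query(10.0, 20.0)))            # about 10% of the records
```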
eWaterCycle visualisation: combining the strength of NetCDF and Web Map Service: ncWMS
NASA Astrophysics Data System (ADS)
Hut, R.; van Meersbergen, M.; Drost, N.; Van De Giesen, N.
2016-12-01
As a result of the eWaterCycle global hydrological forecast, we have created Cesium-ncWMS, a web application based on ncWMS and Cesium. ncWMS is a server-side application capable of reading any NetCDF file written using the Climate and Forecasting (CF) conventions and making the data available as a Web Map Service (WMS). ncWMS automatically determines the available variables in a file and creates maps colored according to the map data and a user-selected color scale. Cesium is a JavaScript 3D virtual globe library. It uses WebGL for rendering, which makes it very fast, and it is capable of displaying a wide variety of data types such as vectors, 3D models, and 2D maps. The forecast results are automatically uploaded to our web server running ncWMS. In turn, the web application can be used to change the settings for color maps and displayed data. The server uses the settings provided by the web application, together with the data in NetCDF, to provide WMS image tiles, time series data and legend graphics to the Cesium-ncWMS web application. The user can simultaneously zoom in to the very high resolution forecast results anywhere in the world and get time series data for any point on the globe. The Cesium-ncWMS visualisation combines a global overview with locally relevant information in any browser. See the visualisation live at forecast.ewatercycle.org
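Under the hood, each map tile comes from a standard WMS GetMap request like the hedged sketch below; the server URL and layer name are placeholders rather than the actual eWaterCycle deployment.

```python
# Sketch: fetch one rendered map image for a layer, time step and bounding
# box from an ncWMS endpoint using a WMS 1.3.0 GetMap request.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "discharge",            # assumed forecast variable name
    "CRS": "CRS:84",                  # lon/lat coordinate reference
    "BBOX": "-10,40,10,60",           # window over western Europe
    "WIDTH": 512, "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2016-12-01T00:00:00Z",   # CF time coordinate to render
    "STYLES": "",
}
r = requests.get("https://example.org/ncWMS/wms", params=params)
with open("discharge.png", "wb") as out:
    out.write(r.content)              # a colored map tile, ready to display
```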
DOVIS: an implementation for high-throughput virtual screening using AutoDock.
Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques
2008-02-27
Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free, interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power, not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
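The compute engine's data-parallel pattern, in which each MPI process reads a subset of the data and results are reduced back to one process, can be sketched with mpi4py as a stand-in for VisIt's C++ engine; run it with, for example, mpiexec -n 4 python sketch.py.

```python
# Sketch: every rank processes its own slab of data; rank 0 gathers the
# reduced result, mirroring the engine/viewer division of labor.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.random.default_rng(rank).normal(size=100_000)  # this rank's slab
local_max = local.max()                                   # local analysis

global_max = comm.reduce(local_max, op=MPI.MAX, root=0)   # combine results
if rank == 0:
    print(f"{size} ranks, global max = {global_max:.3f}")
```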
Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q
2016-07-26
To quantify the relationship and demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms, in which FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal and distal to and at the aneurysm ostium, as well as inside the aneurysm, were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies between structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results whether including all parameters (n=22) or only non-redundant ones (n=2, 3 and 4). Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
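A hedged sketch of the hierarchical-cluster-analysis step on synthetic parameters: clustering on one minus the absolute correlation makes strongly inter-correlated (redundant) parameters merge first, which is the behavior the study reports.

```python
# Sketch: HCA over three synthetic parameters; the two correlated velocity
# parameters fall into one cluster, the structural parameter into another.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
v = rng.normal(size=10)                    # velocity in the parent artery
params = np.vstack([
    v,                                     # velocity proximal to the ostium
    0.9 * v + 0.1 * rng.normal(size=10),   # intra-aneurysmal velocity (redundant)
    rng.normal(size=10),                   # dome-to-neck ratio (independent)
])

dist = 1 - np.abs(np.corrcoef(params))     # distance = 1 - |correlation|
Z = linkage(dist[np.triu_indices(3, k=1)], method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. [1 1 2]
```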
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created the need for high performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought after technology. Typical desktop computers could be used for cluster computing, but are not intended for constant full-speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom built desktop computers can be arranged in a rack mount configuration, gaining the space savings of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster that multiplies the performance of a single desktop machine can be built from off-the-shelf components, while minimizing occupied space and still remaining cost effective.
A Tale Of 160 Scientists, Three Applications, a Workshop and a Cloud
NASA Astrophysics Data System (ADS)
Berriman, G. B.; Brinkworth, C.; Gelino, D.; Wittman, D. K.; Deelman, E.; Juve, G.; Rynge, M.; Kinney, J.
2013-10-01
The NASA Exoplanet Science Institute (NExScI) hosts the annual Sagan Workshops, thematic meetings aimed at introducing researchers to the latest tools and methodologies in exoplanet research. The theme of the Summer 2012 workshop, held from July 23 to July 27 at Caltech, was to explore the use of exoplanet light curves to study planetary system architectures and atmospheres. A major part of the workshop was to use hands-on sessions to instruct attendees in the use of three open source tools for the analysis of light curves, especially from the Kepler mission. Each hands-on session involved the 160 attendees using their laptops to follow step-by-step tutorials given by experts. One of the applications, PyKE, is a suite of Python tools designed to reduce and analyze Kepler light curves; these tools can be invoked from the Unix command line or a GUI in PyRAF. The Transit Analysis Package (TAP) uses Markov Chain Monte Carlo (MCMC) techniques to fit light curves under the Interactive Data Language (IDL) environment, and Transit Timing Variations (TTV) uses IDL tools and Java-based GUIs to confirm and detect exoplanets from timing variations in light curve fitting. Rather than attempt to run these diverse applications on the inevitable wide range of environments on attendees' laptops, they were run instead on the Amazon Elastic Compute Cloud (EC2). The cloud offers features ideal for this type of short-term need: computing and storage services are made available on demand for as long as needed, and a processing environment can be customized and replicated as needed. The cloud environment included an NFS file server virtual machine (VM), 20 client VMs for use by attendees, and a VM to enable ftp downloads of the attendees' results. The file server was configured with a 1 TB Elastic Block Storage (EBS) volume (network-attached storage mounted as a device) containing the application software and attendees' home directories. The clients were configured to mount the applications and home directories from the server via NFS. All VMs were built with CentOS version 5.8. Attendees connected their laptops to one of the client VMs using the Virtual Network Computing (VNC) protocol, which enabled them to interact with a remote desktop GUI during the hands-on sessions. We will describe the mechanisms for handling security, failovers, and licensing of commercial software. In particular, IDL licenses were managed through a server at Caltech, connected to the IDL instances running on Amazon EC2 via a Secure Shell (ssh) tunnel. The system operated flawlessly during the workshop.
Assessment of Template-Based Modeling of Protein Structure in CASP11
Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.
2016-01-01
We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms, including side chains. The top groups on 39 human-server targets based on model 1 predictions were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top groups on 81 targets by server groups based on model 1 predictions were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to that of predictors in CASP10, with slightly better results on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster of models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with a density mode at a GDT-TS value of ~20. The models in this peak in the density were either compact models with entirely the wrong fold or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions, instead of Z-scores, which assume a unimodal, Gaussian distribution. PMID:27081927
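The density-driven alternative to Z-scores can be illustrated with a kernel density estimate over synthetic GDT-TS scores; the bimodal shape below mimics the situation described above, with one mode near 20 and one in the 40-90 range.

```python
# Sketch: a KDE exposes the two modes that a single-Gaussian Z-score
# analysis would blur together.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
gdt_ts = np.concatenate([rng.normal(20, 5, 80),     # divergent/wrong-fold models
                         rng.normal(70, 10, 120)])  # near-native cluster

kde = gaussian_kde(gdt_ts)
grid = np.linspace(0, 100, 200)
density = kde(grid)
print(round(grid[np.argmax(density)], 1))  # dominant mode, near GDT-TS ~ 70
```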
Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju
2013-01-01
The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
BRepertoire: a user-friendly web server for analysing antibody repertoire data.
Margreitter, Christian; Lu, Hui-Chun; Townsend, Catherine; Stewart, Alexander; Dunn-Walters, Deborah K; Fraternali, Franca
2018-04-14
Antibody repertoire analysis by high throughput sequencing is now widely used, but a persisting challenge is enabling immunologists to explore their data to discover discriminating repertoire features for their own particular investigations. Computational methods are necessary for large-scale evaluation of antibody properties. We have developed BRepertoire, a suite of user-friendly web-based software tools for large-scale statistical analyses of repertoire data. The software is able to use data preprocessed by IMGT, and performs statistical and comparative analyses with versatile plotting options. BRepertoire has been designed to operate in various modes, for example analysing sequence-specific V(D)J gene usage, discerning physico-chemical properties of the CDR regions and clustering of clonotypes. Those analyses are performed on the fly by a number of R packages and are deployed via a Shiny web platform. The user can download the analysed data in different table formats and save the generated plots as image files ready for publication. We believe BRepertoire to be a versatile analytical tool that complements experimental studies of immune repertoires. To illustrate the server's functionality, we show use cases including differential gene usage in a vaccination dataset and analysis of CDR3H properties in old and young individuals. The server is accessible under http://mabra.biomed.kcl.ac.uk/BRepertoire.
Krüger, Dennis M; Ahmed, Aqeel; Gohlke, Holger
2012-07-01
The NMSim web server implements a three-step approach for multiscale modeling of protein conformational changes. First, the protein structure is coarse-grained using the FIRST software. Second, a rigid cluster normal-mode analysis provides low-frequency normal modes. Third, these modes are used to extend the recently introduced idea of constrained geometric simulations by biasing backbone motions of the protein, whereas side chain motions are biased toward favorable rotamer states (NMSim). The generated structures are iteratively corrected regarding steric clashes and stereochemical constraint violations. The approach allows performing three simulation types: unbiased exploration of conformational space; pathway generation by a targeted simulation; and radius of gyration-guided simulation. On a data set of proteins with experimentally observed conformational changes, the NMSim approach has been shown to be a computationally efficient alternative to molecular dynamics simulations for conformational sampling of proteins. The generated conformations and pathways of conformational transitions can serve as input to docking approaches or more sophisticated sampling techniques. The web server output is a trajectory of generated conformations, Jmol representations of the coarse-graining and a subset of the trajectory and data plots of structural analyses. The NMSim webserver, accessible at http://www.nmsim.de, is free and open to all users with no login requirement.
PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition.
Zuo, Yongchun; Li, Yuan; Chen, Yingli; Li, Guangpeng; Yan, Zhenhe; Yang, Lei
2017-01-01
Reduced amino acid alphabets provide a powerful means of both simplifying protein complexity and identifying functionally conserved regions. However, different protein problems may require different clustering methods. Encouraged by the success of the pseudo-amino acid composition algorithm, we developed a freely available web server called PseKRAAC (pseudo K-tuple reduced amino acid composition). By implementing reduced amino acid alphabets, protein complexity can be significantly simplified, which decreases the chance of overfitting, lowers the computational burden, and reduces information redundancy. PseKRAAC delivers more capability for protein research by incorporating three crucial parameters that describe protein composition. Users can easily generate many different modes of PseKRAAC tailored to their needs by selecting various reduced amino acid alphabets and other characteristic parameters. It is anticipated that the PseKRAAC web server will become a very useful tool in computational proteomics and protein sequence analysis. Freely available on the web at http://bigdata.imu.edu.cn/psekraac. Contacts: yczuo@imu.edu.cn, imu.hema@foxmail.com or yanglei_hmu@163.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
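A toy version of the reduced-alphabet K-tuple idea (not PseKRAAC's actual grouping, which offers many alphabets and parameter choices) can be written in a few lines; the five-group mapping below is an invented example.

```python
# Sketch: map the 20 residues onto 5 invented groups, then compute the
# normalized K-tuple composition of the reduced sequence.
from collections import Counter
from itertools import product

GROUPS = {"A": "AVLIMC", "R": "FWYH", "P": "STNQGP", "N": "DE", "B": "KR"}
REDUCED = {aa: g for g, members in GROUPS.items() for aa in members}

def ktuple_composition(seq, k=2):
    reduced = "".join(REDUCED[aa] for aa in seq)
    counts = Counter(reduced[i:i + k] for i in range(len(reduced) - k + 1))
    total = sum(counts.values())
    return {"".join(t): counts["".join(t)] / total
            for t in product(GROUPS, repeat=k)}

comp = ktuple_composition("MKVLAADERT", k=2)
print(comp["AA"])   # fraction of AA 2-tuples in the reduced sequence
```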
ATLAS TDAQ System Administration: evolution and re-design
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Bogdanchikov, A.; Brasolin, F.; Contescu, C.; Dubrov, S.; Fazio, D.; Korol, A.; Lee, C. J.; Scannicchio, D. A.; Twomey, M. S.
2015-12-01
The ATLAS Trigger and Data Acquisition system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider at CERN. The online farm is composed of ∼3000 servers, processing the data read out from ∼100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown there has been a tremendous amount of work done by the ATLAS Trigger and Data Acquisition System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High-Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration of the configuration management systems to Puppet has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and Quattor have consequently been dismissed. Virtual machine usage has been investigated and tested, and many of the core servers are now running on virtual machines. Virtualisation has also been used to adapt the High-Level Trigger farm as a batch system, which has been used for running Monte Carlo production jobs that are mostly CPU- and not I/O-bound. Finally, monitoring the health and status of ∼3000 machines in the experimental area is obviously of the utmost importance, so the obsolete Nagios v2 has been replaced with Icinga, complemented by Ganglia as a performance data provider. This paper reports on the actions taken by the system administrators to improve and produce a system capable of performing well for the next three years of ATLAS data taking.
Pradeepkiran, Jangampalli Adi; Sainath, Sri Bhashyam; Kumar, Konidala Kranthi; Bhaskar, Matcha
2015-01-01
Brucella melitensis 16M is a Gram-negative coccobacillus that infects both animals and humans. It causes a disease known as brucellosis, which is characterized by acute febrile illness in humans and causes abortions in livestock. To prevent and control brucellosis, identification of putative drug targets is crucial. The present study aimed to identify drug targets in B. melitensis 16M using a subtractive genomic approach. We used available database repositories (Database of Essential Genes, Kyoto Encyclopedia of Genes and Genomes Automatic Annotation Server, and Kyoto Encyclopedia of Genes and Genomes) to identify putative genes that are nonhomologous to humans and essential for the pathogen B. melitensis 16M. The results revealed that, within the pathogen's 3 Mb genome, there were 53 putative characterized genes and 13 uncharacterized hypothetical genes; further, Basic Local Alignment Search Tool protein analysis showed that one hypothetical protein closely resembles (50%) the Silicibacter pomeroyi DUF1285 family protein (2RE3). A homology model of the target was then constructed using MODELLER 9.12 and optimized through the variable target function method by molecular dynamics optimization with simulated annealing. The stereochemical quality of the restrained model was evaluated with the PROCHECK, VERIFY-3D, ERRAT, and WHATIF servers. Furthermore, structure-based virtual screening was carried out against the predicted active site of the protein using glycerol structural analogs from the PubChem database. We identified five best inhibitors with strong affinities, stable interactions, and reliable drug-like properties. Hence, these leads might be used as the most effective inhibitors of the modeled protein. The outcome of the present work of virtual screening of putative gene targets might facilitate the design of potential drugs for better treatment against brucellosis. PMID:25834405
On the Selection of Models for Runtime Prediction of System Resources
NASA Astrophysics Data System (ADS)
Casolari, Sara; Colajanni, Michele
Applications and services delivered through large Internet Data Centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources and dynamic migrations. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need for systems to evaluate their own global behavior and change it when the evaluation indicates that they are not accomplishing what they were intended to do, or that some relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series coming from samples of monitored system resources, such as disk, CPU and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because the measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Moreover, no existing criteria can help us choose a suitable prediction model and related parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model statically or dynamically. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
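As a concrete instance of the model-and-parameter question raised above, the sketch below runs one-step-ahead simple exponential smoothing over a synthetic CPU-load trace and compares the error of three fixed smoothing parameters; real traces would come from resource monitors.

```python
# Sketch: one-step-ahead simple exponential smoothing on a noisy series;
# which alpha works best depends on the noise, which is the chapter's point.
import numpy as np

rng = np.random.default_rng(0)
load = 0.5 + 0.2 * np.sin(np.arange(300) / 20) + 0.1 * rng.normal(size=300)

def ses_forecast(series, alpha):
    pred = np.empty_like(series)
    pred[0] = series[0]
    for t in range(1, len(series)):
        # New forecast blends the last observation with the old forecast.
        pred[t] = alpha * series[t - 1] + (1 - alpha) * pred[t - 1]
    return pred

for alpha in (0.2, 0.5, 0.8):          # static parameter choices compared
    mse = np.mean((ses_forecast(load, alpha) - load) ** 2)
    print(f"alpha={alpha}: MSE={mse:.4f}")
```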
Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses
ERIC Educational Resources Information Center
Harrigan, Kevin A.
2009-01-01
Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
Terascale Cluster for Advanced Turbulent Combustion Simulations
2008-07-25
We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active ... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and ...
Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform
NASA Astrophysics Data System (ADS)
Hu, Yong
2017-12-01
In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this data storage model, the server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with load balancing that is shown to be feasible and to offer practical advantages.
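A hedged sketch of the core idea, with scikit-learn's MLP (a back-propagation network) standing in for the paper's unspecified BP implementation; the attributes, routing policy, and training data are all synthetic.

```python
# Sketch: learn a mapping from data attributes (size, access frequency)
# to the server cluster that should store the object.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))          # [size, access frequency]
# Invented policy to learn: hot data -> cluster 0, large cold data ->
# cluster 1, everything else -> cluster 2.
y = np.where(X[:, 1] > 0.7, 0, np.where(X[:, 0] > 0.6, 1, 2))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[0.9, 0.1], [0.2, 0.9]]))  # expected: [1 0]
```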
Measuring the Influence of Mainstream Media on Twitter Users
2014-07-01
dataset or called from Java code. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and ... server at CAU. The command line to start Weka is: java -jar /opt/weka-3-6-9/weka.jar & The first window that appears is Weka's graphical user ... website hosts all detailed information at the Fedora website. We chose the 140dev streaming API to store the tweets into our Fedora server using MySQL
SPR online: creating, maintaining, and distributing a virtual professional society on the Internet.
D'Alessandro, M P; Galvin, J R
1998-01-01
SPR Online (http://www.pedrad.org) is a recently developed digital representation of the Society for Pediatric Radiology (SPR) that enables physicians to access pertinent information and services on the Internet. SPR Online was organized on the basis of the five main services of the SPR: Administration, Patient Care, Education, Research, and Meetings. For each service, related content from the SPR was digitized and placed onto SPR Online. Usage over a 12-month period was evaluated with server log file analysis. A total of 3,209 users accessed SPR Online, viewing 11,246 pages of information. A wide variety of information was accessed, with that from the Education, Administration, and Meetings services being the most popular. Fifteen percent of users came from foreign countries. As a virtual professional society, SPR Online greatly enhances the power and scope of the SPR and has proved to be a popular resource, meeting the diverse information needs of an international community of pediatric radiologists.
Covalent Docking of Large Libraries for the Discovery of Chemical Probes
London, Nir; Miller, Rand M.; Krishnan, Shyam; Uchida, Kenji; Irwin, John J.; Eidam, Oliv; Gibold, Lucie; Cimermančič, Peter; Bonnet, Richard; Shoichet, Brian K.; Taunton, Jack
2014-01-01
Chemical probes that form a covalent bond with a protein target often show enhanced selectivity, potency, and utility for biological studies. Despite these advantages, protein-reactive compounds are usually avoided in high-throughput screening campaigns. Here we describe a general method (DOCKovalent) for screening large virtual libraries of electrophilic small molecules. We apply this method prospectively to discover reversible covalent fragments that target distinct protein nucleophiles, including the catalytic serine of AmpC β-lactamase and noncatalytic cysteines in RSK2, MSK1, and JAK3 kinases. We identify submicromolar to low-nanomolar hits with high ligand efficiency, cellular activity and selectivity, including the first reported reversible covalent inhibitors of JAK3. Crystal structures of inhibitor complexes with AmpC and RSK2 confirm the docking predictions and guide further optimization. As covalent virtual screening may have broad utility for the rapid discovery of chemical probes, we have made the method freely available through an automated web server (http://covalent.docking.org). PMID:25344815
Prediction based proactive thermal virtual machine scheduling in green clouds.
Kinger, Supriya; Kumar, Rajesh; Sharma, Anju
2014-01-01
Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, yet it still has shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems that do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
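The proposed proactive policy can be pictured with a small Python sketch: predict each Server Machine's temperature after placement (here a naive linear-trend predictor plus an assumed per-VM heat increment, not the paper's predictor) and place the VM on the coolest machine that stays below its threshold.

```python
# Sketch: temperature-aware VM placement with a toy one-step predictor.
from dataclasses import dataclass

@dataclass
class ServerMachine:
    name: str
    temps: list            # recent temperature samples (deg C)
    threshold: float       # maximum allowed temperature (deg C)
    vm_heat: float = 3.0   # assumed temperature rise per added VM

def predicted_temp(sm):
    trend = sm.temps[-1] - sm.temps[-2]     # naive one-step trend
    return sm.temps[-1] + trend + sm.vm_heat

def schedule(machines):
    cool = [m for m in machines if predicted_temp(m) < m.threshold]
    if not cool:
        return None                         # defer: every SM would overheat
    return min(cool, key=predicted_temp)    # coolest predicted machine

sms = [ServerMachine("sm1", [55, 58], 65), ServerMachine("sm2", [50, 50], 65)]
print(schedule(sms).name)                   # -> sm2
```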
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200-node dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
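The parallel pseudo-random number streams that SPRNG and DCMT provide for MC4 can be approximated in modern NumPy, where SeedSequence.spawn derives statistically independent child streams, one per MPI rank, from a single master seed.

```python
# Sketch: give each of four "ranks" its own independent random stream.
import numpy as np

master = np.random.SeedSequence(20100501)        # one master seed for the run
children = master.spawn(4)                       # one child per MPI process
rngs = [np.random.default_rng(s) for s in children]

for rank, rng in enumerate(rngs):
    print(rank, rng.uniform(size=3))             # non-overlapping draws
```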
The virtual supermarket: An innovative research tool to study consumer food purchasing behaviour
2011-01-01
Background: Economic interventions in the food environment are expected to effectively promote healthier food choices. However, before introducing them on a large scale, it is important to gain insight into the effectiveness of economic interventions and people's genuine reactions to price changes. Nonetheless, because of complex implementation issues, studies on price interventions are virtually non-existent. This is especially true for experiments undertaken in a retail setting. We have developed a research tool to study the effects of retail price interventions in a virtual-reality setting: the Virtual Supermarket. This paper aims to inform researchers about the features and utilization of this new software application. Results: The Virtual Supermarket is a Dutch-developed three-dimensional software application in which study participants can shop in a manner comparable to a real supermarket. The tool can be used to study several food pricing and labelling strategies. The application base can be used to build future extensions and could be translated into, for example, an English-language version. The Virtual Supermarket contains a front-end which is seen by the participants, and a back-end that enables researchers to easily manipulate research conditions. The application keeps track of time spent shopping, number of products purchased, shopping budget, total expenditures and answers on configurable questionnaires. All data is digitally stored and automatically sent to a web server. A pilot study among Dutch consumers (n = 66) revealed that the application accurately collected and stored all data. Results from participant feedback revealed that 83% of the respondents considered the Virtual Supermarket easy to understand and 79% found that their virtual grocery purchases resembled their regular groceries. Conclusions: The Virtual Supermarket is an innovative research tool with a great potential to assist in gaining insight into food purchasing behaviour. The application can be obtained via an URL and is freely available for academic use. The unique features of the tool include the fact that it enables researchers to easily modify research conditions and in this way study different types of interventions in a retail environment without a complex implementation process. Finally, it also maintains researcher independence and avoids conflicts of interest that may arise from industry collaboration. PMID:21787391
The virtual supermarket: an innovative research tool to study consumer food purchasing behaviour.
Waterlander, Wilma E; Scarpa, Michael; Lentz, Daisy; Steenhuis, Ingrid H M
2011-07-25
Economic interventions in the food environment are expected to effectively promote healthier food choices. However, before introducing them on a large scale, it is important to gain insight into the effectiveness of economic interventions and people's genuine reactions to price changes. Nonetheless, because of complex implementation issues, studies on price interventions are virtually non-existent. This is especially true for experiments undertaken in a retail setting. We have developed a research tool to study the effects of retail price interventions in a virtual-reality setting: the Virtual Supermarket. This paper aims to inform researchers about the features and utilization of this new software application. The Virtual Supermarket is a Dutch-developed three-dimensional software application in which study participants can shop in a manner comparable to a real supermarket. The tool can be used to study several food pricing and labelling strategies. The application base can be used to build future extensions and could be translated into, for example, an English-language version. The Virtual Supermarket contains a front-end, which is seen by the participants, and a back-end that enables researchers to easily manipulate research conditions. The application keeps track of time spent shopping, number of products purchased, shopping budget, total expenditures and answers on configurable questionnaires. All data are digitally stored and automatically sent to a web server. A pilot study among Dutch consumers (n = 66) revealed that the application accurately collected and stored all data. Participant feedback revealed that 83% of the respondents considered the Virtual Supermarket easy to understand and 79% found that their virtual grocery purchases resembled their regular groceries. The Virtual Supermarket is an innovative research tool with great potential to assist in gaining insight into food purchasing behaviour. The application can be obtained via a URL and is freely available for academic use. Its unique features include the ease with which researchers can modify research conditions and thereby study different types of interventions in a retail environment without a complex implementation process. Finally, it also maintains researcher independence and avoids conflicts of interest that may arise from industry collaboration.
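As a concrete picture of the client-to-server logging both records describe (session metrics collected in the application and sent to a web server), the sketch below posts one session record to a collection endpoint. It is purely illustrative: the URL, field names and values are hypothetical, not the Virtual Supermarket's actual API.

```python
# Hedged sketch of client-side session logging; endpoint and schema are assumed.
import json
import urllib.request

session_record = {
    "participant_id": "P-042",            # hypothetical identifiers and values
    "time_spent_s": 612,
    "products_purchased": 23,
    "budget_eur": 80.0,
    "total_expenditure_eur": 74.35,
    "questionnaire": {"easy_to_understand": True},
}

req = urllib.request.Request(
    "https://example.org/virtual-supermarket/api/sessions",  # placeholder URL
    data=json.dumps(session_record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:     # server stores the record
    print(resp.status)
```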
antiSMASH 3.0-a comprehensive resource for the genome mining of biosynthetic gene clusters.
Weber, Tilmann; Blin, Kai; Duddela, Srikanth; Krug, Daniel; Kim, Hyun Uk; Bruccoleri, Robert; Lee, Sang Yup; Fischbach, Michael A; Müller, Rolf; Wohlleben, Wolfgang; Breitling, Rainer; Takano, Eriko; Medema, Marnix H
2015-07-01
Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. Full integration of the recently published ClusterFinder algorithm now allows this probabilistic method to be used to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure, and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, so that users can organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software.
Study on Global GIS architecture and its key technologies
NASA Astrophysics Data System (ADS)
Cheng, Chengqi; Guan, Li; Lv, Xuefeng
2009-09-01
Global GIS (G2IS) is a system that supports huge-data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Building on the global subdivision grid (GSG), a Global GIS architecture is presented in this paper that takes advantage of computer cluster theory, space-time integration technology and virtual reality technology. The system architecture is composed of five layers: a data storage layer, a data representation layer, a network and cluster layer, a data management layer and a data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern are designed for the organization, management and publication of spatial information. Three core supporting technologies, computer cluster theory, space-time integration technology and virtual reality technology, and their application patterns in Global GIS are introduced in detail. The ideas presented here point to an important development direction for GIS.
Multi-agent grid system Agent-GRID with dynamic load balancing of cluster nodes
NASA Astrophysics Data System (ADS)
Satymbekov, M. N.; Pak, I. T.; Naizabayeva, L.; Nurzhanov, Ch. A.
2017-12-01
In this study we present a system designed for automated load balancing of a cluster: it analyses the load of the compute nodes and then migrates virtual machines from heavily loaded nodes to less loaded ones. This increases the performance of cluster nodes and helps in the timely processing of data. The grid system balances the work of cluster nodes; the relevance of the system lies in its use of multi-agent balancing to solve such problems.
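A minimal sketch of such a balancing policy follows: scan node loads, then plan VM migrations from overloaded to underloaded nodes. The thresholds, data layout and greedy rule are assumptions made for illustration, not the paper's agent design.

```python
# Hedged sketch of a migration planner for cluster load balancing.
HIGH, LOW = 0.80, 0.40   # CPU-load thresholds (assumed values)

def plan_migrations(nodes):
    """nodes: {name: {"load": float, "vms": [(vm_name, load_share), ...]}}"""
    moves = []
    hot = sorted((n for n in nodes if nodes[n]["load"] > HIGH),
                 key=lambda n: -nodes[n]["load"])        # most loaded first
    cold = sorted((n for n in nodes if nodes[n]["load"] < LOW),
                  key=lambda n: nodes[n]["load"])        # least loaded first
    for src in hot:
        for vm, share in sorted(nodes[src]["vms"], key=lambda p: -p[1]):
            if not cold:
                return moves
            dst = cold[0]
            if nodes[dst]["load"] + share <= HIGH:       # don't overload the target
                moves.append((vm, src, dst))
                nodes[src]["load"] -= share
                nodes[dst]["load"] += share
            if nodes[dst]["load"] >= LOW:                # target no longer "cold"
                cold.pop(0)
    return moves

print(plan_migrations({
    "node1": {"load": 0.95, "vms": [("vm-a", 0.30), ("vm-b", 0.25)]},
    "node2": {"load": 0.20, "vms": []},
}))   # -> [('vm-a', 'node1', 'node2')]
```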
Zen and the Art of Virtual Observatory Maintenance
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2014-12-01
The NASA Science Mission Directorate Science Plan stresses that the primary goals of Heliophysics research focus on understanding the Sun's influence on the Earth and other bodies in the solar system. The NASA Heliophysics Division has adopted the Virtual Observatory, or VxO, concept in order to enable scientists to easily discover and access all data products relevant to these goals via web portals that act as clearinghouses. Furthermore, Heliophysics discipline scientists have defined the Space Physics Archive Search and Extract (SPASE) metadata schema in order to describe the contents of such data products with detail extending all the way down to the parameter level. One SPASE metadata description file must be written to describe each data product at the global level, and the collection of such data product metadata description files, stored in repositories, provides the searchable content that the VxO web sites require in order to match the list of products to the unique needs of each researcher. The VxO metadata repository content also allows one to provide links to each unique data file contained in the full complement of files on a per data product basis. These links are contained within SPASE "Granule" description files and permit uniform access, worldwide, regardless of data server location, thus enabling the VxO clearinghouse capability. The VxO concept is sound in theory but difficult in practice, given that the Heliophysics data environment is diverse, ever expanding, and volatile. Thus, it is imperative to update the VxO metadata repositories in order to provide a complete, accurate, and current portrayal of the data environment. Such attention to detail is not a VxO desire but a necessity in order to support Heliophysics researchers and foster VxO user loyalty. We present an application of these basic tenets to the construction of a VxO repository dedicated to providing access to the CDF-formatted data collection hosted on the NASA Goddard CDAWeb data server. Note that the CDF format is self-describing and thus provides a source of information for initiating SPASE metadata description at the data product level. Also, the CDAWeb data server provides high-quality data product tracking down to the individual data file level, permitting easy updating of SPASE Granule metadata.
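To make the Granule idea concrete, the sketch below emits a simplified SPASE-style record linking one data file to its parent product. The element names follow the general SPASE pattern but are illustrative, and the identifiers and file URL are placeholders; consult the SPASE schema for the authoritative layout.

```python
# Hedged sketch: generate a simplified SPASE-style Granule record.
import xml.etree.ElementTree as ET

granule = ET.Element("Granule")
for tag, text in [
    ("ResourceID", "spase://Example/Granule/MissionX/InstrY/20141201"),  # placeholder
    ("ParentID",   "spase://Example/NumericalData/MissionX/InstrY"),     # placeholder
    ("StartDate",  "2014-12-01T00:00:00Z"),
    ("StopDate",   "2014-12-01T23:59:59Z"),
]:
    ET.SubElement(granule, tag).text = text

src = ET.SubElement(granule, "Source")
ET.SubElement(src, "URL").text = "https://cdaweb.gsfc.nasa.gov/path/to/file.cdf"  # placeholder path

print(ET.tostring(granule, encoding="unicode"))
```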
Planning and management of cloud computing networks
NASA Astrophysics Data System (ADS)
Larumbe, Federico
The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the combined power consumption of telecommunication networks and data centers were that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that, besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), response time can be reduced by up to a factor of 6, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and the workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what the optimal data center locations are. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy-efficient data centers, and CO2 emissions can be decreased by choosing data centers provided with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering communication traffic among VMs. Cloud data centers receive new applications over time and these applications need on-demand resource provisioning.
Each application is composed of multiple types of VMs that interact among themselves. A program called the scheduler must place each new VM on a server, and that placement impacts QoS and power consumption. Our method places VMs that communicate among themselves on servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered and the most efficient ones are chosen to place the VMs. The number of VMs of each application can be dynamically changed to match the workload, and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems and uses state-of-the-art optimization solvers. Heuristics are then developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very small optimality gaps, between 0% and 1.95%, are obtained, with execution times on the order of minutes for the planning problem and less than a second for real-time cases. We consider that this thesis on resource provisioning of cloud computing networks includes important contributions to this research area, and innovative commercial applications based on the proposed methods have a promising future.
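The thesis formalizes placement with MIP models; below is a hedged miniature of that core (assignment, capacity, server power), written with the PuLP package. The data, names and the omission of the inter-VM traffic terms are simplifications for illustration, not the thesis's actual model.

```python
# Hedged sketch: MIP for VM placement minimizing the power of switched-on servers.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

vms     = {"web": 2, "db": 4, "cache": 1}        # CPU cores demanded (made up)
servers = {"s1": (8, 200.0), "s2": (4, 90.0)}    # (cores, watts when powered on)

prob = LpProblem("vm_placement", LpMinimize)
x = LpVariable.dicts("x", (vms, servers), cat=LpBinary)   # x[v][s]: VM v on server s
y = LpVariable.dicts("y", servers, cat=LpBinary)          # y[s]: server s powered on

prob += lpSum(watts * y[s] for s, (_, watts) in servers.items())   # total power
for v in vms:                                    # each VM placed exactly once
    prob += lpSum(x[v][s] for s in servers) == 1
for s, (cores, _) in servers.items():            # capacity counts only if s is on
    prob += lpSum(vms[v] * x[v][s] for v in vms) <= cores * y[s]

prob.solve()
for v in vms:
    for s in servers:
        if x[v][s].value() == 1:
            print(f"{v} -> {s}")                 # all three VMs fit on s1 (200 W)
```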
Kowiel, Marcin; Brzezinski, Dariusz; Jaskolski, Mariusz
2016-01-01
The refinement of macromolecular structures is usually aided by prior stereochemical knowledge in the form of geometrical restraints. Such restraints are also used for the flexible sugar-phosphate backbones of nucleic acids. However, recent highly accurate structural studies of DNA suggest that the phosphate bond angles may be inadequately described in the existing stereochemical dictionaries. In this paper, we analyze the bonding deformations of the phosphodiester groups in the Cambridge Structural Database, cluster the studied fragments into six conformation-related categories and propose a revised set of restraints for the O-P-O bond angles and distances. The proposed restraints have been positively validated against data from the Nucleic Acid Database and an ultrahigh-resolution Z-DNA structure in the Protein Data Bank. Additionally, the manual classification of PO4 geometry is compared with geometrical clusters automatically discovered by machine learning methods. The machine learning cluster analysis provides useful insights and a practical example for general applications of clustering algorithms for automatic discovery of hidden patterns of molecular geometry. Finally, we describe the implementation and application of a public-domain web server for automatic generation of the proposed restraints. PMID:27521371
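The clustering step mentioned above can be sketched with k-means over geometric feature vectors, using six clusters to match the paper's conformation-related categories. The angle/distance data below are synthetic stand-ins, not CSD measurements.

```python
# Hedged sketch: k-means clustering of phosphodiester geometry descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic feature vectors, e.g. (O-P-O angle in degrees, two P-O distances in A).
X = np.vstack([
    rng.normal(loc=[104.0, 1.60, 1.60], scale=[1.5, 0.01, 0.01], size=(50, 3)),
    rng.normal(loc=[101.5, 1.59, 1.61], scale=[1.5, 0.01, 0.01], size=(50, 3)),
])

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
for label in range(6):
    members = X[km.labels_ == label]
    if len(members):
        # Per-cluster means would feed the revised restraint targets.
        print(label, len(members), members.mean(axis=0).round(3))
```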
Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar
2014-01-01
Background To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small-animal phobia. Objective The objective of our study was to evaluate the brain activations associated with small-animal phobia through the use of virtual environments. This context has the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. Methods We analyzed the brain activation in a group of phobic people while they navigated a virtual environment that included the small animals that were the object of their phobia. Results We found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to enhanced visual attention to the phobic stimuli, and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), an area that has previously been related to the feeling of self-awareness. Conclusions In our opinion, these results demonstrate that virtual stimuli can evoke brain activations consistent with previous studies using still images, but in an environment closer to the real situation the subject would face in daily life. PMID:25654753
Data Transport Subsystem - The SFOC glue
NASA Technical Reports Server (NTRS)
Parr, Stephen J.
1988-01-01
The design and operation of the Data Transport Subsystem (DTS) for the JPL Space Flight Operation Center (SFOC) are described. The SFOC is the ground data system under development to serve interplanetary space probes; in addition to the DTS, it comprises a ground interface facility, a telemetry-input subsystem, data monitor and display facilities, and a digital TV system. DTS links the other subsystems via an ISO OSI presentation layer and an LAN. Here, particular attention is given to the DTS services and service modes (virtual circuit, datagram, and broadcast), the DTS software architecture, the logical-name server, the role of the integrated AI library, and SFOC as a distributed system.
Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations
NASA Technical Reports Server (NTRS)
Noyes, Matthew A.
2013-01-01
This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.
A microbased shared virtual world prototype
NASA Technical Reports Server (NTRS)
Pitts, Gerald; Robinson, Mark; Strange, Steve
1993-01-01
Virtual reality (VR) allows sensory immersion and interaction with a computer-generated environment. The user adopts a physical interface with the computer, through input/output devices such as a head-mounted display, data glove, mouse, keyboard, or monitor, to experience an alternate universe. What this means is that the computer generates an environment which, in its ultimate extension, becomes indistinguishable from the real world. 'Imagine a wraparound television with three-dimensional programs, including three-dimensional sound, and solid objects that you can pick up and manipulate, even feel with your fingers and hands.... 'Imagine that you are the creator as well as the consumer of your artificial experience, with the power to use a gesture or word to remold the world you see and hear and feel. That part is not fiction... three-dimensional computer graphics, input/output devices, computer models that constitute a VR system make it possible, today, to immerse yourself in an artificial world and to reach in and reshape it.' The goal of our research was to propose a feasibility experiment in the construction of a networked virtual-reality system using current personal computer (PC) technology. The prototype was built using the Borland C compiler, running on an IBM 486 33 MHz and a 386 33 MHz. Each game is currently represented as an IPX client on a non-dedicated Novell server. We initially posed two questions: (1) Is there a need for networked virtual reality? (2) In what ways can the technology be made available to the most people possible?
NASA Astrophysics Data System (ADS)
ZuHone, J. A.; Kowalik, K.; Öhman, E.; Lau, E.; Nagai, D.
2018-01-01
We present the “Galaxy Cluster Merger Catalog.” This catalog provides an extensive suite of mock observations and related data for N-body and hydrodynamical simulations of galaxy cluster mergers and clusters from cosmological simulations. These mock observations consist of projections of a number of important observable quantities in several different wavebands, as well as along different lines of sight through each simulation domain. The web interface to the catalog consists of easily browsable images over epoch and projection direction, as well as download links for the raw data and a JS9 interface for interactive data exploration. The data are presented within a consistent format so that comparison between simulations is straightforward. All of the data products are provided in the standard Flexible Image Transport System file format. The data are being stored on the yt Hub (http://hub.yt), which allows for remote access and analysis using a Jupyter notebook server. Future versions of the catalog will include simulations from a number of research groups and a variety of research topics related to the study of interactions of galaxy clusters with each other and with their member galaxies. The catalog is located at http://gcmc.hub.yt.
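Because the catalog serves its mock observations as standard FITS files, any FITS-aware tool can open them; the hedged sketch below reads one projection with astropy. The filename is a placeholder for a file downloaded from http://gcmc.hub.yt, and the HDU layout may vary by product.

```python
# Hedged sketch: inspect one FITS mock observation from the catalog.
import numpy as np
from astropy.io import fits

with fits.open("merger_xray_proj_z.fits") as hdul:   # placeholder filename
    hdul.info()                                      # list HDUs; layout may vary
    hdu = hdul[0] if hdul[0].data is not None else hdul[1]
    image = hdu.data                                 # 2D projected quantity
    print("shape:", image.shape, "peak:", float(np.nanmax(image)))
```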
[Parallel virtual reality visualization of extreme large medical datasets].
Tang, Min
2010-04-01
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through a control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising real-time results, playing the role of a good assistant in making clinical diagnoses.
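Maximum Intensity Projection parallelizes naturally because the maximum of slab-wise maxima equals the global maximum. The hedged sketch below shows that split with a process pool on one machine, standing in for the paper's PC cluster; the volume is synthetic.

```python
# Hedged sketch: parallel Maximum Intensity Projection over volume slabs.
import numpy as np
from multiprocessing import Pool

def slab_mip(slab):
    return slab.max(axis=0)            # project one slab along the viewing axis

if __name__ == "__main__":
    volume = np.random.rand(128, 256, 256).astype(np.float32)  # synthetic stack
    slabs = np.array_split(volume, 4, axis=0)     # one slab per worker
    with Pool(4) as pool:
        partials = pool.map(slab_mip, slabs)
    mip = np.maximum.reduce(partials)             # max of maxes == global MIP
    assert np.allclose(mip, volume.max(axis=0))
    print(mip.shape)
```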
Desktop supercomputer: what can it do?
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Degtyarev, A.; Korkhov, V.
2017-12-01
The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely adopted. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating such a virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
Brunetaud, Jean Marc; Leroy, Nicolas; Pelayo, Sylvia; Wascat, Caroline; Renard, Jean Marie; Prin, Lionel; Beuscart-Zéphir, Marie Catherine
2003-01-01
The UMVF aims at helping medical students during their normal curriculum via the facilities provided by Internet-based techniques. This paper describes a comparative assessment of two interfaces delivering a multimedia course: a conventional web server (WS) and an integrated e-learning platform in the form of a Virtual Campus (VC). Eleven students were arbitrarily divided into two groups. We used a qualitative method to compare their acceptance of the online course provided by the two different interfaces. The two groups were globally satisfied. However, a decrease in satisfaction was noted at the end of the experiment in the VC group. This may be explained by the more complex graphical user interface (GUI) of the VC and some constraints that do not exist with the WS. Current e-learning platforms are probably not optimised for working conditions where face-to-face and virtual activities are mixed. We think that a new type of "light" platform should be developed for these specific working conditions. Students of the two groups also had reservations about the multimedia environment. They may change their opinion as they become more accustomed to the multimedia environment and if their teachers make more adequate use of multimedia techniques.
Tan, Chee-Heng; Teh, Ying-Wah
2013-08-01
The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system constructed from TPC-H queries. Second, a mechanism is envisaged for constructing stress-testing scenarios in the host, using a single TPC-H query or a combination of them, so that it can be verified whether the virtual machine is still capable of serving critical transactions at the resource threshold point. This threshold point uses server run queue size as its input parameter and serves two purposes: it provides the boundary threshold to the control system, so that periodic learning on the synthetic data sets for performance evaluation does not reach the host's constraint level; and, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification against real and critical transactions.
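The control idea above, drive synthetic load only while the run queue stays below the threshold point, can be sketched as a simple feedback loop. This is a hedged illustration: the threshold value is assumed, run_tpch_query() is hypothetical, and the 1-minute load average stands in as a run-queue proxy (Unix only).

```python
# Hedged sketch: issue TPC-H queries only while the run queue is below a limit.
import os
import time

RUN_QUEUE_LIMIT = 8              # threshold point (assumed value)

def run_queue_size():
    return os.getloadavg()[0]    # 1-min load average as a run-queue proxy (Unix)

def run_tpch_query(n):           # placeholder for issuing TPC-H query Qn
    time.sleep(0.1)

query = 1
while query <= 22:               # TPC-H defines 22 queries
    if run_queue_size() < RUN_QUEUE_LIMIT:
        run_tpch_query(query)
        query += 1
    else:
        time.sleep(1.0)          # back off; host is at its constraint level
```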
Global Software Development with Cloud Platforms
NASA Astrophysics Data System (ADS)
Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya
Offshore and outsourced distributed software development models and processes are facing previously unknown challenges with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, our "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD and the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means of supporting an ecosystem of clients, developers and other key stakeholders.
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2017-08-01
This work examines the sequential control system of a technological line forming the final part of an internal transport system. The design of this technological line using a computer-aided approach proceeded concurrently in two different program environments. In the Mechatronics Concept Designer module of the Siemens NX PLM software, a 3D model of the technological line was developed and prepared for verifying the logical interrelations implemented in the control system. For this purpose, the sub-system of actuators and sensors was distinguished from the whole technological line, because its correct operation determines the correct operation of the whole system. In parallel, the algorithms governing the operation of the planned line were implemented in a virtual controller application. Both program environments were then integrated using an OPC server, which enables the exchange of data between the considered systems. The data on the state of the object, and the data defining the manner and sequence of operation of the technological line, are exchanged between the virtual controller and the 3D model of the technological line in real time.
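The data exchange described above, a controller and a 3D model sharing state through an OPC server, can be sketched as follows. The paper does not specify its OPC stack, so the freely available python-opcua package (OPC UA) stands in here, and the endpoint and node identifiers are assumptions for illustration.

```python
# Hedged sketch: a "virtual controller" driving a simulated actuator via OPC UA.
import time
from opcua import Client

client = Client("opc.tcp://localhost:4840")   # placeholder endpoint
client.connect()
try:
    conveyor_run = client.get_node("ns=2;s=Line.Conveyor1.Run")     # assumed node id
    sensor_state = client.get_node("ns=2;s=Line.Sensor1.Occupied")  # assumed node id
    # Controller logic: run the conveyor until the part reaches the sensor.
    conveyor_run.set_value(True)
    while not sensor_state.get_value():
        time.sleep(0.05)                      # poll the 3D model's virtual sensor
    conveyor_run.set_value(False)
finally:
    client.disconnect()
```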
Immersive Earth Science: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.; Ramirez-Linan, R.
2017-12-01
Utilizing next-generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission with VR visualizations that display temporally aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept, which imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of geo-located data in VR, the subsequent display of changeable basemaps and overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.
2D and 3D virtual interactive laboratories of physics on Unity platform
NASA Astrophysics Data System (ADS)
González, J. D.; Escobar, J. H.; Sánchez, H.; De la Hoz, J.; Beltrán, J. R.
2017-12-01
Using the cross-platform game engine Unity, we develop virtual laboratories for PC, consoles, mobile devices and the web as an innovative tool for studying physics. There is extensive uptake of ICT in the teaching of science, with a corresponding impact on learning; considering the limited availability of laboratories for physics teaching and the difficulties this causes in the learning of school students, we designed the virtual laboratories to enhance students' knowledge of concepts in physics. To achieve this goal, we use Unity because it supports bump mapping, reflection mapping, parallax mapping, dynamic shadows using shadow maps, full-screen post-processing effects and render-to-texture. Unity can use the best shader variant for the current video hardware and, if none is compatible, fall back to an alternative shader that may sacrifice features for performance. This control over delivery to mobile devices, web browsers, consoles and desktops is the main reason Unity is the best option among cross-platform engines of its kind. Supported platforms include Android, Apple TV, Linux, iOS, the Nintendo 3DS line, macOS, PlayStation 4, Windows Phone 8 and Wii; Unity also provides an asset server and Nvidia's PhysX physics engine, which is the most relevant tool in Unity for our PhysLab.
NASA Astrophysics Data System (ADS)
Yi, Sang-oh; Sung, Yun-mo; Ah, Ki-duk; Oh, Hong-jong; Byon, Do-young; Lim, Hyung-chul; Chung, Moon-hee; Je, Do-heung; Jung, Tae-hyun
2016-12-01
The Sejong station is part of the SGOC (Space Geodetic Observation Center), which belongs to the NGII (National Geographic Information Institute). This report briefly describes the Sejong S/X system issues that we need to improve, the establishment of a server cluster for software correlation, and the installation at the Sejong station of the ARGO-M (a mobile SLR system, 40 cm in diameter) developed by KASI (Korea Astronomy and Space Science Institute). Construction of the KVNG (Korea VLBI Network for Geodesy) is currently underway.
Demonstration of measurement-only blind quantum computing
NASA Astrophysics Data System (ADS)
Greganti, Chiara; Roehsner, Marie-Christine; Barz, Stefanie; Morimae, Tomoyuki; Walther, Philip
2016-01-01
Blind quantum computing allows for secure cloud networks of quasi-classical clients and a fully fledged quantum server. Recently, a new protocol has been proposed, which requires a client to perform only measurements. We demonstrate a proof-of-principle implementation of this measurement-only blind quantum computing, exploiting a photonic setup to generate four-qubit cluster states for computation and verification. Feasible technological requirements for the client and the device-independent blindness make this scheme very applicable for future secure quantum networks.
Poster: Building a Large Tiled-Display Cluster
2012-10-01
graphics cards (Nvidia Quadro FX 5800), and each graphics card in a display ... such as DisplayPort and HDMI (see: Nvidia Quadro 6000). We recommend these formats because they are much easier to plug-and-play. 3.4 Leverage Open ... will find yourself with all the issues related to owning a server room. Today, there are a number of companies offering turn-key solutions for tiled
From virtual clustering analysis to self-consistent clustering analysis: a mathematical study
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam
2018-03-01
In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), and provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on the key postulate of "once response similarly, always response similarly", clustering is performed in an offline stage by machine learning techniques (k-means and SOM), which facilitates a substantial reduction of computational complexity in the online predictive stage. The clear mathematical setup allows, for the first time, a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously and found, from numerical investigations, to be of second order. Furthermore, we propose to suitably enlarge the domain in VCA, so that the boundary terms in the Lippmann-Schwinger equation may be neglected by virtue of Saint-Venant's principle. These terms were not obtained in the original SCA paper, and we find that they may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error, and reduces numerical cost by avoiding the outer-loop iteration that SCA requires to attain material-property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.
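For orientation, the continuum equation that both VCA and SCA discretize is the Lippmann-Schwinger equation of micromechanics; the hedged sketch below gives its standard form with generic notation, not the paper's exact formulation.

```latex
% Hedged sketch: standard continuum Lippmann-Schwinger equation.
\[
  \varepsilon(\mathbf{x}) \;=\; \bar{\varepsilon}
  \;-\; \int_{\Omega} \Gamma^{0}(\mathbf{x},\mathbf{x}')
        : \bigl( \mathbf{C}(\mathbf{x}') - \mathbf{C}^{0} \bigr)
        : \varepsilon(\mathbf{x}') \,\mathrm{d}\mathbf{x}'
\]
% epsilon: local strain field; bar-epsilon: prescribed average strain;
% C(x): local stiffness; C^0: reference stiffness; Gamma^0: Green operator of
% the reference medium. Clustering replaces the spatial variable by cluster
% indices, turning the integral into a small dense system over cluster
% interaction tensors, which is the source of the online-stage speedup.
```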
Opal web services for biomedical applications.
Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W
2010-07-01
Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has since grown to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access to all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
EON: software for long time simulations of atomic scale systems
NASA Astrophysics Data System (ADS)
Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme
2014-07-01
The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
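The rare-event bookkeeping at the heart of the adaptive kinetic Monte Carlo method listed above can be sketched compactly: turn the barriers found by client-side saddle searches into rates, pick the next transition, and advance the clock. This is a hedged illustration; the prefactor, temperature and barrier values are invented, not EON defaults.

```python
# Hedged sketch: one kinetic Monte Carlo step from harmonic-TST rates.
import math
import random

KB = 8.617e-5            # Boltzmann constant, eV/K
PREFACTOR = 1e12         # attempt frequency nu (1/s), assumed
T = 300.0                # temperature, K

barriers = [0.35, 0.42, 0.50]                    # eV, e.g. from saddle searches
rates = [PREFACTOR * math.exp(-e / (KB * T)) for e in barriers]

total = sum(rates)
r = random.random() * total                      # pick event i with prob k_i / k_tot
acc = 0.0
for event, k in enumerate(rates):
    acc += k
    if r <= acc:
        break
dt = -math.log(random.random()) / total          # exponentially distributed residence time
print(f"chosen event {event}, time advanced by {dt:.3e} s")
```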
Li, Jun; Tai, Cui; Deng, Zixin; Zhong, Weihong; He, Yongqun; Ou, Hong-Yu
2017-01-10
VRprofile is a Web server that facilitates rapid investigation of virulence and antibiotic resistance genes, together with the transfer-related genetic contexts of these traits, in newly sequenced pathogenic bacterial genomes. The backend database, MobilomeDB, was first built on sets of known gene cluster loci of bacterial type III/IV/VI/VII secretion systems and mobile genetic elements, including integrative and conjugative elements, prophages, class I integrons, IS elements and pathogenicity/antibiotic resistance islands. VRprofile is thus able to co-localize the homologs of these conserved gene clusters using HMMer or BLASTp searches. By integrating the homologous gene cluster search module with a sequence composition module, VRprofile exhibits better performance for island-like region prediction than other widely used methods. In addition, VRprofile provides an integrated Web interface for aligning and visualizing identified gene clusters against MobilomeDB-archived gene clusters or a variety of bacterial genomes. VRprofile should help meet the increasing demand for re-annotation of bacterial variable regions, and aid in the real-time definition of disease-relevant gene clusters in pathogenic bacteria of interest. VRprofile is freely available at http://bioinfo-mml.sjtu.edu.cn/VRprofile.
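The co-localization idea above can be sketched independently of the server: given homology hits (e.g. from BLASTp or HMMer) mapped to genome coordinates, report loci where several components of one reference cluster land close together. The hit data and window size below are illustrative assumptions, not VRprofile's algorithm or thresholds.

```python
# Hedged sketch: flag candidate loci where cluster components co-localize.
from collections import defaultdict

WINDOW = 25_000   # bp; assumed neighbourhood for "co-localized"

# (reference_cluster, gene, genome_start) -- made-up example hits
hits = [
    ("T4SS", "virB4", 102_000), ("T4SS", "virB6", 110_500),
    ("T4SS", "virD4", 118_200), ("prophage", "terminase", 640_000),
]

by_cluster = defaultdict(list)
for cluster, gene, pos in hits:
    by_cluster[cluster].append((pos, gene))

for cluster, members in by_cluster.items():
    members.sort()
    span = members[-1][0] - members[0][0]
    if len(members) >= 3 and span <= WINDOW:     # enough components, close enough
        print(f"{cluster}: {len(members)} genes within {span} bp -> candidate locus")
```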
Five years of experience teaching pathology to dental students using the WebMicroscope
2011-01-01
Background We describe the development and evaluation of a user-friendly web-based virtual microscopy system, the WebMicroscope, for teaching dental students basic and oral pathology. Traditional student microscopes were replaced by computer workstations. Methods The transition of the basic and oral pathology courses from light to virtual microscopy was completed gradually over a five-year period. A pilot study was conducted in the academic year 2005/2006 to estimate the feasibility of integrating virtual microscopy into a traditional light-microscopy-based pathology course. The entire training set of glass slides was subsequently converted to virtual slides and placed on the WebMicroscope server. With access to fully digitized slides on the web through a browser and a viewer plug-in, the computer has become a perfect companion for the student. Results The study material now consists of over 400 fully digitized slides covering 15 entities in basic and systemic pathology and 15 entities in oral pathology. Digitized slides are linked with still macro- and microscopic images, organized with clinical information into virtual cases, and supplemented with text files, a syllabus, PowerPoint presentations and animations on the web, serving additionally as material for individual study. After their examinations, the students rated the use of the software, the quality of the images, the ease of handling the images, and the effective use of virtual slides during the laboratory practicals. Responses were evaluated on a standardized scale. Because of the positive opinions and support from the students, the satisfaction surveys have shown progressive improvement over the past 5 years. The WebMicroscope as a didactic tool for laboratory practicals was rated over 8 on a 1-10 scale for basic and systemic pathology and 9/10 for oral pathology, especially as various student suggestions were implemented. Overall, the quality of the images was rated as very good. Conclusions An overwhelming majority of our students regarded the possibility of using virtual slides at their convenience as highly desirable. Our students and faculty consider the use of the virtual microscope for the study of basic as well as oral pathology a significant improvement over the light microscope. PMID:21489183
JMS Proxy and C/C++ Client SDK
NASA Technical Reports Server (NTRS)
Wolgast, Paul; Pechkam, Paul
2007-01-01
JMS Proxy and C/C++ Client SDK (JMS signifies "Java messaging service" and "SDK" signifies "software development kit") is a software package for developing interfaces that enable legacy programs (here denoted "clients") written in the C and C++ languages to communicate with each other via a JMS broker. This package consists of two main components: the JMS proxy server component and the client C library SDK component. The JMS proxy server component implements a native Java process that receives and responds to requests from clients. This component can run on any computer that supports Java and a JMS client. The client C library SDK component is used to develop a JMS client program running in each affected C or C++ environment, without need for running a Java virtual machine in the affected computer. A C client program developed by use of this SDK has most of the quality-of-service characteristics of standard Java-based client programs, including the following: Durable subscriptions; Asynchronous message receipt; Such standard JMS message qualities as "TimeToLive," "Message Properties," and "DeliveryMode" (as the quoted terms are defined in previously published JMS documentation); and Automatic reconnection of a JMS proxy to a restarted JMS broker.
Zaman, Babar; Khandekar, Rajiv; Al Shahwan, Sami; Song, Jonathan; Al Jadaan, Ibrahim; Al Jiasim, Leyla; Owaydha, Ohood; Asghar, Nasira; Hijazi, Amar; Edward, Deepak P.
2014-01-01
In this brief communication, we present the steps used to establish a web-based congenital glaucoma registry at our institution. The contents of a case report form (CRF) were developed by a group of glaucoma subspecialists. Information Technology (IT) specialists used Lime Survey™ software to create an electronic CRF. A MySQL (Structured Query Language) database server was used with a virtual machine operating system. Two ophthalmologists and 2 IT specialists worked for 7 hours, and a biostatistician and a data registrar worked for 24 hours each, to establish the electronic CRF. Using the CRF, which was transferred to the Lime Survey tool, and the MySQL server application, data could be stored directly, exported to programs including Microsoft Excel, SPSS, and R, and queried in real time. In a pilot test, clinical data from 80 patients with congenital glaucoma were entered into the registry, and descriptive analysis and data entry validation were performed successfully. A web-based disease registry was established in a short period of time in a cost-efficient manner using available resources and a team-based approach. PMID:24791112
Usha, Talambedu; Goyal, Arvind Kumar; Lubna, Syed; Prashanth, Hp; Mohan, T Madhan; Pande, Veena; Middha, Sushil Kumar
2014-01-01
Punica granatum (family: Lythraceae) is mainly found in Iran, which is considered to be its primary centre of origin. Studies on pomegranate peel have revealed antioxidant, anti-inflammatory and anti-angiogenesis activities, with prevention of premature aging and reduction of inflammation. In addition, it is also useful in treating various diseases such as diabetes, in maintaining blood pressure, and in the treatment of neoplasms such as prostate and breast cancer. In this study we identified anti-cancer targets of active compounds such as corilagin (a tannin), quercetin (a flavonoid) and pseudopelletierine (an alkaloid) present in pomegranate peel by employing dual reverse screening and binding analysis. The potential targets of the pomegranate peel compounds were annotated by PharmMapper and ReverseScreen3D, then compared with targets identified from different bioassay databases (NPACT and HIT). Docking was then performed using AutoDock PyRx and validated through Discovery Studio to study molecular interactions. A number of potent anti-cancer targets were obtained from the PharmMapper server, ranked by fit score, and from the ReverseScreen3D server, ranked by decreasing 3D score. The identified targets now need to be further validated through in vitro and in vivo studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, G.R.
1986-03-03
Prototypes of components of the Saguaro distributed operating system were implemented and the design of the entire system refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.
Toyz: A framework for scientific analysis of large datasets and astronomical images
NASA Astrophysics Data System (ADS)
Moolekamp, F.; Mamajek, E.
2015-11-01
As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open-source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse the files on a server, quickly view very large images (>2 GB) taken with DECam and other wide-field cameras, and create their own visualization tools that can be added as extensions to the default Toyz framework.
Wang, Jia-Hong; Zhao, Ling-Feng; Lin, Pei; Su, Xiao-Rong; Chen, Shi-Jun; Huang, Li-Qiang; Wang, Hua-Feng; Zhang, Hai; Hu, Zhen-Fu; Yao, Kai-Tai; Huang, Zhong-Xi
2014-09-01
Identifying the biological functions and molecular networks in a gene list, and how the genes may relate to various topics, is of considerable value to biomedical researchers. Here, we present a web-based text-mining server, GenCLiP 2.0, which can analyze human genes with enriched keywords and molecular interactions. Compared with other similar tools, GenCLiP 2.0 offers two unique features: (i) analysis of gene functions with free terms (i.e. any terms in the literature) generated by literature mining or provided by the user and (ii) accurate identification and integration of comprehensive molecular interactions from Medline abstracts, to construct molecular networks and subnetworks related to the free terms. GenCLiP 2.0 is available at http://ci.smu.edu.cn. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Yang, W.; Min, M.; Bai, Y.; Lynnes, C.; Holloway, D.; Enloe, Y.; di, L.
2008-12-01
In the past few years, there has been growing interest among major Earth-observing satellite (EOS) data providers in serving data through the interoperable Web Coverage Service (WCS) interface protocol, developed by the Open Geospatial Consortium (OGC). The interface protocol defined in the WCS specifications allows client software to make customized requests for multi-dimensional EOS data, including spatial and temporal subsetting, resampling and interpolation, and coordinate reference system (CRS) transformation. A WCS server describes an offered coverage, i.e., a data product, through a response to a client's DescribeCoverage request. The description includes the offered coverage's spatial/temporal extents and resolutions, supported CRSs, supported interpolation methods, and supported encoding formats. Based on such information, a client can request the entire coverage or a subset of it, in any spatial/temporal resolution and in any of the supported CRSs, formats, and interpolation methods. When implementing a WCS server, a data provider has different approaches for presenting its data holdings to clients. One of the most straightforward, and commonly used, approaches is to offer individual physical data files as separate coverages. Such an implementation, however, results in too many offered coverages for large data holdings, and it cannot fully represent the relationships among different, but spatially and/or temporally associated, data files. It is desirable to decouple offered coverages from physical data files so that the former are more coherent, especially in the spatial and temporal domains. Therefore, some servers offer one single coverage for a set of spatially co-registered time-series data files, such as a daily global precipitation coverage linked to many global single-day precipitation files; others offer one single coverage for multiple temporally co-registered files that together form a large spatial extent. In either case, a server needs to assemble an output coverage in real time by combining a potentially large number of physical files, which can be operationally difficult. The task becomes more challenging if an offered coverage involves spatially and temporally unregistered physical files. In this presentation, we discuss issues and lessons learned in providing NASA's AIRS Level 2 atmospheric products, which are in a satellite swath CRS and in 6-minute-segment granule files, as virtual global coverages. We will discuss the WCS server's on-the-fly georectification, mosaicking, quality screening, performance, and scalability.
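A hedged sketch of the kind of request such a server answers is shown below, using the WCS 1.0.0 key-value-pair style (GetCoverage with BBOX/CRS subsetting and an output format). The host URL and coverage name are placeholders, not an actual NASA endpoint.

```python
# Hedged sketch: build and issue a WCS 1.0.0 GetCoverage request.
import urllib.parse
import urllib.request

params = {
    "SERVICE": "WCS", "VERSION": "1.0.0", "REQUEST": "GetCoverage",
    "COVERAGE": "AIRS_L2_Temperature",   # placeholder coverage identifier
    "CRS": "EPSG:4326",                  # requested coordinate reference system
    "BBOX": "-180,-90,180,90",           # spatial subset (whole globe here)
    "TIME": "2008-07-01",                # temporal subset
    "WIDTH": "360", "HEIGHT": "180",     # requested grid size (resampling)
    "FORMAT": "GeoTIFF",
}
url = "https://example.gov/wcs?" + urllib.parse.urlencode(params)  # placeholder host
with urllib.request.urlopen(url) as resp:
    open("airs_subset.tif", "wb").write(resp.read())
```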
NASA Astrophysics Data System (ADS)
Bjorklund, E.
1994-12-01
In the 1970s, when computers were memory limited, operating system designers created the concept of "virtual memory", which gave users the ability to address more memory than physically existed. In the 1990s, many large control systems have the potential of becoming data limited. We propose that many of the principles behind virtual memory systems (working sets, locality, caching and clustering) can also be applied to data-limited systems, creating, in effect, "virtual data systems". At the Los Alamos National Laboratory's Clinton P. Anderson Meson Physics Facility (LAMPF), we have applied these principles to a moderately sized (10 000 data points) data acquisition and control system. To test the principles, we measured the system's performance during tune-up, production, and maintenance periods. In this paper, we present a general discussion of the principles of a virtual data system along with some discussion of our own implementation and the results of our performance measurements.
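The caching and working-set idea translates directly to code. Below is a minimal Python sketch of a least-recently-used cache in front of a slow channel read; read_channel_from_hardware is a hypothetical stand-in for the expensive device fetch, and the sketch illustrates the principle rather than the LAMPF implementation:

```python
from collections import OrderedDict

class VirtualDataCache:
    """LRU cache over control-system data points: recently used channels
    (the 'working set') stay resident; cold ones are re-fetched on demand."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._cache = OrderedDict()

    def read(self, channel):
        if channel in self._cache:
            self._cache.move_to_end(channel)       # mark as recently used
            return self._cache[channel]
        value = read_channel_from_hardware(channel)  # slow backing fetch
        self._cache[channel] = value
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)        # evict least recently used
        return value

def read_channel_from_hardware(channel):
    # Hypothetical stand-in for the expensive device read.
    return 0.0
```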
NASA Astrophysics Data System (ADS)
Wagner, R.; Norman, M. L.
Here we present a working example of a Basic SkyNode serving theoretical data. The data is taken from the Simulated Cluster Archive (SCA), a set of simulated X-ray clusters, where each cluster was computed using four different physics models. The LCA Theory SkyNode (LCATheory) tables contain columns of the integrated physical properties of the clusters at various redshifts. The ease of setting up a Theory SkyNode is an important result, because it represents a clear way to present theory data to the Virtual Observatory. Also, our Theory SkyNode provides a prototype for additional simulated object catalogs, which will be created from other simulations by our group, and hopefully others.
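SkyNode services accept ADQL queries; the sketch below posts an ADQL string to a hypothetical HTTP endpoint purely to show the query shape (Basic SkyNodes actually exposed a SOAP interface, and the table and column names here are assumptions, not the LCATheory schema):

```python
import requests

# ADQL against an assumed LCATheory table of integrated X-ray cluster
# properties at given redshifts (column names are illustrative).
adql = """
SELECT cluster_id, redshift, x_ray_luminosity, gas_mass
FROM   lcatheory.clusters
WHERE  redshift BETWEEN 0.0 AND 0.5
"""

# Hypothetical query endpoint, for illustration only.
resp = requests.post("https://example.org/lcatheory/query",
                     data={"query": adql, "format": "votable"})
print(resp.text)   # VOTable XML with the matching rows
```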
The Cloud Area Padovana: from pilot to production
NASA Astrophysics Data System (ADS)
Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.
2017-10-01
The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded Indigo DataCloud project, integration with Docker-based containers has been tested and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We discuss the chosen strategy for upgrades, which balances the need to promptly integrate new OpenStack developments, the demand to reduce infrastructure downtimes, and the need to limit the effort required for such updates. We also discuss how this Cloud infrastructure is being used, focusing on two large physics experiments that are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on CernVM, has been configured: it automatically creates and deletes virtual machines according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources to perform a very large number of simulations on about a thousand elastically managed nodes.
NASA Astrophysics Data System (ADS)
Choe, Giseok; Nang, Jongho
The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging, so that tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and that the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real time.
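The capturing-time-based selection can be sketched compactly: take the oldest pending frame of one reference buffer, then pick from every other buffer the frame whose capture timestamp is closest to it, so the merged image is built from tiles captured at (nearly) the same instant. A minimal Python sketch over per-PC buffers of (timestamp, image) pairs; names are illustrative, and the visual-consistency variant would add an image-content check:

```python
def select_tiles(buffers):
    """buffers: one list of (timestamp, image) per tile PC, each sorted by
    capture time. Returns one frame per buffer with timestamps as close
    together as possible (capturing-time-based selection)."""
    # Use the oldest pending frame of the first buffer as the reference time.
    ref_time = buffers[0][0][0]
    selected = []
    for buf in buffers:
        # Pick the frame whose capture time is closest to the reference.
        best = min(buf, key=lambda frame: abs(frame[0] - ref_time))
        selected.append(best)
    return selected

# Example: three tile buffers with slightly skewed capture times.
bufs = [[(0.00, "A0"), (0.10, "A1")],
        [(0.02, "B0"), (0.12, "B1")],
        [(0.05, "C0"), (0.09, "C1")]]
print([img for _, img in select_tiles(bufs)])   # ['A0', 'B0', 'C0']
```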
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
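The client-driven communication pattern is easy to sketch: the worker polls the server for work, executes it, and posts the result back, so no inbound connection to the worker is ever needed (which is what lets it run behind a firewall or in the cloud). A minimal Python sketch with hypothetical endpoints, not JobCenter's actual wire protocol:

```python
import subprocess
import time
import requests

SERVER = "https://jobcenter.example.org"   # hypothetical server URL

def worker_loop(worker_id, poll_seconds=10):
    """All communication is worker-initiated: poll, run, report."""
    while True:
        r = requests.get(f"{SERVER}/next-job", params={"worker": worker_id})
        if r.status_code == 204:            # assumed 'no work available' reply
            time.sleep(poll_seconds)
            continue
        job = r.json()                       # e.g. {"id": 7, "cmd": [...]}
        proc = subprocess.run(job["cmd"], capture_output=True, text=True)
        requests.post(f"{SERVER}/result/{job['id']}",
                      json={"rc": proc.returncode, "stdout": proc.stdout})
```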
CRISPRFinder: a web tool to identify clustered regularly interspaced short palindromic repeats.
Grissa, Ibtissem; Vergnaud, Gilles; Pourcel, Christine
2007-07-01
Clustered regularly interspaced short palindromic repeats (CRISPRs) constitute a particular family of tandem repeats found in a wide range of prokaryotic genomes (half of eubacteria and almost all archaea). They consist of a succession of highly conserved direct repeats (DRs) varying in size from 23 to 47 bp, separated by similarly sized unique sequences (spacers) of usually viral origin. A CRISPR cluster is flanked on one side by an AT-rich sequence called the leader, assumed to be a transcriptional promoter. Recent studies suggest that this structure represents a putative RNA-interference-based immune system. Here we describe CRISPRFinder, a web service offering tools to (i) detect CRISPRs, including the shortest ones (one or two motifs); (ii) define DRs and extract spacers; (iii) get the flanking sequences to determine the leader; (iv) BLAST spacers against the GenBank database and (v) check whether the DR is found elsewhere in sequenced prokaryotic genomes. CRISPRFinder is freely accessible at http://crispr.u-psud.fr/Server/CRISPRfinder.php.
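To make the DR/spacer structure concrete, here is a deliberately naive Python sketch of the detection idea (a fixed-length word recurring at spacer-like intervals); it illustrates the repeat pattern only and is not CRISPRFinder's actual algorithm, which also handles degenerate repeats and short two-motif clusters:

```python
def find_crispr_candidate(seq, dr_len=23, sp_min=20, sp_max=60):
    """Look for a direct repeat (DR) of length dr_len recurring at
    spacer-like intervals; return the first candidate array found."""
    n = len(seq)
    for i in range(n - dr_len + 1):
        dr = seq[i:i + dr_len]
        positions, k = [i], i
        while True:
            lo = k + dr_len + sp_min                   # earliest next DR start
            hi = min(n, k + dr_len + sp_max + dr_len)  # latest start + DR length
            k = seq.find(dr, lo, hi)
            if k == -1:
                break
            positions.append(k)
        if len(positions) >= 3:       # DR seen 3+ times => at least 2 spacers
            return dr, positions
    return None

dr = "GTTTTAGAGCTATGCTGTTTTGA"        # toy 23 bp direct repeat
seq = dr + "ACGT" * 8 + dr + "TGCA" * 8 + dr
print(find_crispr_candidate(seq))    # ('GTTTTAGAGCTATGCTGTTTTGA', [0, 55, 110])
```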
Cohen, Mitchell J; Grossman, Adam D; Morabito, Diane; Knudson, M Margaret; Butte, Atul J; Manley, Geoffrey T
2010-01-01
Advances in technology have made extensive monitoring of patient physiology the standard of care in intensive care units (ICUs). While many systems exist to compile these data, there has been no systematic multivariate analysis and categorization across patient physiological data. The sheer volume and complexity of these data make pattern recognition or identification of patient state difficult. Hierarchical cluster analysis allows visualization of high dimensional data and enables pattern recognition and identification of physiologic patient states. We hypothesized that processing of multivariate data using hierarchical clustering techniques would allow identification of otherwise hidden patient physiologic patterns that would be predictive of outcome. Multivariate physiologic and ventilator data were collected continuously using a multimodal bioinformatics system in the surgical ICU at San Francisco General Hospital. These data were incorporated with non-continuous data and stored on a server in the ICU. A hierarchical clustering algorithm grouped each minute of data into 1 of 10 clusters. Clusters were correlated with outcome measures including incidence of infection, multiple organ failure (MOF), and mortality. We identified 10 clusters, which we defined as distinct patient states. While patients transitioned between states, they spent significant amounts of time in each. Clusters were enriched for our outcome measures: 2 of the 10 states were enriched for infection, 6 of 10 were enriched for MOF, and 3 of 10 were enriched for death. Further analysis of correlations between pairs of variables within each cluster reveals significant differences in physiology between clusters. Here we show for the first time the feasibility of clustering physiological measurements to identify clinically relevant patient states after trauma. These results demonstrate that hierarchical clustering techniques can be useful for visualizing complex multivariate data and may provide new insights for the care of critically injured patients.
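A hedged sketch of the core step, grouping per-minute physiologic vectors into 10 states with SciPy's hierarchical clustering; the variable set, standardization, and linkage choice here are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in: one row per minute, columns = physiologic/ventilator
# variables (e.g. HR, MAP, SpO2, ...), z-scored so no variable dominates.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)

Z = linkage(X, method="ward")                     # agglomerative hierarchy
states = fcluster(Z, t=10, criterion="maxclust")  # cut into 10 'patient states'

# Each minute now carries a state label; outcome enrichment is then a
# cross-tabulation of state membership against infection/MOF/mortality.
print(np.bincount(states)[1:])                    # minutes spent in each state
```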
A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.
Shen, Lili; Guo, Jiming; Wang, Lei
2018-06-06
The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
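The clustering rule the paper evaluates can be sketched as simple online assignment: a new user joins the nearest existing cluster if its center lies within the radius threshold (10 km in the experiments), otherwise a new cluster is seeded. A minimal Python sketch of that idea, not the SOSC algorithm itself, which additionally self-organizes the cluster count and tracks the MDTCC:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def assign_users(users, radius_km=10.0):
    """Online assignment: join nearest cluster within radius, else seed one."""
    centers, members = [], []
    for u in users:
        if centers:
            d, i = min((haversine_km(u, c), i) for i, c in enumerate(centers))
            if d <= radius_km:
                members[i].append(u)
                # Recenter on the running mean of member positions.
                lats, lons = zip(*members[i])
                centers[i] = (sum(lats) / len(lats), sum(lons) / len(lons))
                continue
        centers.append(u)
        members.append([u])
    return centers, members
```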
Forumclínic: the shaping of virtual communities to assist patients with chronic diseases.
Grau, Immaculada; Grajales Iii, Francisco J; Gene-Badia, Joan; Siso, Antoni; de Semir, Marc
2013-01-01
Information and communication technologies (ICTs) provide new opportunities to complement traditional care while enhancing patient autonomy. With the objective of supplementing patient care, a group of health professionals at the Hospital Clínic de Barcelona created Forumclínic, an online networking website in Spanish and Catalan. In 2008, seven web- and DVD-based chronic disease portals (Diabetes, Schizophrenia, Cardiac Ischemia, COPD, Depression, Breast Cancer and cardiovascular risk) were created with the following resources: multimedia patient education material; research news transcribed by specialist physicians; an open question forum (for clinician-user and user-to-user interaction); patient and specialist interview videos on the progress of disease, common diagnosis and treatment procedures; and information on the best and worst prognoses. Server logs and data from Google Analytics were used to observe online behaviour patterns and user postings. These data, combined with a mixed-methods approach, were used to evaluate the development of a virtual community (VC), defined as arising when the number of forum visits exceeds the number of visits to the corresponding disease portal. While nearly half of the visitors were from the Americas, the Schizophrenia, Breast Cancer, Depression and COPD forums met this criterion and developed virtual communities; the Diabetes and Cardiac Ischemia forums did not reach VC status. It is also interesting to note that users in their late thirties and early forties were primarily women. The development of four virtual communities in Forumclínic seems to support the self-care needs of virtual patients. Users also reported appreciating the increased interaction with experts online and commonly collaborated with the forum moderator to guide and support other users with similar conditions in managing their health. Hence, we believe that Forumclínic is a good model to complement traditional patient care. A formal evaluation of this adjuvant form of care, from both the users' and moderators' perspectives, is currently in its final stages.
Spatial cluster detection for repeatedly measured outcomes while accounting for residential history.
Cook, Andrea J; Gold, Diane R; Li, Yi
2009-10-01
Spatial cluster detection has become an important methodology in quantifying the effect of hazardous exposures. Previous methods have focused on cross-sectional outcomes that are binary or continuous; there are virtually no spatial cluster detection methods proposed for longitudinal outcomes. This paper proposes a new spatial cluster detection method for repeated outcomes using cumulative geographic residuals. A major advantage of this method is its ability to readily incorporate information on study participants' relocation, which most cluster detection statistics cannot. Application of these methods is illustrated with the Home Allergens and Asthma prospective cohort study, analyzing the relationship between environmental exposures and a repeatedly measured outcome, occurrence of wheeze in the last 6 months, while taking into account mobile locations.
Muñoz-Ramírez, Zilia Y.; Mendez-Tenorio, Alfonso; Kato, Ikuko; Bravo, Maria M.; Rizzato, Cosmeri; Thorell, Kaisa; Torres, Roberto; Aviles-Jimenez, Francisco; Camorlinga, Margarita; Canzian, Federico; Torres, Javier
2017-01-01
Helicobacter pylori (HP) genetics may determine its clinical outcomes. Despite the high prevalence of HP infection in Latin America (LA), there have been no phylogenetic studies in the region. We aimed to understand the structure of HP populations in LA mestizo individuals, where gastric cancer incidence remains high. The genomes of 107 HP strains from Mexico, Nicaragua and Colombia were analyzed together with 59 publicly available worldwide genomes. To study bacterial relationships at the whole-genome level, we propose a virtual hybridization technique using thousands of high-entropy 13 bp DNA probes to generate fingerprints. Phylogenetic virtual genome fingerprinting (VGF) was compared with Multi Locus Sequence Analysis (MLST) and with phylogenetic analyses of cagPAI virulence island sequences. With MLST, some Nicaraguan and Mexican strains clustered close to African isolates, whereas European isolates were spread without clustering and intermingled with LA isolates. VGF analysis resulted in increased resolution of populations, separating European from LA strains. Furthermore, clusters with exclusively Colombian, Mexican, or Nicaraguan strains were observed, where the Colombian cluster separated from Europe, Asia, and Africa, while the Nicaraguan and Mexican clades grouped close to Africa. In addition, a mixed large LA cluster including Mexican, Colombian, Nicaraguan, Peruvian, and Salvadorian strains was observed; all LA clusters separated from the Amerind clade. With cagPAI sequence analyses, LA clades clearly separated from Europe, Asia and Amerind, and Colombian strains formed a single cluster. A NeighborNet analysis suggested frequent and recent recombination events, particularly among LA strains. The results suggest that in the New World, H. pylori has evolved to fit mestizo LA populations in just the 500 years since the Spanish colonization. This co-adaptation may account for regional variability in gastric cancer risk. PMID:28293542
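The virtual hybridization idea reduces to a presence/absence vector: each 13 bp probe either "hybridizes" to a genome or not, and genomes are compared by the distance between their binary fingerprints. A minimal Python sketch using exact string matching (the real method uses thousands of designed high-entropy probes and hybridization rather than exact search):

```python
import random

def make_probes(n=1000, length=13, seed=42):
    """Random high-entropy 13-mers standing in for the designed probe set."""
    rng = random.Random(seed)
    return ["".join(rng.choice("ACGT") for _ in range(length))
            for _ in range(n)]

def fingerprint(genome, probes):
    """Binary presence/absence vector: 1 if the probe occurs in the genome."""
    return [1 if p in genome else 0 for p in probes]

def hamming(f1, f2):
    """Distance between two fingerprints."""
    return sum(a != b for a, b in zip(f1, f2))

# A matrix of pairwise hamming() distances between strain fingerprints can
# then feed a standard tree-building method: similar strains cluster together.
```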
Exploring Windows Domain-Level Defenses Against Authentication Attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Jeff A.; Curtis, Laura
2016-01-01
We investigated the security resilience of current Windows Active Directory (AD) environments to Pass-the-Hash and Pass-the-Ticket credential theft attacks. While doing this, we discovered a way to trigger the removal of all previously issued authentication credentials for a client, thus preventing their use by attackers. After it is triggered, the user is forced to contact the domain administrators and to authenticate to the AD to continue. This could become the basis for a response that arrests the spread of a detected attack. Operating in a virtualized XenServer environment, we were able to carefully determine and recreate the conditions necessary to cause this response.
Access Control of Web and Java Based Applications
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
2011-01-01
Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spreading of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
A Scalable Monitoring for the CMS Filter Farm Based on Elasticsearch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
2015-12-23
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
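A hedged sketch of this pattern (small JSON monitoring documents indexed into an es cluster and queried back in ad hoc ways), assuming the elasticsearch Python client with its 8.x-style API; the index name and fields are illustrative, not the CMS schema:

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # e.g. a leaf cluster node

# Schema-free: any process can emit whatever fields it has.
doc = {
    "host": "fu-c2e34-21",                    # illustrative node name
    "process": "hltd",
    "events_accepted": 1284,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
es.index(index="hlt-monitoring", document=doc)

# Non-predetermined query: total accepted events per host.
result = es.search(index="hlt-monitoring", size=0, aggs={
    "per_host": {"terms": {"field": "host.keyword"},
                 "aggs": {"events": {"sum": {"field": "events_accepted"}}}},
})
print(result["aggregations"]["per_host"]["buckets"])
```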
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language- and platform-independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded to provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server's public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcán Villarrica, with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
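A hedged sketch of the remote-procedure-call access mode using omniORB's Python mapping; the IDL module, interface, object key, and method name (CT, PhaseServices, getGibbsEnergy) are hypothetical placeholders for stubs that would be generated from the actual CTserver IDL:

```python
import sys
from omniORB import CORBA
import CT   # hypothetical stubs generated by omniidl from the CTserver IDL

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)

# Resolve the server object from a corbaloc URL (object key is assumed).
obj = orb.string_to_object("corbaloc::ctserver.ofm-research.org/PhaseServices")
phases = obj._narrow(CT.PhaseServices)

# Remote call: a thermodynamic property of a phase at given T (K), P (bar).
g = phases.getGibbsEnergy("forsterite", 1673.0, 10000.0)   # hypothetical method
print(g)
```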
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
NASA Astrophysics Data System (ADS)
Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.
2014-03-01
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction with motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor, with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card versus a top-of-the-range Linux server, the former is a cost-effective computational resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers, at a lower cost, for medical imaging applications.
Risk Assessment of the Naval Postgraduate School Gigabit Network
2004-09-01
[Extraction residue of a server inventory table. Recoverable counts: Management Server (1), RAS Server (1), Remedy Server (1), Samba Server (2), SQL Servers (3), Web Servers (3), WINS Server (1), Library Server; the remaining rows paired hostnames (e.g. INCA, eagle, MC01BDB) with their operating system (Windows 2000 Advanced Server), project (NPGS Landesk, Special Projects NPGS SQL), and administrator (Bob Sharp, Alan Pires).]
State of the art of teledermatopathology.
Massone, Cesare; Brunasso, Alexandra M G; Campbell, Terri M; Soyer, H Peter
2008-10-01
Teledermatopathology may involve real-time transmission of images from distant locations to consulting pathologists by the remote manipulation of a robotic microscope. Alternatively, the static store-and-forward option involves the single-file transmission of subjectively preselected and captured areas of microscopic images by a referring physician. The recent introduction of virtual slide systems (VSS) involves the digitization of whole slides at high resolution thus enabling the user to view any part of the specimen at any magnification. Such technology has surmounted previous restrictions caused by the size of preselected areas and specimen sampling for telepathology. In terms of client access, these VSS may be stored on a virtual slide server, made available on the Web for remote consultation by pathologists via an integrated virtual slide client network. Despite store-and-forward teledermatopathology being the most frequently used and less expensive approach to teledermatopathology, VSS represents the future in this discipline. The recent pilot studies suggest that the use of remote expert consultants in diagnostic dermatopathology can be integrated into daily routine, teleconsultation, and teleteaching. The new technology enables rapid and reproducible diagnoses, but despite its usability, VSS is not completely feasible for teledermatopathology of inflammatory skin diseases as the performance seems to be influenced by the availability of complete clinical data. Improvements in the diagnostic facility will no doubt follow from further development of the VSS, the slide processor, and of course training in the use of virtual microscope. Undoubtedly, as technology becomes even more sophisticated in the future, VSS will overcome the present drawbacks and find its place in all facets of teledermatopathology.
Tavakoli, Yasaman; Esmaeili, Abolghasem; Saber, Hossein
2016-10-01
Glutamate decarboxylase (GAD) is an enzyme that converts L-glutamate to gamma-aminobutyric acid (GABA), which is widely used to treat mental disorders such as Alzheimer's disease. In this study, point mutations were performed virtually, for the first time, in the active site of E. coli GAD in order to increase the thermal stability and catalytic activity of the enzyme. Energy minimization and addition of a water box were performed using the GROMACS 5.4.6 package. The PoPMuSiC 2.1 web server was used to predict potential spots for point mutation, and Modeller software was used to perform point mutation on the three-dimensional model. Molegro Virtual Docker software was used for cavity detection and simulated docking studies. Results indicate that performing mutations separately at positions 164, 302, 304, 393, 396, 398 and 410 increases binding affinity to the substrate. The enzyme is predicted to be more thermostable in all 7 mutants based on the ΔΔG value.
Pevzner, Yuri; Frugier, Emilie; Schalk, Vinushka; Caflisch, Amedeo; Woodcock, H Lee
2014-09-22
Web-based user interfaces to scientific applications are important tools that allow researchers to utilize a broad range of software packages with just an Internet connection and a browser. One such interface, CHARMMing (CHARMM interface and graphics), facilitates access to the powerful and widely used molecular software package CHARMM. CHARMMing incorporates tasks such as molecular structure analysis, dynamics, multiscale modeling, and other techniques commonly used by computational life scientists. We have extended CHARMMing's capabilities to include a fragment-based docking protocol that allows users to perform molecular docking and virtual screening calculations either directly via the CHARMMing Web server or on computing resources using the self-contained job scripts generated via the Web interface. The docking protocol was evaluated by performing a series of "re-dockings" with direct comparison to top commercial docking software. Results of this evaluation showed that CHARMMing's docking implementation is comparable to many widely used software packages and validates the use of the new CHARMM generalized force field for docking and virtual screening.
Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds
Kinger, Supriya; Kumar, Rajesh; Sharma, Anju
2014-01-01
Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
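The scheduling rule can be sketched directly: predict each server machine's near-future temperature and place the VM on the machine with the largest margin below its threshold, refusing any machine predicted to breach it. A minimal Python sketch with a naive linear temperature predictor (illustrative only, not the paper's predictor):

```python
def predict_temp(current, rate_per_min, horizon_min=5.0):
    """Naive linear extrapolation of working temperature."""
    return current + rate_per_min * horizon_min

def schedule_vm(server_machines):
    """Pick the SM with the largest predicted headroom below its threshold;
    return None if every SM would breach its maximum temperature."""
    best, best_headroom = None, 0.0
    for sm in server_machines:
        headroom = sm["t_max"] - predict_temp(sm["t_now"], sm["rate"])
        if headroom > best_headroom:
            best, best_headroom = sm, headroom
    return best

sms = [{"name": "sm1", "t_now": 62.0, "rate": 1.5, "t_max": 70.0},
       {"name": "sm2", "t_now": 55.0, "rate": 0.4, "t_max": 70.0},
       {"name": "sm3", "t_now": 68.0, "rate": 0.1, "t_max": 70.0}]
print(schedule_vm(sms)["name"])   # 'sm2' (headroom 13.0 vs 0.5 and 1.5)
```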
User-driven Cloud Implementation of environmental models and data for all
NASA Astrophysics Data System (ADS)
Gurney, R. J.; Percy, B. J.; Elkhatib, Y.; Blair, G. S.
2014-12-01
Environmental data and models come from disparate sources over a variety of geographical and temporal scales with different resolutions and data standards, often including terabytes of data and model simulations. Unfortunately, these data and models tend to remain solely within the custody of the private and public organisations which create the data, and the scientists who build models and generate results. Although many models and datasets are theoretically available to others, the lack of ease of access tends to keep them out of reach of many. We have developed an intuitive web-based tool that utilises environmental models and datasets located in a cloud to produce results that are appropriate to the user. Storyboards showing the interfaces and visualisations have been created for each of several exemplars. A library of virtual machine images has been prepared to serve these exemplars. Each virtual machine image has been tailored to run computer models appropriate to the end user. Two approaches have been used; first as RESTful web services conforming to the Open Geospatial Consortium (OGC) Web Processing Service (WPS) interface standard using the Python-based PyWPS; second, a MySQL database interrogated using PHP code. In all cases, the web client sends the server an HTTP GET request to execute the process with a number of parameter values and, once execution terminates, an XML or JSON response is sent back and parsed at the client side to extract the results. All web services are stateless, i.e. application state is not maintained by the server, reducing its operational overheads and simplifying infrastructure management tasks such as load balancing and failure recovery. A hybrid cloud solution has been used with models and data sited on both private and public clouds. The storyboards have been transformed into intuitive web interfaces at the client side using HTML, CSS and JavaScript, utilising plug-ins such as jQuery and Flot (for graphics), and Google Maps APIs. We have demonstrated that a cloud infrastructure can be used to assemble a virtual research environment that, coupled with a user-driven development approach, is able to cater to the needs of a wide range of user groups, from domain experts to concerned members of the general public.
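As an illustration of the stateless request/response pattern described, the Python sketch below sends a WPS 1.0.0 Execute request over HTTP GET and inspects the response; the endpoint, process identifier, and data inputs are assumptions for illustration, not the project's actual services:

```python
import requests

WPS_URL = "https://example.ac.uk/cgi-bin/pywps"   # hypothetical PyWPS endpoint

resp = requests.get(WPS_URL, params={
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "runoff_model",                  # assumed process name
    # WPS KVP encoding: semicolon-separated key=value input pairs.
    "datainputs": "catchment=thames;rainfall_mm=25;start=2014-01-01",
})

# Stateless: the full result (or a failure report) comes back in one XML
# document; the server retains no application state between calls.
print(resp.status_code)
print(resp.text[:500])
```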