Science.gov

Sample records for access server las

  1. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google
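
    As a rough illustration of the request style described above (an XML document embedded in an HTTP GET URL), the following Python sketch builds such a URL. The endpoint path, query parameter, and element names are assumptions chosen for illustration, not the documented LAS V7 protocol.

    ```python
    # Illustrative sketch only: the element names, endpoint path, and query
    # parameter below are assumptions, not the documented LAS V7 protocol.
    from urllib.parse import urlencode
    import xml.etree.ElementTree as ET

    def build_las_request(base_url, dataset, variable, region, fmt="png"):
        """Embed a small XML request document in an HTTP GET URL."""
        req = ET.Element("lasRequest")               # hypothetical root element
        args = ET.SubElement(req, "args")
        ET.SubElement(args, "dataset").text = dataset
        ET.SubElement(args, "variable").text = variable
        ET.SubElement(args, "region").text = region  # e.g. "-180,180,-90,90"
        ET.SubElement(args, "format").text = fmt
        xml_text = ET.tostring(req, encoding="unicode")
        return base_url + "?" + urlencode({"xml": xml_text})

    print(build_las_request("http://example.org/las/ProductServer.do",
                            "sst_monthly", "sst", "-180,180,-90,90"))
    ```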

  2. Video 2 of 4: Navigating the Live Access Server

    NASA Video Gallery

    Learn how to navigate the MY NASA DATA website and server using the NASA Explorer Schools lesson, Analyzing Solar Energy Graphs. The video also shows you how to access, filter and manipulate the da...

  3. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    SciTech Connect

    Valassi, A.; Bartoldus, R.; Kalkhof, A.; Salnikov, A.; Wache, M.; /Mainz U., Inst. Phys.

    2012-04-19

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  4. Serving and Rendering Cluster-Based Ocean Model Output on a Geowall Using the Live Access Server

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Hermann, A. J.; Dobbins, E. L.

    2004-12-01

    Scientists at NOAA's Pacific Marine Environmental Laboratory are relying more and more on supercomputing platforms for their modeling efforts. Running ocean models on these large cluster machines poses problems in that domain sizes are increasing and tracking how the model dynamics are developing during a run requires high-bandwidth network time. In an effort to streamline this procedure, both server and 3-D rendering technologies are utilized. Intermediate model results saved in netCDF file format can be served remotely, to query model progress, using the Live Access Server (LAS). In our implementation, a crontab script checks for model results, generates an XML data-file descriptor, and adds the data set to the list of those available for LAS to serve up. On top of the default product choices (2-D plots, data listings, etc.), the user can also choose one of two 3D file formats: either a VRML or a Vis5D file of the variable of interest. The LAS is built upon the Ferret data analysis package, with the ability to re-grid variables defined on curvilinear coordinate grids and to serve up Vis5D files. An alternate back-end, written using the open-source Visualization Toolkit (VTK), can serve a VRML isosurface as well as current vector fields, keeping bandwidth low by utilizing topology-preserving polygon mesh decimation algorithms. Files served through our LAS system can be projected in passive stereo using a Geowall (www.geowall.org) by either Vis5D or ImmersaView. While ImmersaView offers the ability to animate through the VRML isosurfaces in collaboration with a remote researcher, Vis5D (an older-technology application) gives the user the ability to explore the data more thoroughly by allowing the scientist to change isosurface levels or to probe the data using contour or vector slices. We will explore the possibility of using LAS as the server for the parallel, composite-rendering application ParaView.
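
    The cron-driven registration step described above can be pictured with a short Python sketch; the directory paths and descriptor element names are hypothetical and do not reflect the actual LAS configuration schema used at PMEL.

    ```python
    # Minimal sketch of a cron-driven registration job in the spirit described
    # above; paths and descriptor elements are assumptions, not the real LAS
    # configuration schema.
    import glob, os
    import xml.etree.ElementTree as ET

    MODEL_DIR = "/data/model_runs"         # hypothetical locations
    DESCRIPTOR_DIR = "/las/conf/datasets"

    def describe_new_results():
        for path in glob.glob(os.path.join(MODEL_DIR, "*.nc")):
            name = os.path.splitext(os.path.basename(path))[0]
            descriptor = os.path.join(DESCRIPTOR_DIR, name + ".xml")
            if os.path.exists(descriptor):
                continue                   # already registered with the server
            ds = ET.Element("dataset", name=name, url="file:" + path)
            ET.SubElement(ds, "format").text = "netCDF"
            ET.ElementTree(ds).write(descriptor, xml_declaration=True,
                                     encoding="utf-8")

    if __name__ == "__main__":
        describe_new_results()             # e.g. invoked from cron periodically
    ```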

  5. Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

    SciTech Connect

    Becla, Jacek

    2002-05-01

    The BaBar Experiment collected around 20 TB of data during its first 6 months of running. Now, after 18 months, the data size exceeds 300 TB, and according to projections this is only a small fraction of the data expected in the next few months. In order to keep up with the data, significant effort was put into tuning the database system. This led to great performance improvements, as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone. It is believed that further growth beyond 600 nodes will happen soon. In such an environment, many complex operations are executed simultaneously on hundreds of machines, putting a huge load on the data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded the database servers: data servers as well as lock servers. The paper describes details of the design and implementation of two servers recently introduced in the BaBar system: the Conditions OID Server and the Clustering Server. First experience with using these servers is discussed, and a discussion of the Collection Server for data analysis, currently being designed, is also included.

  6. A novel user authentication and key agreement protocol for accessing multi-medical server usable in TMIS.

    PubMed

    Amin, Ruhul; Biswas, G P

    2015-03-01

    Telecare Medical Information System (TMIS) provides an efficient and convenient connection between patients/users at home and doctors at a clinical center. To ensure a secure connection between the two entities, user authentication by the medical server is critically important. In this regard, many authentication protocols have been proposed in the literature, but only for accessing a single medical server. To address the drawbacks of the single-medical-server setting, we first develop a novel architecture for accessing several medical services of a multi-medical-server system, in which a user can communicate securely and directly with the doctor of the relevant medical server. We then develop a smart-card-based user authentication and key agreement protocol usable for TMIS, built on a cryptographic one-way hash function. We analyze the security of the proposed authentication scheme through both formal and informal security analysis. Furthermore, we simulate the proposed scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that the scheme is secure against replay and man-in-the-middle attacks. The informal security analysis confirms that the protocol offers strong protection against the relevant attacks. The security and performance comparison confirms that the proposed protocol not only protects against the above-mentioned attacks but also achieves better complexity, along with efficient login and password-change phases. PMID:25681100
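
    To make the role of the one-way hash function concrete, here is a generic Python sketch of hash-based challenge-response authentication; it is not the authors' protocol, only an illustration of the primitive their scheme builds on.

    ```python
    # Generic illustration of one-way-hash-based challenge-response; this is
    # NOT the paper's protocol, just a sketch of the primitive it relies on.
    import hashlib, hmac, os

    def h(*parts: bytes) -> bytes:
        """One-way hash over concatenated message parts."""
        return hashlib.sha256(b"|".join(parts)).digest()

    # Registration: the server stores only a salted verifier, never the password.
    password = b"patient-secret"
    salt = os.urandom(16)
    verifier = h(salt, password)

    # Login: server sends a fresh nonce, client proves knowledge of the password.
    nonce = os.urandom(16)
    client_proof = h(h(salt, password), nonce)   # computed on the client/smart-card side
    server_check = h(verifier, nonce)            # recomputed by the server
    print("authenticated:", hmac.compare_digest(client_proof, server_check))
    ```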

  7. MO/DSD online information server and global information repository access

    NASA Technical Reports Server (NTRS)

    Nguyen, Diem; Ghaffarian, Kam; Hogie, Keith; Mackey, William

    1994-01-01

    Often in the past, standards and new technology information have been available only in hardcopy form, with reproduction and mailing costs proving rather significant. In light of NASA's current budget constraints and in the interest of efficient communications, the Mission Operations and Data Systems Directorate (MO&DSD) New Technology and Data Standards Office recognizes the need for an online information server (OLIS). This server would allow: (1) dissemination of standards and new technology information throughout the Directorate more quickly and economically; (2) online browsing and retrieval of documents that have been published for and by MO&DSD; and (3) searching for current and past study activities on related topics within NASA before issuing a task. This paper explores a variety of available information servers and searching tools, their current capabilities and limitations, and the application of these tools to MO&DSD. Most importantly, the discussion focuses on the way this concept could be easily applied toward improving dissemination of standards and new technologies and improving documentation processes.

  8. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system and a remote host computing system are connected by a network, and three service functionalities are provided: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  9. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system and a remote host computing system are connected by a network, and three service functionalities are provided: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  10. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system and a remote host computing system are connected by a network, and three service functionalities are provided: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  11. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system and a remote host computing system are connected by a network, and three service functionalities are provided: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  12. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system and a remote host computing system are connected by a network, and three service functionalities are provided: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  13. Real-Time Access to Altimetry and Operational Oceanography Products via OPeNDAP/LAS Technologies : the Example of Aviso, Mercator and Mersea Projects

    NASA Astrophysics Data System (ADS)

    Baudel, S.; Blanc, F.; Jolibois, T.; Rosmorduc, V.

    2004-12-01

    The Products and Services (P&S) department in the Space Oceanography Division at CLS is in charge of distributing and promoting altimetry and operational oceanography data. P&S is thus involved in the Aviso satellite altimetry project, in the Mercator ocean operational forecasting system, and in the European GODAE/Mersea ocean portal. Aiming at standardisation and a common vision and management of all these ocean data, these projects led to the implementation of several OPeNDAP/LAS Internet servers. OPeNDAP allows the user to extract, via client software (such as IDL, Matlab or Ferret), only the data of interest, avoiding the download of complete files. A request can specify a geographic area, a time period, an oceanic variable, and an output format. LAS is an OPeNDAP-based data access web server whose special feature is its ability to unify, in a single view, access to multiple types of data from distributed data sources. The LAS can make requests to different remote OPeNDAP servers, which makes it possible to compute comparisons or statistics over several different data types. Aviso is the CNES/CLS service which has distributed altimetry products since 1993. The Aviso LAS distributes several Ssalto/Duacs altimetry products such as delayed-time and near-real-time mean sea level anomaly, absolute dynamic topography, absolute geostrophic velocities, gridded significant wave height and gridded wind speed modulus. Mercator-Ocean is a French operational oceanography centre which distributes its products by several means, among them LAS/OPeNDAP servers, as part of the Mercator contribution to Mersea strand 1. A 3D ocean description (temperature, salinity, current and other oceanic variables) of the North Atlantic and Mediterranean is available in real time and updated weekly. The LAS feature of making requests to several remote data centres sharing the same OPeNDAP configuration is particularly well fitted to the Mersea strand-1 context. This European
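
    A minimal Python sketch of the kind of subset request OPeNDAP serves is shown below; the dataset URL, variable name, and index ranges are placeholders and would need to match the grid of a real Aviso or Mercator product.

    ```python
    # Sketch of a DAP2-style hyperslab subset request; the endpoint and index
    # ranges are hypothetical and must match a real dataset's grid.
    from urllib.request import urlopen

    def opendap_ascii_subset(base, var, t, lat, lon):
        """Build a constraint like var[t0:t1][j0:j1][i0:i1] and fetch it as text."""
        ce = "%s[%d:%d][%d:%d][%d:%d]" % (var, t[0], t[1],
                                          lat[0], lat[1], lon[0], lon[1])
        url = "%s.ascii?%s" % (base, ce)
        with urlopen(url) as resp:          # the server returns only the subset
            return resp.read().decode()

    # Example call (hypothetical endpoint):
    # print(opendap_ascii_subset("http://example.org/dods/duacs/msla", "sla",
    #                            (0, 0), (100, 150), (200, 260)))
    ```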

  14. Real-Time Access to Meteosat Data Using the ADDE Server Technology

    NASA Astrophysics Data System (ADS)

    Koenig, M.; Gaertner, V. K.

    2006-05-01

    The McIDAS ADDE technology is used by EUMETSAT to provide access to real-time Meteosat-8 image data in order to globally foster training activities within and outside classroom courses (McIDAS: Man computer Interactive Data Access System; ADDE: Abstract Data Distribution Environment). The advanced imaging capabilities of Meteosat-8, a satellite of the Meteosat Second Generation series, provide full-disk Earth coverage in 11 spectral channels every 15 minutes. An additional 12th channel covers the land surfaces at 1 km spatial resolution at a solar wavelength. Real-time operational services use the EUMETCast dissemination mechanism for timely access to the image data. EUMETCast covers the geographic area of Europe, Africa, South America and parts of North America and Asia. Details of the EUMETCast system are given in a separate presentation by Gaertner and Koenig at this conference. In addition to EUMETCast, however, access for training purposes is also made available in near real time on the basis of the ADDE technology. This is an Internet-based data access, i.e. it is globally available. ADDE offers the possibility to retrieve only the area of interest, e.g. a specific geographic area and only selected channels. This means that the actual data transfer is small, so the Internet is used very efficiently. ADDE was developed as part of the McIDAS software and is now also freely available in the OpenADDE package (http://www.ssec.wisc.edu/mcidas/software/openadde). Besides McIDAS itself, a variety of application packages are ADDE-enabled, e.g. McIDAS-Lite, the Unidata Integrated Data Viewer, Hydra, IDL, or Matlab. These tools also offer further analysis capabilities. Examples will be shown during the presentation. Users of the ADDE access also need to be licensed according to the EUMETSAT data policy. After the successful commissioning of Meteosat-9, the data of this satellite will of course be incorporated into the ADDE data provision.

  15. The Common Gateway Interface (CGI) for Enhancing Access to Database Servers via the World Wide Web (WWW).

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1995-01-01

    Explains the Common Gateway Interface (CGI) protocol as a set of rules for passing information from a Web server to an external program such as a database search engine. Topics include advantages over traditional client/server solutions, limitations, sample library applications, and sources of information from the Internet. (LRW)
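
    A minimal Python CGI sketch illustrates the mechanism: the web server passes the query string to an external program through environment variables and returns whatever the program writes to standard output. The "catalog search" below is a stand-in for illustration, not a real library application.

    ```python
    #!/usr/bin/env python3
    # Minimal CGI sketch: the web server hands the query string to this
    # external program via environment variables and expects HTTP headers
    # followed by a body on stdout. The "catalog search" is hypothetical.
    import os
    from urllib.parse import parse_qs

    params = parse_qs(os.environ.get("QUERY_STRING", ""))
    term = params.get("q", ["(none)"])[0]

    print("Content-Type: text/html")
    print()                                  # blank line ends the CGI headers
    print("<html><body>")
    print("<p>You searched the catalog for: %s</p>" % term)
    print("</body></html>")
    ```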

  16. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

    The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented in HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
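
    A monitoring script can consume the JSON representation mentioned above over plain HTTP; in the Python sketch below the host, port, and item path are assumptions chosen for illustration.

    ```python
    # Sketch of polling an object published by THttpServer from a monitoring
    # script; host, port, and item path are assumptions for illustration.
    import json
    from urllib.request import urlopen

    def fetch_object_json(base="http://localhost:8080",
                          item="Files/hsimple.root/hpx"):
        url = "%s/%s/root.json" % (base, item)   # JSON view of the registered object
        with urlopen(url) as resp:
            return json.load(resp)

    # obj = fetch_object_json()
    # print(obj.get("_typename"), "entries:", obj.get("fEntries"))
    ```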

  17. Frame architecture for video servers

    NASA Astrophysics Data System (ADS)

    Venkatramani, Chitra; Kienzle, Martin G.

    1999-11-01

    Video is inherently frame-oriented, and most applications, such as commercial video processing, need to manipulate video in terms of frames. However, typical video servers treat videos as byte streams and perform random access based on approximate byte offsets supplied by the client. They do not provide a frame- or timecode-oriented API, which is essential for many applications. This paper describes a frame-oriented architecture for video servers. It also describes the implementation in the context of IBM's VideoCharger server. The latter part of the paper describes an application that uses the frame architecture and provides fast and slow-motion scanning capabilities to the server.
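
    The gap between byte-offset access and a frame/timecode-oriented API can be illustrated with a toy frame-index table in Python; this is purely illustrative and not the VideoCharger implementation.

    ```python
    # Illustrative only: a toy frame-index table of the kind a frame-oriented
    # layer might keep, mapping timecodes to byte offsets so clients never
    # handle approximate offsets themselves. Not the VideoCharger internals.
    class FrameIndex:
        def __init__(self, fps=30.0):
            self.fps = fps
            self.offsets = []                 # offsets[i] = byte offset of frame i

        def add_frame(self, byte_offset):
            self.offsets.append(byte_offset)

        def offset_for_frame(self, n):
            return self.offsets[n]

        def frame_for_timecode(self, hh, mm, ss, ff):
            return int(round((hh * 3600 + mm * 60 + ss) * self.fps)) + ff

    idx = FrameIndex()
    for i in range(300):
        idx.add_frame(i * 4096)               # pretend every frame occupies 4 KB
    n = idx.frame_for_timecode(0, 0, 2, 5)    # 2 seconds plus 5 frames in
    print("frame", n, "starts at byte", idx.offset_for_frame(n))
    ```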

  18. MAVID multiple alignment server.

    PubMed

    Bray, Nicolas; Pachter, Lior

    2003-07-01

    MAVID is a multiple alignment program suitable for many large genomic regions. The MAVID web server allows biomedical researchers to quickly obtain multiple alignments for genomic sequences and to subsequently analyse the alignments for conserved regions. MAVID has been successfully used for the alignment of closely related species such as primates and also for the alignment of more distant organisms such as human and fugu. The server is fast, capable of aligning hundreds of kilobases in less than a minute. The multiple alignment is used to build a phylogenetic tree for the sequences, which is subsequently used as a basis for identifying conserved regions in the alignment. The server can be accessed at http://baboon.math.berkeley.edu/mavid/.

  19. BioExtract Server - An integrated workflow-enabling system to access and analyze heterogeneous, distributed biomolecular data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Many computational workflows in bioinformatics require access to multiple, distributed data sources and analytic tools. The requisite data sources may include large public data repositories, community databases, and project databases for use in domain-specific research. Because different data source...

  20. Bringing it All Together: NODC's Geoportal Server as an Integration Tool for Interoperable Data Services

    NASA Astrophysics Data System (ADS)

    Casey, K. S.; Li, Y.

    2011-12-01

    The US National Oceanographic Data Center (NODC) has implemented numerous interoperable data technologies in recent years to enhance the discovery, understanding, and use of the vast quantities of data in the NODC archives. These services include OPeNDAP's Hyrax server, Unidata's THREDDS Data Server (TDS), NOAA's Live Access Server (LAS), and most recently the ESRI ArcGIS Server. Combined, these technologies enable NODC to provide access to its data holdings and products through most of the commonly-used standardized web services like the Data Access Protocol (DAP) and the Open Geospatial Consortium suite of services such as the Web Mapping Service (WMS) and Web Coverage Service (WCS). Despite the strong demand for and use of these services, the acronym-rich environment of services can also result in confusion for producers of data to the NODC archives, for consumers of data from the NODC archives, and for the data stewards at the archives as well. The situation is further complicated by the fact that NODC also maintains some ad hoc services like WODselect, and that not all services can be applied to all of the tens of thousands of collections in the NODC archive; where once every data set was available only through FTP and HTTP servers, now many are also available from the LAS, TDS, Hyrax, and ArcGIS Server. To bring order and clarity to this potentially confusing collection of services, NODC deployed the Geoportal Server into its Archive Management System as an integrating technology that brings together its various data access, visualization, and discovery services as well as its overall metadata management workflows. While providing an enhanced web-based interface for more integrated human-to-machine discovery and access, the deployment also enables NODC for the first time to support a robust set of machine-to-machine discovery services such as the Catalog Service for the Web (CS/W), OpenSearch, and Search and Retrieval via URL (SRU). This approach allows NODC

  1. Secure IRC Server

    2003-08-25

    The IRCD is an IRC server that was originally distributed by the IRCD Hybrid developer team for use as a server for IRC messaging over the public Internet. By supporting the IRC protocol defined in the IRC RFC, IRCD allows users to create and join channels for group or one-to-one text-based instant messaging. It stores information about channels (e.g., whether a channel is public, secret, or invite-only, the topic set, and the membership) and users (who is online and what channels they are members of). It receives messages for a specific user or channel and forwards these messages to the targeted destination. Since server-to-server communication is also supported, these targeted destinations may be connected to different IRC servers. Messages are exchanged over TCP connections that remain open between the client and the server. The IRCD is being used within the Pervasive Computing Collaboration Environment (PCCE) as the 'chat server' for message exchange over public and private channels. After an LBNLSecureMessaging (PCCE chat) client has been authenticated, the client connects to IRCD with its assigned nickname or 'nick.' The client can then create or join channels for group discussions or one-to-one conversations. These channels can have an initial mode of public or invite-only, and the mode may be changed after creation. If a channel is public, anyone online can join the discussion; if a channel is invite-only, users can only join if existing members of the channel explicitly invite them. Users can be invited to any type of channel, and users may be members of multiple channels simultaneously. For use with the PCCE environment, the IRCD application (which was written in C) was ported to Linux and has been tested and installed under Red Hat Linux 7.2. The source code was also modified with SSL so that all messages exchanged over the network are encrypted. This modified IRC server also verifies with an authentication server that the client is who he or she claims to be and
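
    The client side of the exchange described above (TLS-wrapped connection, nick registration, channel join) can be sketched in a few lines of Python; the host, port, nickname, and channel are placeholders, and the PCCE authentication step is omitted.

    ```python
    # Minimal client-side sketch of the IRC exchange described above (NICK/USER
    # registration, then JOIN) over TLS; host, port, nick, and channel are
    # placeholders, and the PCCE authentication step is not shown.
    import socket, ssl

    HOST, PORT = "chat.example.org", 6697
    NICK, CHANNEL = "alice", "#pcce"

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as irc:
            irc.sendall(("NICK %s\r\n" % NICK).encode())
            irc.sendall(("USER %s 0 * :PCCE user\r\n" % NICK).encode())
            irc.sendall(("JOIN %s\r\n" % CHANNEL).encode())
            irc.sendall(("PRIVMSG %s :hello from the sketch\r\n" % CHANNEL).encode())
            print(irc.recv(4096).decode(errors="replace"))   # server's first reply
    ```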

  2. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1 as a fastCGI client, using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are carried out on a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back end allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use and, depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of a GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
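
    The kind of OGC WMS 1.1.1 GetMap request such a server answers can be sketched in Python as follows; the endpoint and layer name are placeholders.

    ```python
    # Sketch of an OGC WMS 1.1.1 GetMap request of the kind this server
    # answers; the endpoint and layer name are placeholders.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def getmap_url(endpoint, layer, bbox, width=1024, height=512,
                   fmt="image/jpeg"):
        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "SRS": "EPSG:4326",
            "BBOX": ",".join(str(v) for v in bbox),
            "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
        }
        return endpoint + "?" + urlencode(params)

    url = getmap_url("http://example.org/wms", "global_mosaic",
                     (-180, -90, 180, 90))
    print(url)
    # with urlopen(url) as resp:
    #     open("map.jpg", "wb").write(resp.read())
    ```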

  3. Volume server: A scalable high speed and high capacity magnetic tape archive architecture with concurrent multi-host access

    NASA Technical Reports Server (NTRS)

    Rybczynski, Fred

    1993-01-01

    A major challenge facing data processing centers today is data management. This includes the storage of large volumes of data and access to it. Current media storage for large data volumes is typically off line and frequently off site in warehouses. Access to data archived in this fashion can be subject to long delays, errors in media selection and retrieval, and even loss of data through misplacement or damage to the media. Similarly, designers responsible for architecting systems capable of continuous high-speed recording of large volumes of digital data are faced with the challenge of identifying technologies and configurations that meet their requirements. Past approaches have tended to evaluate the combination of the fastest tape recorders with the highest capacity tape media and then to compromise technology selection as a consequence of cost. This paper discusses an architecture that addresses both of these challenges and proposes a cost effective solution based on robots, high speed helical scan tape drives, and large-capacity media.

  4. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster; And Others

    1992-01-01

    Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…

  5. Client/server study

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar; Marcus, Robert; Brewster, Stephen

    1995-01-01

    The goal of this project is to find cost-effective and efficient strategies/solutions to integrate existing databases, manage the network, and improve productivity of users in a move towards client/server and Integrated Desktop Environment (IDE) at NASA LeRC. The project consisted of two tasks as follows: (1) Data collection, and (2) Database Development/Integration. Under task 1, survey questionnaires and a database were developed. Also, an investigation of commercially available tools for automated data collection and net management was performed. As requirements evolved, the main focus has been task 2, which involved the following subtasks: (1) Data gathering/analysis of database user requirements, (2) Database analysis and design, making recommendations for modification of existing data structures into a relational database or proposing a common interface to access heterogeneous databases (INFOMAN system, CCNS equipment list, CCNS software list, USERMAN, and other databases), (3) Establishment of a client/server test bed at Central State University (CSU), (4) Investigation of multi-database integration technologies/products for IDE at NASA LeRC, and (5) Development of prototypes using CASE tools (Object/View) for representative scenarios accessing multi-databases and tables in a client/server environment. Both CSU and NASA LeRC have benefited from this project. The CSU team investigated and prototyped cost-effective/practical solutions to facilitate NASA LeRC's move to a more productive environment. CSU students utilized new products and gained skills that could be a great resource for future needs of NASA.

  6. Efficient server selection system for widely distributed multiserver networks

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-pyo; Park, Sung-sik; Lee, Kyoon-Ha

    2001-07-01

    In order to provide improved quality of Internet service, the access speed of subscriber networks and of the servers acting as Internet access devices has been rapidly enhanced through traffic distribution and the installation of high-performance servers. Nevertheless, Internet access quality and content delivery speed remain unsatisfactory. Adding nodes at the Internet access device has only limited ability to cope with growing network traffic, and the root cause lies in the middle-mile nodes between a CP (Content Provider) server and a user node. To address this problem, this paper proposes a new method for selecting an effective server for a client by minimizing the number of nodes between the server and the client while keeping the load balanced among servers clustered by the client's location in physically distributed multi-site environments. The proposed method uses an NSP (Network Status Prober) and a contents server manager to obtain the status of each server and of the distributed network. A new architecture is presented for the server selection algorithm together with its implementation. The paper also presents the parameters for selecting the best service-providing server for a client, and the approach is validated by experiments over the proposed architecture.
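
    The selection idea (fewest intermediate nodes subject to a load constraint) can be reduced to a toy Python sketch; the paper's actual NSP-based algorithm and its parameters are richer than this.

    ```python
    # Toy illustration of the selection idea only (fewest network hops subject
    # to a load cap); the paper's NSP/contents-manager algorithm is richer.
    def select_server(servers, max_load=0.8):
        """servers: list of dicts like {"name": ..., "hops": int, "load": float}."""
        candidates = [s for s in servers if s["load"] <= max_load]
        if not candidates:
            candidates = servers                   # fall back if all are busy
        return min(candidates, key=lambda s: (s["hops"], s["load"]))

    replicas = [
        {"name": "cp-origin",  "hops": 9, "load": 0.35},
        {"name": "edge-seoul", "hops": 3, "load": 0.90},   # overloaded
        {"name": "edge-busan", "hops": 4, "load": 0.55},
    ]
    print(select_server(replicas)["name"])   # -> edge-busan
    ```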

  7. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.

  8. Multiple-server Flexible Blind Quantum Computation in Networks

    NASA Astrophysics Data System (ADS)

    Kong, Xiaoqin; Li, Qin; Wu, Chunhui; Yu, Fang; He, Jinjun; Sun, Zhiyuan

    2016-06-01

    Blind quantum computation (BQC) allows a client with limited quantum power to delegate his quantum computation to a powerful server while still keeping his own data private. In this paper, we present a multiple-server flexible BQC protocol, where a client who only needs the ability to access quantum channels can delegate the computational task to a number of servers. In particular, the client's quantum computation can still be achieved even when one or more delegated quantum servers break down in networks. In other words, when connections to certain quantum servers are lost, clients can adjust flexibly and delegate their quantum computation to other servers. Trivially, the computation will be unsuccessful if all servers are interrupted.

  9. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  10. Design of a distributed CORBA based image processing server.

    PubMed

    Giess, C; Evers, H; Heid, V; Meinzer, H P

    2000-01-01

    This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way with this server. Data exchange and conversion is done automatically inside the server, hiding these tasks from the user. The different image processing tools are visible as one large collection of algorithms and due to the use of CORBA are accessible via intra-/internet.

  11. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries, significantly decreasing user access time by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases; and compatibility with the Z39.50 protocol.

  12. The NEOS server.

    SciTech Connect

    Czyzyk, J.; Mesnier, M. P.; More, J. J.; Mathematics and Computer Science

    1998-07-01

    The Network-Enabled Optimization System (NEOS) is an Internet based optimization service. The NEOS Server introduces a novel approach for solving optimization problems. Users of the NEOS Server submit a problem and their choice of optimization solver over the Internet. The NEOS Server computes all information (for example, derivatives and sparsity patterns) required by the solver, links the optimization problem with the solver, and returns a solution.

  13. Servers Made to Order

    SciTech Connect

    Anderson, Daryl L.

    2007-11-01

    Virtualization is a hot buzzword right now, and it’s no wonder federal agencies are coming around to the idea of consolidating their servers and storage. Traditional servers do nothing for about 80% of their lifecycle, yet use nearly half their peak energy consumption which wastes capacity and power. Server virtualization creates logical "machines" on a single physical server. At the Pacific Northwest National Laboratory in Richland, Washington, using virtualization technology is proving to be a cost-effective way to make better use of current server hardware resources while reducing hardware lifecycle costs and cooling demands, and saving precious data center space. And as an added bonus, virtualization also ties in with the Laboratory’s mission to be responsible stewards of the environment as well as the Department of Energy’s assets. This article explains why even the smallest IT shops can benefit from the Laboratory’s best practices.

  14. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and then provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain/store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.

  15. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster M.; And Others

    1993-01-01

    Describes five interfaces to remote, full-text databases accessed through distributed systems of servers. These are WAIStation for the Macintosh, XWAIS for X-Windows, GWAIS for Gnu-Emacs; SWAIS for dumb terminals, and Rosebud for the Macintosh. Sixteen illustrations provide examples of display screens. Problems and needed improvements are…

  16. A Server-Based Mobile Coaching System

    PubMed Central

    Baca, Arnold; Kornfeind, Philipp; Preuschl, Emanuel; Bichler, Sebastian; Tampier, Martin; Novatchkov, Hristo

    2010-01-01

    A prototype system for monitoring, transmitting and processing performance data in sports for the purpose of providing feedback has been developed. During training, athletes are equipped with a mobile device and wireless sensors using the ANT protocol in order to acquire biomechanical, physiological and other sports specific parameters. The measured data is buffered locally and forwarded via the Internet to a server. The server provides experts (coaches, biomechanists, sports medicine specialists etc.) with remote data access, analysis and (partly automated) feedback routines. In this way, experts are able to analyze the athlete’s performance and return individual feedback messages from remote locations. PMID:22163490

  17. The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature it contains ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  18. Remote diagnosis server

    NASA Technical Reports Server (NTRS)

    Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)

    2004-01-01

    A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, a reasoner application in communication with the handler application for processing, and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner applications operate simultaneously relative to each other, such that the present invention diagnosis server performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.

  19. User-Friendly Data Servers for Climate Studies at the Asia-Pacific Data-Research Center (APDRC)

    NASA Astrophysics Data System (ADS)

    Yuan, G.; Shen, Y.; Zhang, Y.; Merrill, R.; Waseda, T.; Mitsudera, H.; Hacker, P.

    2002-12-01

    The APDRC was recently established within the International Pacific Research Center (IPRC) at the University of Hawaii. The APDRC mission is to increase understanding of climate variability in the Asia-Pacific region by developing the computational, data-management, and networking infrastructure necessary to make data resources readily accessible and usable by researchers, and by undertaking data-intensive research activities that will both advance knowledge and lead to improvements in data preparation and data products. A focus of recent activity is the implementation of user-friendly data servers. The APDRC is currently running a Live Access Server (LAS) developed at NOAA/PMEL to provide access to and visualization of gridded climate products via the web. The LAS also allows users to download the selected data subsets in various formats (such as binary, netCDF and ASCII). Most of the datasets served by the LAS are also served through our OPeNDAP server (formerly DODS), which allows users to directly access the data using their desktop client tools (e.g. GrADS, Matlab and Ferret). In addition, the APDRC is running an OPeNDAP Catalog/Aggregation Server (CAS) developed by Unidata at UCAR to serve climate data and products such as model output and satellite-derived products. These products are often large (> 2 GB) and are therefore stored as multiple files (stored separately in time or in parameters). The CAS remedies the inconvenience of multiple files and allows access to the whole dataset (or any subset that cuts across the multiple files) via a single request command from any DODS enabled client software. Once the aggregation of files is configured at the server (CAS), the process of aggregation is transparent to the user. The user only needs to know a single URL for the entire dataset, which is, in fact, stored as multiple files. CAS even allows aggregation of files on different systems and at different locations. Currently, the APDRC is serving NCEP, ECMWF

  20. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  1. Remote Patron Validation: Posting a Proxy Server at the Digital Doorway.

    ERIC Educational Resources Information Center

    Webster, Peter

    2002-01-01

    Discussion of remote access to library services focuses on proxy servers as a method for remote access, based on experiences at Saint Mary's University (Halifax). Topics include Internet protocol user validation; browser-directed proxies; server software proxies; vendor alternatives for validating remote users; and Internet security issues. (LRW)

  2. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS/DODS) server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in retrieving electronically a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has an additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their client's system. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS have the option to use other available graphics viewers such as IDL, Matlab or GrADS. WMS is based on OPeNDAP for serving geospatial information. WMS supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access point to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an

  3. Dali server update

    PubMed Central

    Holm, Liisa; Laakso, Laura M.

    2016-01-01

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo. PMID:27131377

  4. Dali server update.

    PubMed

    Holm, Liisa; Laakso, Laura M

    2016-07-01

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo.

  5. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    NASA Astrophysics Data System (ADS)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
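
    Automated access of the sort discussed in the paper amounts to submitting the same form fields a browser would; in the Python sketch below the program path and field names are placeholders, not the actual X-ray Server interface.

    ```python
    # Sketch of automated (non-browser) access of the kind the paper discusses;
    # the program path and form field names are placeholders, not the real
    # x-server.gmca.aps.anl.gov interface.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def run_remote_calculation(form: dict,
                               endpoint="https://x-server.example.org/cgi/some_program"):
        data = urlencode(form).encode()        # submit the web form fields via POST
        with urlopen(endpoint, data=data) as resp:
            return resp.read().decode(errors="replace")

    # html = run_remote_calculation({"wavelength": "1.54", "crystal": "Si",
    #                                "hkl": "111"})
    # ...then parse the returned page or linked data file for the computed curve.
    ```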

  6. Managing heterogeneous wireless environments via Hotspot servers

    NASA Astrophysics Data System (ADS)

    Simunic, Tajana; Qadeer, Wajahat; De Micheli, Giovanni

    2005-01-01

    Wireless communication today supports heterogeneous wireless devices with a number of different wireless network interfaces (WNICs). A large fraction of communication is infrastructure based, so the wireless access points and hotspot servers have become more ubiquitous. Battery lifetime is still a critical issue, with WNICs typically consuming a large fraction of the overall power budget in a mobile device. In this work we present a new technique for managing power consumption and QoS in diverse wireless environments using Hotspot servers. We introduce a resource manager module at both Hotspot server and the client. Resource manager schedules communication bursts between it and each client. The schedulers decide what WNIC to employ for communication, when to communicate data and how to minimize power dissipation while maintaining an acceptable QoS based on the application needs. We present two new scheduling policies derived from well known earliest deadline first (EDF) and rate monotonic (RM) [26] algorithms. The resource manager and the schedulers have been implemented in the HP's Hotspot server [14]. Our measurement and simulation results show a significant improvement in power dissipation and QoS of Bluetooth and 802.11b for applications such as MP3, MPEG4, WWW, and email.
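
    The earliest-deadline-first idea behind one of the two policies can be sketched in Python as follows; this is generic EDF burst selection, not the scheduler actually implemented on the HP Hotspot server.

    ```python
    # Hedged sketch of the earliest-deadline-first idea behind one of the two
    # policies: always serve the pending communication burst with the nearest
    # deadline. Generic EDF only, not the Hotspot server's implementation.
    import heapq

    class BurstScheduler:
        def __init__(self):
            self._queue = []                       # (deadline, client, n_bytes)

        def submit(self, client, deadline, n_bytes):
            heapq.heappush(self._queue, (deadline, client, n_bytes))

        def next_burst(self):
            return heapq.heappop(self._queue) if self._queue else None

    sched = BurstScheduler()
    sched.submit("mp3-stream", deadline=0.250, n_bytes=32000)
    sched.submit("email-sync", deadline=5.000, n_bytes=8000)
    sched.submit("mpeg4-clip", deadline=0.120, n_bytes=64000)
    print(sched.next_burst())   # -> (0.12, 'mpeg4-clip', 64000)
    ```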

  7. Managing heterogeneous wireless environments via Hotspot servers

    NASA Astrophysics Data System (ADS)

    Simunic, Tajana; Qadeer, Wajahat; De Micheli, Giovanni

    2004-12-01

    Wireless communication today supports heterogeneous wireless devices with a number of different wireless network interfaces (WNICs). A large fraction of communication is infrastructure based, so wireless access points and hotspot servers have become more ubiquitous. Battery lifetime is still a critical issue, with WNICs typically consuming a large fraction of the overall power budget in a mobile device. In this work we present a new technique for managing power consumption and QoS in diverse wireless environments using Hotspot servers. We introduce a resource manager module at both the Hotspot server and the client. The resource manager schedules communication bursts between the server and each client. The schedulers decide which WNIC to employ for communication, when to communicate data, and how to minimize power dissipation while maintaining an acceptable QoS based on the application needs. We present two new scheduling policies derived from the well-known earliest deadline first (EDF) and rate monotonic (RM) [26] algorithms. The resource manager and the schedulers have been implemented in HP's Hotspot server [14]. Our measurement and simulation results show a significant improvement in power dissipation and QoS of Bluetooth and 802.11b for applications such as MP3, MPEG4, WWW, and email.

  8. Virtual venue management users manual: access grid toolkit documentation, version 2.3.

    SciTech Connect

    Judson, I. R.; Lefvert, S.; Olson, E.; Uram, T. D.; Mathematics and Computer Science

    2007-10-24

    An Access Grid Venue Server provides access to individual Virtual Venues, virtual spaces where users can collaborate using the Access Grid Venue Client software. This manual describes the Venue Server component of the Access Grid Toolkit, version 2.3. Covered here are the basic operations of starting a venue server, modifying its configuration, and modifying the configuration of the individual venues.

  9. Visible Human Slice Web Server: a first assessment

    NASA Astrophysics Data System (ADS)

    Hersch, Roger D.; Gennart, Benoit A.; Figueiredo, Oscar; Mazzariol, Marc; Tarraga, Joaquin; Vetsch, S.; Messerli, Vincent; Welz, R.; Bidaut, Luc M.

    1999-12-01

    The Visible Human Slice Server started offering its slicing services at the end of June 1998. From that date until the end of May, more than 280,000 slices were extracted from the Visible Man by laymen interested in anatomy, by students, and by specialists. The Slice Server is based on one Bi-Pentium PC and 16 disks. It is a scaled-down version of a powerful parallel server comprising 5 Bi-Pentium Pro PCs and 60 disks. The parallel server program was created with a computer-aided parallelization framework, which takes over the task of creating a multi-threaded pipelined parallel program from a high-level parallel program description. On the full-blown architecture, the parallel program enables the extraction and resampling of up to 5 color slices per second. Extracting 5 slices/s requires accessing the disks and extracting subvolumes of the Visible Human at an aggregate throughput of 105 MB/s. The publicly accessible server enables the extraction of slices having any orientation. The slice position and orientation can either be specified for each slice separately or set through the interface offered by a Java applet; possible future improvements are also discussed. In the very near future, the Web Slice Server will offer additional services, such as the possibility to extract ruled surfaces and animations incorporating slices perpendicular to a user-defined trajectory.

  10. The Uppsala Electron-Density Server.

    PubMed

    Kleywegt, Gerard J; Harris, Mark R; Zou, Jin Yu; Taylor, Thomas C; Wählby, Anders; Jones, T Alwyn

    2004-12-01

    The Uppsala Electron Density Server (EDS; http://eds.bmc.uu.se/) is a web-based facility that provides access to electron-density maps and statistics concerning the fit of crystal structures and their maps. Maps are available for approximately 87% of the crystallographic Protein Data Bank (PDB) entries for which structure factors have been deposited and for which straightforward map calculations succeed in reproducing the published R value to within five percentage points. Here, an account is provided of the methods that are used to generate the information contained in the server. Some of the problems that are encountered in the map-generation process as well as some spin-offs of the project are also discussed.

  11. The HydroServer Platform for Sharing Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an Internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open source code repository and development system. There is some reliance on widely used commercial software for general purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts.

  12. CommServer: A Communications Manager For Remote Data Sites

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D. L.

    2012-12-01

    CommServer is a software system that manages connections to remote data-gathering stations, providing a simple network interface to client applications. The client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash, Python, Perl, and AWK, under a set of conventions we refer to as ThinObject.
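
    A minimal sketch of a CommServer-style client interaction is shown below; the host, port, site name, and line protocol are assumptions for illustration, not CommServer's actual interface.

```python
# Sketch of a CommServer-style client: connect to the communications manager,
# request a remote site by name, then use the socket as a bidirectional
# channel. Host, port, site name and the "CONNECT <site>" / "OK" line protocol
# are assumptions for illustration, not CommServer's actual interface.
import socket

COMMSERVER_HOST = "commserver.example.org"  # hypothetical
COMMSERVER_PORT = 5000                      # hypothetical

def open_site_channel(site_name):
    """Ask the manager for a channel to the named data-gathering site."""
    sock = socket.create_connection((COMMSERVER_HOST, COMMSERVER_PORT), timeout=30)
    sock.sendall(f"CONNECT {site_name}\n".encode("ascii"))
    reply = b""
    while not reply.endswith(b"\n"):        # read the one-line status reply
        chunk = sock.recv(1)
        if not chunk:
            break
        reply += chunk
    if reply.strip() != b"OK":
        sock.close()
        raise RuntimeError(f"connection to {site_name} failed: {reply!r}")
    return sock  # now a transparent channel to the remote station

if __name__ == "__main__":
    with open_site_channel("met-site-01") as chan:   # hypothetical site name
        chan.sendall(b"STATUS\r\n")                  # site-specific command
        print(chan.recv(4096).decode("ascii", errors="replace"))
```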

  13. The SDSS data archive server

    SciTech Connect

    Neilsen, Eric H., Jr.; /Fermilab

    2007-10-01

    The Sloan Digital Sky Survey (SDSS) Data Archive Server (DAS) provides public access to data files produced by the SDSS data reduction pipeline. This article discusses challenges in public distribution of data of this volume and complexity, and how the project addressed them. The Sloan Digital Sky Survey (SDSS)1 is an astronomical survey covering roughly one quarter of the night sky. It contains images of this area, a catalog of almost 300 million objects detected in those images, and spectra of more than a million of these objects. The catalog of objects includes a variety of data on each object. These data include not only basic information but also fit parameters for a variety of models, classifications by sophisticated object classification algorithms, statistical parameters, and more. If the survey contains the spectrum of an object, the catalog includes a variety of other parameters derived from its spectrum. Data processing and catalog generation, described more completely in the SDSS Early Data Release2 paper, consist of several stages: collection of imaging data, processing of imaging data, selection of spectroscopic targets from catalogs generated from the imaging data, collection of spectroscopic data, processing of spectroscopic data, and loading of processed data into a database. Each of these stages is itself a complex process. For example, the software that processes the imaging data determines and removes some instrumental signatures in the raw images to create 'corrected frames', models the point spread function, models and removes the sky background, detects objects, measures object positions, measures the radial profile and other morphological parameters for each object, measures the brightness of each object using a variety of methods, classifies the objects, calibrates the brightness measurements against survey standards, and produces a variety of quality assurance plots and diagnostic tables. The complexity of the spectroscopic data

  14. PACS image security server

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.

    2004-04-01

    Medical image security in a PACS environment has become a pressing issue as communication of images increasingly extends over open networks, and hospitals are currently hard-pushed by the Health Insurance Portability and Accountability Act (HIPAA) to be HIPAA compliant in ensuring health data security. Other security-related guidelines and technical standards continue to be brought to public attention in healthcare. However, there is no infrastructure or systematic method to implement and deploy these standards in a PACS. In this paper, we first review the DICOM Part 15 standard for secure communications of medical images and the HIPAA impacts on PACS security, as well as our previous work on image security. Then we outline a security infrastructure in a HIPAA-mandated PACS environment using a dedicated PACS image security server. The server manages its own database of all image security information. It acts as an image Authority for checking and certifying the image origin and integrity upon request by a user, as a secure DICOM gateway to the outside connections, and meanwhile also as a PACS operation monitor for HIPAA supporting information.
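
    One ingredient of such an image Authority is a digest that the security server stores at ingest and recomputes on request to certify integrity. The sketch below shows this generic idea with an HMAC; it is not the specific DICOM Part 15 scheme described in the paper, and key management is deliberately simplified.

```python
# Generic illustration of image-integrity certification by a security server:
# store an HMAC digest at ingest and recompute it on request. This is not the
# specific DICOM Part 15 / PKI scheme in the paper; key handling is simplified.
import hashlib
import hmac

SERVER_KEY = b"demo-key-not-for-production"  # assumption: server-held secret

def register_image(pixel_data):
    """Digest the security server would store when the image is archived."""
    return hmac.new(SERVER_KEY, pixel_data, hashlib.sha256).hexdigest()

def verify_image(pixel_data, stored_digest):
    """Recompute the digest and compare in constant time."""
    current = hmac.new(SERVER_KEY, pixel_data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(current, stored_digest)

image = b"\x00\x01\x02\x03"                  # stand-in for DICOM pixel data
digest = register_image(image)
print(verify_image(image, digest))           # True
print(verify_image(image + b"x", digest))    # False: image was altered
```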

  15. Russian and CIS Library Internet Service: An Analysis of WWW-Server Development.

    ERIC Educational Resources Information Center

    Shraiberg, Yakov

    This paper traces the expansion of the Internet into Russian and Commonwealth of Independent States (CIS) libraries from basic access to the development of World Wide Web (WWW) servers. An analysis of the most representative groups of library WWW-servers arranged by projects, by corporate library network, or by geographical characteristics is…

  16. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    NASA Astrophysics Data System (ADS)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour-combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as its core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible at http://planetserver.eu. All its code base is going to be available on GitHub, on
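
    A WCPS request of the kind PlanetServer issues might look like the sketch below, which computes a simple two-band ratio over a hyperspectral coverage and returns it as a PNG. The endpoint URL, coverage name and band names are placeholders; real identifiers must be taken from the service's capabilities documents.

```python
# Sketch of a WCPS band-math request: a two-band ratio over a hyperspectral
# coverage, returned as a PNG. ENDPOINT, the coverage name and the band names
# are placeholders, not identifiers published by PlanetServer.
import urllib.parse
import urllib.request

ENDPOINT = "https://planetserver.example.org/rasdaman/ows"  # hypothetical

wcps_query = """
for c in (crism_cube_demo)
return encode((float) c.band_233 / c.band_13, "png")
"""

params = urllib.parse.urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})

with urllib.request.urlopen(f"{ENDPOINT}?{params}", timeout=120) as resp:
    with open("band_ratio.png", "wb") as out:
        out.write(resp.read())
```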

  17. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; Letsche, Todd A.

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  18. A Web Server for MACCS Magnetometer Data

    NASA Technical Reports Server (NTRS)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  19. Creating a GIS data server on the World Wide Web: The GISST example

    SciTech Connect

    Pace, P.J.; Evers, T.K.

    1996-01-01

    In an effort to facilitate user access to Geographic Information Systems (GIS) data, the GIS and Computer Modeling Group from the Computational Physics and Engineering Division at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee (TN), has developed a World Wide Web server named GISST. The server incorporates a highly interactive and dynamic forms-based interface to browse and download a variety of GIS data types. This paper describes the server's design considerations, development, resulting implementation and future enhancements.

  20. NEOS server 4.0 administrative guide.

    SciTech Connect

    Dolan, E. D.

    2001-07-13

    The NEOS Server 4.0 provides a general Internet-based client/server link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and trouble-shooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver administrators such as maintaining security, providing usage instructions, and enforcing reasonable restrictions on jobs. The administrative guide is intended both as an introduction to the NEOS Server and as a reference for use when running the Server.

  1. Purge Lock Server

    2012-08-21

    The software provides a simple web API to allow users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock". When a purge lock is set on a file, the file will not be removed from disk, entering a tape-only state. Many network file protocols assume a file is on disk, so it is good to purge lock a file before transferring it using one of those protocols. HPSS's purge lock system is very coarse-grained, though. A file is either purge locked or not. Nothing enforces quotas, ensures timely unlocking of purge locks, or manages the races inherent in multiple users wanting to lock or unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system will ensure things happen properly.
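
    A hypothetical purge-lock request against such a REST API is sketched below; the endpoint path and JSON field names are assumptions, not the service's documented interface.

```python
# Hypothetical purge-lock request: submit a list of HPSS file paths plus an
# expiry time so the files stay on disk until the window closes. The endpoint
# path and JSON field names are assumptions, not the documented interface.
import json
import urllib.request

PURGE_LOCK_URL = "https://purgelock.example.org/locks"  # hypothetical

payload = {
    "files": ["/hpss/project/run42/events.dat",
              "/hpss/project/run42/index.dat"],
    "expires_in_seconds": 6 * 3600,  # release the locks after six hours
}

req = urllib.request.Request(
    PURGE_LOCK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```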

  2. A Predictive Performance Model to Evaluate the Contention Cost in Application Servers

    SciTech Connect

    Chen, Shiping; Gorton, Ian

    2002-12-04

    In multi-tier enterprise systems, application servers are key components that implement business logic and provide application services. To support a large number of simultaneous accesses from clients over the Internet and intranets, most application servers use replication and multi-threading to handle concurrent requests. While multiple processes and multiple threads enhance the processing bandwidth of servers, they also increase the contention for resources in application servers. This paper investigates this issue empirically based on a middleware benchmark. A cost model is proposed to estimate the overall performance of application servers, including the contention overhead. This model is then used to determine the optimal degree of concurrency of application servers for a specific client load. A case study based on CORBA is presented to validate our model and demonstrate its application.
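
    The sketch below is a toy illustration of why an optimal degree of concurrency exists at all: each extra thread adds capacity but also a contention penalty. The functional form and constants are illustrative assumptions, not the cost model proposed in the paper.

```python
# Toy model of throughput versus concurrency: each extra server thread adds
# capacity but also a contention penalty (assumed quadratic here). The form
# and constants are illustrative, not the cost model proposed in the paper.
SERVICE_TIME = 0.020      # seconds of pure work per request (assumed)
CONTENTION_COST = 0.0002  # contention penalty coefficient (assumed)

def throughput(threads):
    """Requests per second handled by the given number of threads."""
    per_request = SERVICE_TIME + CONTENTION_COST * (threads - 1) ** 2
    return threads / per_request

best = max(range(1, 101), key=throughput)
print(f"optimal concurrency: {best} threads, {throughput(best):.0f} requests/s")
```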

  3. Server-side Filtering and Aggregation within a Distributed Environment

    NASA Astrophysics Data System (ADS)

    Currey, J. C.; Bartle, A.

    2015-12-01

    Intercalibration, validation, and data mining use cases require more efficient access to the massive volumes of observation data distributed across multiple agency data centers. The traditional paradigm of downloading large volumes of data to a centralized server or desktop computer for analysis is no longer viable. More analysis should be performed within the host data centers using server-side functions. Many comparative analysis tasks require far less than 1% of the available observation data. The Multi-Instrument Intercalibration (MIIC) Framework provides web services to find, match, filter, and aggregate multi-instrument observation data. Matching measurements from separate spacecraft in time, location, wavelength, and viewing geometry is a difficult task, especially when data are distributed across multiple agency data centers. Event prediction services identify near-coincident measurements with matched viewing geometries near orbit crossings using complex orbit propagation and spherical geometry calculations. The number and duration of event opportunities depend on orbit inclinations, altitude differences, and requested viewing conditions (e.g., day/night). Event observation information is passed to remote server-side functions to retrieve matched data. Data may be gridded, spatially convolved onto instantaneous fields-of-view, or spectrally resampled or convolved. Narrowband instruments are routinely compared to hyperspectral instruments such as AIRS and CRIS using relative spectral response (RSR) functions. Spectral convolution within server-side functions significantly reduces the amount of hyperspectral data needed by the client. This combination of intelligent selection and server-side processing significantly reduces network traffic and data to process on local servers. OPeNDAP is a mature networking middleware already deployed at many of the Earth science data centers. Custom OPeNDAP server-side functions that provide filtering, histogram analysis (1D
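
    The spectral convolution step mentioned above amounts to an RSR-weighted average of the hyperspectral radiances. The sketch below shows the operation on synthetic data; the Gaussian response and the spectrum are stand-ins for real instrument RSRs and radiances.

```python
# RSR-weighted spectral convolution on synthetic data: collapse a hyperspectral
# spectrum onto one narrowband channel. The Gaussian response and the spectrum
# are stand-ins for real instrument RSRs and radiances.
import numpy as np

wavenumber = np.linspace(650.0, 2750.0, 2378)        # cm^-1, synthetic grid
radiance = 100.0 + 5.0 * np.sin(wavenumber / 50.0)   # synthetic spectrum

# Assumed narrowband channel: Gaussian response centred at 900 cm^-1.
center, width = 900.0, 20.0
rsr = np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

# RSR-weighted mean radiance: what the narrowband instrument would report.
band_radiance = np.sum(rsr * radiance) / np.sum(rsr)
print(f"convolved channel radiance: {band_radiance:.2f}")
```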

  4. Design of Accelerator Online Simulator Server Using Structured Data

    SciTech Connect

    Shen, Guobao; Chu, Chungming; Wu, Juhao; Kraimer, Martin; /Argonne

    2012-07-06

    Model-based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network-accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.

  5. The SAPHIRE server: a new algorithm and implementation.

    PubMed Central

    Hersh, W.; Leone, T. J.

    1995-01-01

    SAPHIRE is an experimental information retrieval system implemented to test new approaches to automated indexing and retrieval of medical documents. Due to limitations in its original concept-matching algorithm, a modified algorithm has been implemented which allows greater flexibility in partial matching and different word order within concepts. With the concomitant growth in client-server applications and the Internet in general, the new algorithm has been implemented as a server that can be accessed via other applications on the Internet. PMID:8563413

  6. Compute Server Performance Results

    NASA Technical Reports Server (NTRS)

    Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.

  7. Preparing for the New Remote Access.

    ERIC Educational Resources Information Center

    Taylor, William E.

    1997-01-01

    Integrated remote access servers support many different types of access. Remote access has been integrated as a strategic tool as application developers build remote access capabilities into their software. Discusses demands of using remote access as a strategic component and management matters. (AEF)

  8. Advancing the Power and Utility of Server-Side Aggregation

    NASA Technical Reports Server (NTRS)

    Fulker, Dave; Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.

  9. APPRIS WebServer and WebServices

    PubMed Central

    Rodriguez, Jose Manuel; Carro, Angel; Valencia, Alfonso; Tress, Michael L.

    2015-01-01

    This paper introduces the APPRIS WebServer (http://appris.bioinfo.cnio.es) and WebServices (http://apprisws.bioinfo.cnio.es). Both the web servers and the web services are based around the APPRIS Database, a database that presently houses annotations of splice isoforms for five different vertebrate genomes. The APPRIS WebServer and WebServices provide access to the computational methods implemented in the APPRIS Database, while the APPRIS WebServices also allows retrieval of the annotations. The APPRIS WebServer and WebServices annotate splice isoforms with protein structural and functional features, and with data from cross-species alignments. In addition they can use the annotations of structure, function and conservation to select a single reference isoform for each protein-coding gene (the principal protein isoform). APPRIS principal isoforms have been shown to agree overwhelmingly with the main protein isoform detected in proteomics experiments. The APPRIS WebServer allows for the annotation of splice isoforms for individual genes, and provides a range of visual representations and tools to allow researchers to identify the likely effect of splicing events. The APPRIS WebServices permit users to generate annotations automatically in high throughput mode and to interrogate the annotations in the APPRIS Database. The APPRIS WebServices have been implemented using REST architecture to be flexible, modular and automatic. PMID:25990727

  10. RNAiFold: a web server for RNA inverse folding and molecular design

    PubMed Central

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-01-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website. PMID:23700314

  11. RNAiFold: a web server for RNA inverse folding and molecular design.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website.

  12. Generic OPC UA Server Framework

    NASA Astrophysics Data System (ADS)

    Nikiel, Piotr P.; Farnham, Benjamin; Filimonov, Viatcheslav; Schlenker, Stefan

    2015-12-01

    This paper describes a new approach for generic design and efficient development of OPC UA servers. Development starts with creation of a design file, in XML format, describing an object-oriented information model of the target system or device. Using this model, the framework generates an executable OPC UA server application, which exposes the per-design OPC UA address space, without the developer writing a single line of code. Furthermore, the framework generates skeleton code into which the developer adds the necessary logic for integration to the target system or device. This approach allows both developers unfamiliar with the OPC UA standard, and advanced OPC UA developers, to create servers for the systems they are experts in while greatly reducing design and development effort as compared to developments based purely on COTS OPC UA toolkits. Higher level software may further benefit from the explicit OPC UA server model by using the XML design description as the basis for generating client connectivity configuration and server data representation. Moreover, having the XML design description at hand facilitates automatic generation of validation tools. In this contribution, the concept and implementation of this framework is detailed along with examples of actual production-level usage in the detector control system of the ATLAS experiment at CERN and beyond.

  13. WAViS server for handling, visualization and presentation of multiple alignments of nucleotide or amino acids sequences.

    PubMed

    Zika, Radek; Paces, Jan; Pavlícek, Adam; Paces, Václav

    2004-07-01

    Web Alignment Visualization Server contains a set of web-tools designed for quick generation of publication-quality color figures of multiple alignments of nucleotide or amino acids sequences. It can be used for identification of conserved regions and gaps within many sequences using only common web browsers. The server is accessible at http://wavis.img.cas.cz.

  14. Opportunities for the Mashup of Heterogeneous Data Server via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna

    2015-04-01

    European Union ESPAS, Japanese IUGONET and GFZ ISDC data servers are developed for the ingestion, archiving and distribution of geo and space science domain data. Main parts of the data managed by the mentioned data servers are related to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of and access to data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the Semantic Web approach. Besides the mashup of domain and terminological ontologies, especially the options to connect data managed by relational databases using D2R server and SPARQL technology will be addressed. A successful realization of the data server mashup will not only have a positive impact on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
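
    The kind of cross-server query such a mashup enables is sketched below: a SPARQL request against an endpoint exposed, for example, by a D2R server in front of a relational database. The endpoint URL and the DCAT/Dublin Core vocabulary usage are assumptions for illustration.

```python
# Cross-server SPARQL query sketch against an endpoint exposed, e.g., by a D2R
# server in front of a relational database. The endpoint URL and the use of
# DCAT/Dublin Core terms are assumptions for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

endpoint = SPARQLWrapper("https://isdc.example.org/d2r/sparql")  # hypothetical
endpoint.setQuery("""
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>

SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title .
}
LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```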

  15. Identifying and Analyzing Web Server Attacks

    SciTech Connect

    Seifert, Christian; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.; Komisarczuk, Peter; Muschevici, Radu; Welch, Ian D.

    2008-08-29

    Abstract: Client honeypots can be used to identify malicious web servers that attack web browsers and push malware to client machines. Merely recording network traffic is insufficient to perform comprehensive forensic analyses of such attacks. Custom tools are required to access and analyze network protocol data. Moreover, specialized methods are required to perform a behavioral analysis of an attack, which helps determine exactly what transpired on the attacked system. This paper proposes a record/replay mechanism that enables forensic investigators to extract application data from recorded network streams and allows applications to interact with this data in order to conduct behavioral analyses. Implementations for the HTTP and DNS protocols are presented and their utility in network forensic investigations is demonstrated.

  16. The widest practicable dissemination: The NASA technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael; Accomazzi, Alberto

    1995-01-01

    The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  17. NOBAI: a web server for character coding of geometrical and statistical features in RNA structure.

    PubMed

    Knudsen, Vegeir; Caetano-Anollés, Gustavo

    2008-07-01

    The Numeration of Objects in Biology: Alignment Inferences (NOBAI) web server provides a web interface to the applications in the NOBAI software package. This software codes topological and thermodynamic information related to the secondary structure of RNA molecules as multi-state phylogenetic characters, builds character matrices directly in NEXUS format and provides sequence randomization options. The web server is an effective tool that facilitates the search for evolutionary history embedded in the structure of functional RNA molecules. The NOBAI web server is accessible at 'http://www.manet.uiuc.edu/nobai/nobai.php'. This web site is free and open to all users and there is no login requirement.

  18. Hybrid metrology implementation: server approach

    NASA Astrophysics Data System (ADS)

    Osorio, Carmen; Timoney, Padraig; Vaid, Alok; Elia, Alex; Kang, Charles; Bozdog, Cornel; Yellai, Naren; Grubner, Eyal; Ikegami, Toru; Ikeno, Masahiko

    2015-03-01

    Hybrid metrology (HM) is the practice of combining measurements from multiple toolset types in order to enable or improve metrology for advanced structures. HM is implemented in two phases: Phase-1 includes readiness of the infrastructure to transfer processed data from the first toolset to the second. Phase-2 infrastructure allows simultaneous transfer and optimization of raw data between toolsets such as spectra, images, traces - co-optimization. We discuss the extension of Phase-1 to include direct high-bandwidth communication between toolsets using a hybrid server, enabling seamless fab deployment and further laying the groundwork for Phase-2 high volume manufacturing (HVM) implementation. An example of the communication protocol shows the information that can be used by the hybrid server, differentiating its capabilities from that of a host-based approach. We demonstrate qualification and production implementation of the hybrid server approach using CD-SEM and OCD toolsets for complex 20nm and 14nm applications. Finally we discuss the roadmap for Phase-2 HM implementation through use of the hybrid server.

  19. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  20. Nuke@ - a nuclear information internet server

    SciTech Connect

    Slone, B.J. III.; Richardson, C.E.

    1994-12-31

    To facilitate Internet communications between nuclear utilities, vendors, agencies, and other interested parties, an Internet server is being established. This server will provide the nuclear industry with its first file-transfer protocol (ftp) connection point, its second mail server, and a potential telnet connection location.

  1. Aviation System Analysis Capability Quick Response System Report Server User's Guide

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen R.; Villani, James A.; Wingrove, Earl R., III

    1996-01-01

    This report is a user's guide for the Aviation System Analysis Capability Quick Response System (ASAC QRS) Report Server. The ASAC QRS is an automated online capability to access selected ASAC models and data repositories. It supports analysis by the aviation community. This system was designed by the Logistics Management Institute for the NASA Ames Research Center. The ASAC QRS Report Server allows users to obtain information stored in the ASAC Data Repositories.

  2. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  3. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided.
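
    A hypothetical sketch of programmatic access to a service of this kind is shown below: submit a FASTA sequence over HTTP and poll for the XML result. The endpoint paths, parameter names and completion marker are placeholders, not CCTOP's documented protocol; the server's own example client script shows the real one.

```python
# Hypothetical programmatic submission to a topology-prediction service:
# POST a sequence, then poll for the XML result. Endpoint paths, parameter
# names and the completion marker are placeholders, not CCTOP's real protocol.
import time
import urllib.parse
import urllib.request

SUBMIT_URL = "http://cctop.example.org/api/submit"  # hypothetical
RESULT_URL = "http://cctop.example.org/api/result"  # hypothetical

fasta = ">demo\nMKTLLLTLVVVTIVCLDLGYT"  # toy sequence

with urllib.request.urlopen(
        SUBMIT_URL,
        data=urllib.parse.urlencode({"sequence": fasta}).encode("ascii"),
        timeout=60) as resp:
    job_id = resp.read().decode().strip()  # assume the reply is a job id

while True:
    with urllib.request.urlopen(f"{RESULT_URL}?id={job_id}", timeout=60) as resp:
        body = resp.read().decode()
    if "<CCTOPresult" in body:             # assumed marker for a finished job
        print(body[:400])
        break
    time.sleep(30)                         # job still running; poll again
```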

  4. CCTOP: a Consensus Constrained TOPology prediction web server

    PubMed Central

    Dobson, László; Reményi, István; Tusnády, Gábor E.

    2015-01-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. PMID:25943549

  5. High-Performance Tiled WMS and KML Web Server

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
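
    A request against such a module follows the standard WMS GetMap parameters, as in the sketch below; the server URL and layer name are placeholders, and the bounding box, width and height must match the module's predefined request grid to hit an existing tile.

```python
# Standard WMS 1.1.1 GetMap request; the server URL and layer name are
# placeholders, and BBOX/WIDTH/HEIGHT must line up with the module's
# predefined request grid so the request maps onto an existing tile.
import urllib.parse
import urllib.request

WMS_URL = "https://tiles.example.org/wms"  # hypothetical endpoint

params = urllib.parse.urlencode({
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "global_mosaic",       # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,-171,-81",     # one cell of an assumed 9-degree grid
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/jpeg",
})

with urllib.request.urlopen(f"{WMS_URL}?{params}", timeout=60) as resp:
    with open("tile.jpg", "wb") as out:
        out.write(resp.read())
```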

  6. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1991-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  7. File servers, networking, and supercomputers

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    1992-01-01

    One of the major tasks of a supercomputer center is managing the massive amount of data generated by application codes. A data flow analysis of the San Diego Supercomputer Center is presented that illustrates the hierarchical data buffering/caching capacity requirements and the associated I/O throughput requirements needed to sustain file service and archival storage. Usage paradigms are examined for both tightly-coupled and loosely-coupled file servers linked to the supercomputer by high-speed networks.

  8. The NASA Technical Report Server

    NASA Astrophysics Data System (ADS)

    Nelson, M. L.; Gottlich, G. L.; Bianco, D. J.; Paulson, S. S.; Binkley, R. L.; Kellogg, Y. D.; Beaumont, C. J.; Schmunk, R. B.; Kurtz, M. J.; Accomazzi, A.; Syed, O.

    The National Aeronautics and Space Act of 1958 established the National Aeronautics and Space Administration (NASA) and charged it to "provide for the widest practicable and appropriate dissemination of information concerning...its activities and the results thereof". The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems.

  9. Client-Server Password Recovery

    NASA Astrophysics Data System (ADS)

    Chmielewski, Łukasz; Hoepman, Jaap-Henk; van Rossum, Peter

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the password. These protocols can be easily adapted to the personal entropy setting [7], where a user can recover a password only if he can answer a large enough subset of personal questions.

  10. An integrated medical image database and retrieval system using a web application server.

    PubMed

    Cao, Pengyu; Hashiba, Masao; Akazawa, Kouhei; Yamakawa, Tomoko; Matsuto, Takayuki

    2003-08-01

    We developed an Integrated Medical Image Database and Retrieval System (INIS) for easy access by medical staff. The INIS mainly consisted of four parts: specific servers to save medical images from multi-vendor modalities of CT, MRI, CR, ECG and endoscopy; an integrated image database (DB) server to save various kinds of images in a DICOM format; a Web application server to connect clients to the integrated image DB; and the Web browser terminals connected to an HIS system. The INIS provided a common screen design to retrieve CT, MRI, CR, endoscopic and ECG images, and radiological reports, which would allow doctors to retrieve radiological images and corresponding reports, or ECG images of a patient, simultaneously on a screen. Doctors working in internal medicine on average accessed information 492 times a month. Doctors working in cardiology and gastroenterology accessed information 308 times a month. Using the INIS, medical staff could browse all or parts of a patient's medical images and reports.

  11. Mining the SDSS SkyServer SQL queries log

    NASA Astrophysics Data System (ADS)

    Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani

    2016-05-01

    SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomical catalog, provides a set of tools that allows data access for astronomers and scientific education. One of SkyServer's data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents some template queries that can be used as the basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of these data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc., and to shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements in the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned, namely, that the general text mining approach for feature extraction and clustering does not seem to be adequate for this type of data, and, most importantly, we find that this type of analysis can result in very different queries being clustered together.
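
    The 'traditional' feature-extraction-and-clustering pipeline evaluated in the paper can be sketched as below: normalize each logged query, vectorize with TF-IDF and cluster with k-means. The toy queries are stand-ins for log entries, and the pipeline is shown only to illustrate the approach whose limitations the paper discusses.

```python
# Illustrative text-mining pipeline over a handful of toy SkyServer-style
# queries: normalize literals, extract TF-IDF features, cluster with k-means.
# This reproduces the generic approach whose limitations the paper discusses;
# the queries are invented stand-ins for real log entries.
import re
from sklearn.cluster import KMeans                           # pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer

log = [
    "SELECT TOP 10 objID, ra, dec FROM PhotoObj WHERE r < 17.5",
    "SELECT TOP 10 objID, ra, dec FROM PhotoObj WHERE g < 16.0",
    "SELECT z, zErr FROM SpecObj WHERE class = 'QSO'",
    "SELECT count(*) FROM SpecObj WHERE zWarning = 0",
]

def normalize(sql):
    """Lowercase and mask literals so queries differing only in constants match."""
    sql = sql.lower()
    sql = re.sub(r"'[^']*'", "str_literal", sql)
    sql = re.sub(r"\b\d+(\.\d+)?\b", "num_literal", sql)
    return sql

features = TfidfVectorizer().fit_transform([normalize(q) for q in log])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # cluster id assigned to each logged query
```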

  12. A distributed clients/distributed servers model for STARCAT

    NASA Technical Reports Server (NTRS)

    Pirenne, B.; Albrecht, M. A.; Durand, D.; Gaudet, S.

    1992-01-01

    STARCAT, the Space Telescope ARchive and CATalogue user interface, has been around for a number of years already. During this time it has been enhanced and augmented in a number of different fields. Here we would like to dwell on a new capability allowing geographically distributed user interfaces to connect to geographically distributed data servers. This new concept permits users anywhere on the Internet running STARCAT on their local hardware to access, e.g., whichever of the 3 existing HST archive sites is available, to get information on the CFHT archive through a transparent connection to the CADC in BC, or to get the La Silla weather by connecting to the ESO database in Munich, all during the same session. Similarly, PreView (or quick look) images and spectra will also flow directly to the user from wherever they are available. Moving towards an 'X'-based STARCAT is another goal being pursued: a graphic/image server and a help/doc server are currently being added to it. They should further enhance user independence and access transparency.

  13. fastSCOP: a fast web server for recognizing protein structural domains and SCOP superfamilies.

    PubMed

    Tung, Chi-Hua; Yang, Jinn-Moon

    2007-07-01

    The fastSCOP is a web server that rapidly identifies the structural domains and determines the evolutionary superfamilies of a query protein structure. This server uses 3D-BLAST to quickly scan a large structural classification database (SCOP1.71 with <95% identity with each other), and the top 10 hit domains, which have different superfamily classifications, are obtained from the hit lists. MAMMOTH, a detailed structural alignment tool, is adopted to align these top 10 structures to refine domain boundaries and to identify evolutionary superfamilies. Our previous work demonstrated that 3D-BLAST is as fast as BLAST, and has the characteristics of BLAST (e.g. a robust statistical basis, effective search and reliable database search capabilities) in large structural database searches based on a structural alphabet database and a structural alphabet substitution matrix. The classification accuracy of this server is approximately 98% for 586 query structures and the average execution time is approximately 5. This server was also evaluated on 8700 structures, which have no annotations in the SCOP; the server can automatically assign 7311 (84%) proteins (9420 domains) to the SCOP superfamilies in 9.6 h. These results suggest that the fastSCOP is robust and can be a useful server for recognizing the evolutionary classifications and the protein functions of novel structures. The server is accessible at http://fastSCOP.life.nctu.edu.tw.

  14. The SAMGrid database server component: its upgraded infrastructure and future development path

    SciTech Connect

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St. Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.; /Oxford U. /Rutgers U., Piscataway /Texas Tech.

    2004-12-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema.

  15. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users perform modeling, assessment, visualization and optimization of protein models, as well as to help crystallographers evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, WHAT_CHECK, PROCHECK, Raster3D, MolScript and GROMACS. This web server is freely accessible at .

  16. The HADDOCK web server for data-driven biomolecular docking.

    PubMed

    de Vries, Sjoerd J; van Dijk, Marc; Bonvin, Alexandre M J J

    2010-05-01

    Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for a wide range of experimental data. Here we present the HADDOCK web server protocol, facilitating the modeling of biomolecular complexes for a wide community. The main web interface is user-friendly, requiring only the structures of the individual components and a list of interacting residues as input. Additional web interfaces allow the more advanced user to exploit the full range of experimental data supported by HADDOCK and to customize the docking process. The HADDOCK server has access to the resources of a dedicated cluster and of the e-NMR GRID infrastructure. Therefore, a typical docking run takes only a few minutes to prepare and a few hours to complete.

  17. High performance medical image processing in client/server-environments.

    PubMed

    Mayer, A; Meinzer, H P

    1999-03-01

    As 3D scanning devices like computed tomography (CT) or magnetic resonance imaging (MRI) become more widespread, there is also an increasing need for powerful computers that can handle the enormous amounts of data with acceptable response times. We describe an approach to parallelizing some of the more frequently used image processing operators on distributed memory architectures. It is desirable to make such specialized machines accessible on a network, in order to save costs by sharing resources. We present a client/server approach that is specifically tailored to interactive work with volume data. Our image processing server implements a volume visualization method that allows the user to assess the segmentation of anatomical structures. We can enhance the presentation by combining the volume visualizations on a viewing station with additional graphical elements, which can be manipulated in real time. The methods presented were verified in two applications from different domains. PMID:10094225

  18. Running the Sloan Digital Sky Survey data archive server

    SciTech Connect

    Neilsen, Eric H., Jr.; Stoughton, Chris; /Fermilab

    2006-11-01

    The Sloan Digital Sky Survey (SDSS) Data Archive Server (DAS) provides public access to over 12 TB of data in 17 million files produced by the SDSS data reduction pipeline. Many tasks which seem trivial when serving smaller, less complex data sets present challenges when serving data of this volume and technical complexity. The included output files should be chosen to support as much science as possible from publicly released data, and only publicly released data. Users must have the resources needed to read and interpret the data correctly. Server administrators must generate new data releases at regular intervals, monitor usage, quickly recover from hardware failures, and monitor the data served by the DAS both for content and corruption. We discuss these challenges, describe tools we use to administer and support the DAS, and discuss future development plans.

  19. Running the Sloan Digital Sky Survey Data Archive Server

    NASA Astrophysics Data System (ADS)

    Neilsen, E. H., Jr.; Stoughton, C.

    2007-10-01

    The Sloan Digital Sky Survey (SDSS) Data Archive Server (DAS) provides public access to over 12 TB of data in 17 million files produced by the SDSS data reduction pipeline. Many tasks that seem trivial when serving smaller, less complex data sets present challenges when serving data of this volume and technical complexity. The included output files should be chosen to support as much science as possible from publicly released data, and only publicly released data. Users must have the resources needed to read and interpret the data correctly. Server administrators must generate new data releases at regular intervals, monitor usage, quickly recover from hardware failures, and monitor the data served by the DAS both for content and corruption. We discuss these challenges, describe tools we use to administer and support the DAS, and discuss future development plans.

  20. The Medicago truncatula gene expression atlas web server

    PubMed Central

    2009-01-01

    Background Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix Medicago Genome Array GeneChip data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description The Medicago truncatula Gene Expression Atlas (MtGEA) web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible at: http

  1. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.

  3. JPred4: a protein secondary structure prediction server.

    PubMed

    Drozdetskiy, Alexey; Cole, Christian; Procter, James; Barton, Geoffrey J

    2015-07-01

    JPred4 (http://www.compbio.dundee.ac.uk/jpred4) is the latest version of the popular JPred protein secondary structure prediction server which provides predictions by the JNet algorithm, one of the most accurate methods for secondary structure prediction. In addition to protein secondary structure, JPred also makes predictions of solvent accessibility and coiled-coil regions. The JPred service runs up to 94 000 jobs per month and has carried out over 1.5 million predictions in total for users in 179 countries. The JPred4 web server has been re-implemented in the Bootstrap framework and JavaScript to improve its design, usability and accessibility from mobile devices. JPred4 features higher accuracy, with a blind three-state (α-helix, β-strand and coil) secondary structure prediction accuracy of 82.0% while solvent accessibility prediction accuracy has been raised to 90% for residues <5% accessible. Reporting of results is enhanced both on the website and through the optional email summaries and batch submission results. Predictions are now presented in SVG format with options to view full multiple sequence alignments with and without gaps and insertions. Finally, the help-pages have been updated and tool-tips added as well as step-by-step tutorials. PMID:25883141

  4. JPred4: a protein secondary structure prediction server

    PubMed Central

    Drozdetskiy, Alexey; Cole, Christian; Procter, James; Barton, Geoffrey J.

    2015-01-01

    JPred4 (http://www.compbio.dundee.ac.uk/jpred4) is the latest version of the popular JPred protein secondary structure prediction server which provides predictions by the JNet algorithm, one of the most accurate methods for secondary structure prediction. In addition to protein secondary structure, JPred also makes predictions of solvent accessibility and coiled-coil regions. The JPred service runs up to 94 000 jobs per month and has carried out over 1.5 million predictions in total for users in 179 countries. The JPred4 web server has been re-implemented in the Bootstrap framework and JavaScript to improve its design, usability and accessibility from mobile devices. JPred4 features higher accuracy, with a blind three-state (α-helix, β-strand and coil) secondary structure prediction accuracy of 82.0% while solvent accessibility prediction accuracy has been raised to 90% for residues <5% accessible. Reporting of results is enhanced both on the website and through the optional email summaries and batch submission results. Predictions are now presented in SVG format with options to view full multiple sequence alignments with and without gaps and insertions. Finally, the help-pages have been updated and tool-tips added as well as step-by-step tutorials. PMID:25883141

  5. Toward rational protein crystallization: A Web server for the design of crystallizable protein variants

    PubMed Central

    Goldschmidt, Lukasz; Cooper, David R.; Derewenda, Zygmunt S.; Eisenberg, David

    2007-01-01

    Growing well-diffracting crystals constitutes a serious bottleneck in structural biology. A recently proposed crystallization methodology for “stubborn crystallizers” is to engineer surface sequence variants designed to form intermolecular contacts that could support a crystal lattice. This approach relies on the concept of surface entropy reduction (SER), i.e., the replacement of clusters of flexible, solvent-exposed residues with residues of lower conformational entropy. This strategy minimizes the loss of conformational entropy upon crystallization and renders crystallization thermodynamically favorable. The method has been successfully used to crystallize more than 15 novel proteins, all stubborn crystallizers. But the choice of suitable sites for mutagenesis is not trivial. Herein, we announce a Web server, the surface entropy reduction prediction server (SERp server), designed to identify mutations that may facilitate crystallization. Suggested mutations are predicted based on an algorithm incorporating a conformational entropy profile, a secondary structure prediction, and sequence conservation. Minor considerations include the nature of flanking residues and gaps between mutation candidates. While designed to be used with default values, the server has many user-controlled parameters allowing for considerable flexibility. Here we discuss (1) the methodology of the server, (2) how to interpret the results, and (3) factors that must be considered when selecting mutations. We also attempt to benchmark the server by comparing the server's predictions with successful SER structures. In most cases, the structure-yielding mutations were easily identified by the SERp server. The server can be accessed at http://www.doe-mbi.ucla.edu/Services/SER. PMID:17656576

  6. GrayStarServer: Server-side Spectrum Synthesis with a Browser-based Client-side User Interface

    NASA Astrophysics Data System (ADS)

    Short, C. Ian

    2016-10-01

    We present GrayStarServer (GSS), a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a timescale of a few seconds. The addition of spectrum synthesis annotated with line identifications extends the functionality and pedagogical applicability of GSS beyond that of its predecessor, GrayStar3 (GS3). The spectrum synthesis is based on a line list acquired from the NIST atomic spectra database, and the GSS post-processing and user interface client allows the user to inspect the plain text ASCII version of the line list, as well as to apply macroscopic broadening. Unlike GS3, GSS carries out the physical modeling on the server side in Java, and communicates with the JavaScript and HTML client via an asynchronous HTTP request. We also describe other improvements beyond GS3 such as a more physical treatment of background opacity and atmospheric physics, the comparison of key results with those of the Phoenix code, and the use of the HTML <canvas> element for higher quality plotting and rendering of results. We also present LineListServer, a Java code for converting custom ASCII line lists in NIST format to the byte data type file format required by GSS so that users can prepare their own custom line lists. We propose a standard for marking up and packaging model atmosphere and spectrum synthesis output for data transmission and storage that will facilitate a web-based approach to stellar atmospheric modeling and spectrum synthesis. We describe some pedagogical demonstrations and exercises enabled by easily accessible, on-demand, responsive spectrum synthesis. GSS may serve as a research support tool by providing quick spectroscopic reconnaissance. GSS may be found at www.ap.smu.ca/~ishort/OpenStars/GrayStarServer/grayStarServer.html, and source tarballs for local installations of both GSS and LineListServer may be found at www.ap.smu.ca/~ishort/OpenStars/.

  7. Remote Data Access with IDL

    NASA Technical Reports Server (NTRS)

    Galloy, Michael

    2013-01-01

    A tool based on IDL (Interactive Data Language) and DAP (Data Access Protocol) has been developed for user-friendly remote data access. A difficulty for many NASA researchers using IDL is that often the data to analyze are located remotely and are too large to transfer for local analysis. Researchers have developed a protocol for accessing remote data, DAP, which is used for both SOHO and STEREO data sets. Server-side analysis via IDL routines is available through DAP.

  8. National Medical Terminology Server in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in hospitals ranging from local primary care to tertiary care. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  9. UniTree Name Server internals

    SciTech Connect

    Mecozzi, D.; Minton, J.

    1996-01-01

    The UniTree Name Server (UNS) is one of several servers which make up the UniTree storage system. The Name Server is responsible for mapping names to capabilities. Names are generally human-readable ASCII strings of any length. Capabilities are unique 256-bit identifiers that point to files, directories, or symbolic links. The Name Server implements a UNIX-style hierarchical directory structure to facilitate name-to-capability mapping. The principal task of the Name Server is to manage the directories which make up the UniTree directory structure. The principal clients of the Name Server are the FTP Daemon, NFS and a few UniTree utility routines. However, the Name Server is a generalized server and will accept messages from any client. The purpose of this paper is to describe the internal workings of the UniTree Name Server. In cases where it seems appropriate, the motivation for a particular choice of algorithm as well as a description of the algorithm itself will be given.
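
    The name-to-capability mapping described above can be sketched in a few lines. This is purely illustrative Python, not UniTree code; the Node class, the choice of 32 random bytes as a capability and the resolve helper are all assumptions made for the example.

```python
# Illustrative sketch of a name-to-capability directory in the spirit of the
# UniTree Name Server. Not actual UniTree code.
import secrets
from dataclasses import dataclass, field

@dataclass
class Node:
    capability: bytes                              # unique 256-bit identifier
    kind: str = "file"                             # "file", "directory" or "symlink"
    children: dict = field(default_factory=dict)   # name -> Node (directories only)

def new_capability() -> bytes:
    """Generate a unique 256-bit capability."""
    return secrets.token_bytes(32)

root = Node(new_capability(), kind="directory")

def resolve(path: str) -> bytes:
    """Map a human-readable UNIX-style path to a capability."""
    node = root
    for part in filter(None, path.split("/")):
        node = node.children[part]                 # KeyError if the name is unknown
    return node.capability

# Example: create /home/data.dat and look it up again.
home = Node(new_capability(), kind="directory")
root.children["home"] = home
home.children["data.dat"] = Node(new_capability())
print(resolve("/home/data.dat").hex())
```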

  10. WSKE: Web Server Key Enabled Cookies

    NASA Astrophysics Data System (ADS)

    Masone, Chris; Baek, Kwang-Hyun; Smith, Sean

    In this paper, we present the design and prototype of a new approach to cookie management: if a server deposits a cookie only after authenticating itself via the SSL handshake, the browser will return the cookie only to a server that can authenticate itself, via SSL, with the same keypair. This approach enables usable yet secure client authentication and improves the usability of server authentication by clients. It is superior to the prior work on Active Cookies in that it defends against both DNS spoofing and IP spoofing, and it does not require binding a user's interaction with a server to individual IP addresses.
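
    A minimal sketch of the key idea follows, assuming a client that can see the DER-encoded public key the server presented during the SSL handshake; the function and variable names are made up for illustration, and this is not the authors' prototype.

```python
# Illustrative sketch of the WSKE idea: bind each stored cookie to the public
# key the server used during the SSL/TLS handshake, and release it only to a
# server presenting the same key. All names here are hypothetical.
import hashlib

cookie_jar = {}  # (domain, key_fingerprint) -> cookie value

def fingerprint(server_public_key_der: bytes) -> str:
    return hashlib.sha256(server_public_key_der).hexdigest()

def store_cookie(domain: str, server_public_key_der: bytes, cookie: str) -> None:
    """Deposit a cookie only after the server has authenticated via SSL."""
    cookie_jar[(domain, fingerprint(server_public_key_der))] = cookie

def cookies_for(domain: str, server_public_key_der: bytes) -> list:
    """Return cookies only if the presented key matches the depositing key,
    so DNS or IP spoofing with a different keypair yields nothing."""
    key = (domain, fingerprint(server_public_key_der))
    return [cookie_jar[key]] if key in cookie_jar else []
```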

  11. PlanetServer/EarthServer: Big Data analytics in Planetary Science

    NASA Astrophysics Data System (ADS)

    Pio Rossi, Angelo; Oosthoek, Jelmer; Baumann, Peter; Beccati, Alan; Cantini, Federico; Misev, Dimitar; Orosei, Roberto; Flahaut, Jessica; Campalani, Piero; Unnithan, Vikram

    2014-05-01

    Planetary data are freely available from the PDS/PSA archives and similar repositories (e.g. Heather et al., 2013). Their exploitation by the community is somewhat limited by the variable availability of calibrated/higher-level datasets. An additional complexity of these multi-experiment, multi-mission datasets is related to the heterogeneity of the data themselves, rather than their volume. Orbital data, so far, are best suited for inclusion in array databases (Baumann et al., 1994). Most lander- or rover-based remote sensing experiments (and possibly in-situ ones as well) are suitable for similar approaches, although the complexity of coordinate reference systems (CRS) is higher in the latter case. PlanetServer, the Planetary Service of the EC FP7 e-infrastructure project EarthServer (http://earthserver.eu), is a state-of-the-art online data exploration and analysis system based on the Open Geospatial Consortium (OGC) standards for Mars orbital data. It provides access to topographic, panchromatic, multispectral and hyperspectral calibrated data. While its core focus has been on hyperspectral data analysis through the OGC Web Coverage Processing Service (Oosthoek et al., 2013; Rossi et al., 2013), the Service has progressively expanded to host sounding radar data as well (Cantini et al., this volume). Additionally, both single-swath and mosaicked imagery and topographic data are being added to the Service, deriving from the HRSC experiment (e.g. Jaumann et al., 2007; Gwinner et al., 2009). The current Mars-centric focus can be extended to other planetary bodies, and most components are general-purpose ones, making its application to the Moon, Mercury or other bodies possible. The Planetary Service of EarthServer is accessible at http://www.planetserver.eu References: Baumann, P. (1994) VLDB J. 4 (3), 401-444, Special Issue on Spatial Database Systems. Cantini, F. et al. (2014) Geophys. Res. Abs., Vol. 16, #EGU2014-3784, this volume. Heather, D., et al. (2013) EuroPlanet Sci. Congr. #EPSC2013-626. Gwinner, K

  12. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication Systems) which can integrate the several PACSs that exist in individual medical institutions. A conventional PACS manages DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. With this mechanism, since the entire file does not always have to be accessed, operations such as finding files and changing titles can be performed at high speed. At the same time, as a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct an integrated system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To address this defect, hierarchical metadata servers are introduced. With this mechanism, not only is fault tolerance increased but the scalability of file access is also increased. To evaluate the proposed system, a prototype using Gfarm was implemented, and the file search times of Gfarm and NFS were compared.
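
    The metadata/image split described above can be illustrated with a short sketch. This is not the authors' implementation; it assumes the pydicom library is available, and the file names and the choice of retained metadata fields are hypothetical.

```python
# Sketch of separating a DICOM file into queryable metadata (for the metadata
# server) and raw image bytes (for the distributed file system, e.g. Gfarm).
import json
import pydicom

def split_dicom(path: str):
    ds = pydicom.dcmread(path)
    pixel_data = ds.PixelData          # raw image bytes for the file system side
    del ds.PixelData                   # what remains is the searchable metadata

    # Keep a small, queryable subset of the metadata (illustrative fields only).
    meta = {
        "PatientID": str(ds.get("PatientID", "")),
        "StudyDate": str(ds.get("StudyDate", "")),
        "Modality": str(ds.get("Modality", "")),
    }
    return meta, pixel_data

meta, image_bytes = split_dicom("example.dcm")   # hypothetical input file
with open("example.meta.json", "w") as f:
    json.dump(meta, f)                            # metadata server side
with open("example.pixels.bin", "wb") as f:
    f.write(image_bytes)                          # distributed file system side
```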

  13. Home media server content management

    NASA Astrophysics Data System (ADS)

    Tokmakoff, Andrew A.; van Vliet, Harry

    2001-07-01

    With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy-to-use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.

  14. R3D Align web server for global nucleotide to nucleotide alignments of RNA 3D structures

    PubMed Central

    Rahrig, Ryan R.; Petrov, Anton I.; Leontis, Neocles B.; Zirbel, Craig L.

    2013-01-01

    The R3D Align web server provides online access to ‘RNA 3D Align’ (R3D Align), a method for producing accurate nucleotide-level structural alignments of RNA 3D structures. The web server provides a streamlined and intuitive interface, input data validation and output that is more extensive and easier to read and interpret than related servers. The R3D Align web server offers a unique Gallery of Featured Alignments, providing immediate access to pre-computed alignments of large RNA 3D structures, including all ribosomal RNAs, as well as guidance on effective use of the server and interpretation of the output. By accessing the non-redundant lists of RNA 3D structures provided by the Bowling Green State University RNA group, R3D Align connects users to structure files in the same equivalence class and the best-modeled representative structure from each group. The R3D Align web server is freely accessible at http://rna.bgsu.edu/r3dalign/. PMID:23716643

  15. CovalentDock Cloud: a web server for automated covalent docking

    PubMed Central

    Ouyang, Xuchang; Zhou, Shuo; Ge, Zemei; Li, Runtao; Kwoh, Chee Keong

    2013-01-01

    Covalent binding is an important mechanism for many drugs to gain their function. We developed a computational algorithm to model this chemical event and extended it to a web server, the CovalentDock Cloud, to make it accessible directly online without any local installation and configuration. It provides a simple yet user-friendly web interface to perform covalent docking experiments and analysis online. The web server accepts the structures of both the ligand and the receptor uploaded by the user or retrieved from online databases with a valid access ID. It identifies the potential covalent binding patterns, carries out the covalent docking experiments and provides visualization of the results for user analysis. This web server is free and open to all users at http://docking.sce.ntu.edu.sg/. PMID:23677616

  16. Optimizing the NASA Technical Report Server.

    ERIC Educational Resources Information Center

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    Modifying the NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, has enhanced its performance, protocol support, and human interfacing. This article discusses the original and revised NTRS architecture, sequential and parallel query methods, and wide area information server (WAIS) uniform…

  17. Get the Word Out with List Servers

    ERIC Educational Resources Information Center

    Goldberg, Laurence

    2006-01-01

    In this article, the author details the use of electronic mail list servers in their school. In their school district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…

  18. You're a What? Process Server

    ERIC Educational Resources Information Center

    Torpey, Elka

    2012-01-01

    In this article, the author talks about the role and functions of a process server. The job of a process server is to hand deliver legal documents to the people involved in court cases. These legal documents range from a summons to appear in court to a subpoena for producing evidence. Process serving can involve risk, as some people take out their…

  19. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, which would allow for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding Ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server and optimizing creations. Although testing was performed in a less than ideal environment, the performance statistics stated in this paper will hopefully give end-users a realistic idea of what performance they can expect in this type of setup.

  20. Barcode server: a visualization-based genome analysis system.

    PubMed

    Mao, Fenglou; Olman, Victor; Wang, Yan; Xu, Ying

    2013-01-01

    We have previously developed a computational method for representing a genome as a barcode image, which makes various genomic features visually apparent. We have demonstrated that this visual capability has made some challenging genome analysis problems relatively easy to solve. We have applied this capability to a number of challenging problems, including (a) identification of horizontally transferred genes, (b) identification of genomic islands with special properties and (c) binning of metagenomic sequences, and achieved highly encouraging results. These application results inspired us to develop this barcode-based genome analysis server for public service, which supports the following capabilities: (a) calculation of the k-mer based barcode image for a provided DNA sequence; (b) detection of sequence fragments in a given genome with distinct barcodes from those of the majority of the genome, (c) clustering of provided DNA sequences into groups having similar barcodes; and (d) homology-based search using Blast against a genome database for any selected genomic regions deemed to have interesting barcodes. The barcode server provides a job management capability, allowing processing of a large number of analysis jobs for barcode-based comparative genome analyses. The barcode server is accessible at http://csbl1.bmb.uga.edu/Barcode. PMID:23457606
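
    The barcode idea lends itself to a compact sketch: slide a window along the genome, count k-mer frequencies in each window and stack the rows into an image in which compositionally atypical regions stand out. The window size, k and normalization below are illustrative choices, not the server's actual parameters.

```python
# Simplified k-mer "barcode" computation: one normalized k-mer frequency row
# per fixed-size window of the input sequence. Parameters are illustrative.
from itertools import product

def kmer_barcode(sequence: str, k: int = 4, window: int = 10_000):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    rows = []
    for start in range(0, len(sequence) - window + 1, window):
        chunk = sequence[start:start + window]
        counts = {km: 0 for km in kmers}
        for i in range(len(chunk) - k + 1):
            km = chunk[i:i + k]
            if km in counts:                 # skip k-mers containing N, etc.
                counts[km] += 1
        total = max(sum(counts.values()), 1)
        rows.append([counts[km] / total for km in kmers])
    # rows x 4**k matrix; plotting it as a grayscale image gives the "barcode",
    # in which horizontally transferred regions appear as anomalous stripes.
    return rows
```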

  1. SARA: a server for function annotation of RNA structures.

    PubMed

    Capriotti, Emidio; Marti-Renom, Marc A

    2009-07-01

    Recent interest in non-coding RNA transcripts has resulted in a rapid increase of deposited RNA structures in the Protein Data Bank. However, a characterization and functional classification of the RNA structure and function space have only been partially addressed. Here, we introduce the SARA program for pair-wise alignment of RNA structures as a web server for structure-based RNA function assignment. The SARA server relies on the SARA program, which aligns two RNA structures based on a unit-vector root-mean-square approach. The likely accuracy of the SARA alignments is assessed by three different P-values estimating the statistical significance of the sequence, secondary structure and tertiary structure identity scores, respectively. Our benchmarks, which relied on a set of 419 RNA structures with known SCOR structural class, indicate that at a negative logarithm of the mean P-value higher than or equal to 2.5, SARA can assign the correct or a similar SCOR class to 81.4% and 95.3% of the benchmark set, respectively. The SARA server is freely accessible via the World Wide Web at http://sgu.bioinfo.cipf.es/services/SARA/.

  2. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of platforms with very different software and hardware requirements, such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client

  3. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed

  4. BPROMPT: A consensus server for membrane protein prediction.

    PubMed

    Taylor, Paul D; Attwood, Teresa K; Flower, Darren R

    2003-07-01

    Protein structure prediction is a cornerstone of bioinformatics research. Membrane proteins require their own prediction methods due to their intrinsically different composition. A variety of tools exist for topology prediction of membrane proteins, many of them available on the Internet. The server described in this paper, BPROMPT (Bayesian PRediction Of Membrane Protein Topology), uses a Bayesian Belief Network to combine the results of other prediction methods, providing a more accurate consensus prediction. Topology predictions with accuracies of 70% for prokaryotes and 53% for eukaryotes were achieved. BPROMPT can be accessed at http://www.jenner.ac.uk/BPROMPT. PMID:12824397
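
    As a rough illustration of consensus prediction by Bayesian combination, the sketch below merges binary per-residue votes using Bayes' rule with assumed per-method reliabilities. BPROMPT itself uses a full Bayesian Belief Network; treating each method's sensitivity and specificity as a single symmetric reliability is a simplification made only for this example, and all numbers are placeholders.

```python
# Naive Bayesian combination of independent binary predictions, as a toy
# stand-in for a consensus membrane-topology predictor.
def consensus_posterior(votes, reliabilities, prior=0.25):
    """votes[i] is True if method i predicts 'membrane'; reliabilities[i] is an
    assumed P(method i is correct). Returns P(membrane | votes)."""
    p_mem, p_not = prior, 1.0 - prior
    for vote, r in zip(votes, reliabilities):
        p_mem *= r if vote else (1.0 - r)       # P(vote | membrane)
        p_not *= (1.0 - r) if vote else r       # P(vote | not membrane)
    return p_mem / (p_mem + p_not)

# Example: two of three (hypothetical) methods vote "membrane" for a residue.
print(round(consensus_posterior([True, True, False], [0.8, 0.7, 0.6]), 3))
```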

  5. dnaMATE: a consensus melting temperature prediction server for short DNA sequences.

    PubMed

    Panjkovich, Alejandro; Norambuena, Tomás; Melo, Francisco

    2005-07-01

    An accurate and robust large-scale melting temperature prediction server for short DNA sequences is presented. The server calculates a consensus melting temperature value using the nearest-neighbor model based on three independent thermodynamic data tables. The consensus method gives an accurate prediction of melting temperature, as has recently been demonstrated in a benchmark performed using all available experimental data for DNA sequences within the length range of 16-30 nt. This constitutes the first web server that has been implemented to perform a large-scale calculation of melting temperatures in real time (up to 5000 DNA sequences can be submitted in a single run). The expected accuracy of calculations carried out by this server in the range of 50-600 mM monovalent salt concentration is that 89% of the melting temperature predictions will have an error or deviation of <5 degrees C from experimental data. The server can be freely accessed at http://dna.bio.puc.cl/tm.html. The standalone executable versions of this software for LINUX, Macintosh and Windows platforms are also freely available at the same web site. Detailed further information supporting this server is available at the same web site referenced above.
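
    The nearest-neighbor calculation at the heart of such a server can be sketched as follows. The thermodynamic values below are placeholders, not the three published tables the server actually combines, and the two-state formula with the CT/4 term assumes a non-self-complementary duplex without salt correction.

```python
# Illustrative nearest-neighbor Tm calculation. Placeholder thermodynamics only.
import math

R = 1.987  # gas constant, cal/(mol*K)

# Placeholder per-dimer enthalpy (kcal/mol) and entropy (cal/(mol*K)) values;
# a real table has one entry per nearest-neighbor pair plus initiation terms.
PLACEHOLDER_NN = {"default": (-8.0, -22.0)}

def nearest_neighbor_tm(seq: str, oligo_conc: float = 5e-7) -> float:
    """Two-state Tm (deg C) from summed nearest-neighbor dH and dS."""
    dH = dS = 0.0
    for i in range(len(seq) - 1):
        h, s = PLACEHOLDER_NN.get(seq[i:i + 2], PLACEHOLDER_NN["default"])
        dH += h
        dS += s
    # Tm = dH / (dS + R*ln(CT/4)), with dH converted from kcal to cal.
    tm_kelvin = (dH * 1000.0) / (dS + R * math.log(oligo_conc / 4.0))
    return tm_kelvin - 273.15

print(round(nearest_neighbor_tm("AGCTTGACCTGTAGCT"), 1))
```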

  6. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization of the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  7. The Argonne Voyager multimedia server

    SciTech Connect

    Disz, T.; Judson, I.; Olson, R.; Stevens, R.

    1997-07-01

    With the growing presence of multimedia-enabled systems, one will see an integration of collaborative computing concepts into the everyday environments of future scientific and technical workplaces. Desktop teleconferencing is in common use today, while more complex desktop teleconferencing technology that relies on the availability of multipoint (greater than two nodes) enabled tools is now starting to become available on PCs. A critical problem when using these collaboration tools is the inability to easily archive multistream, multipoint meetings and make the content available to others. Ideally one would like the ability to capture, record, playback, index, annotate and distribute multimedia stream data as easily as one currently handles text or still image data. While the ultimate goal is still some years away, the Argonne Voyager project is aimed at exploring and developing the media server technology needed to provide a flexible virtual multipoint recording/playback capability. In this article the authors describe the motivating requirements, architecture, implementation, operation, performance, and related work.

  8. Exploring a New Model for Preprint Server: A Case Study of CSPO

    ERIC Educational Resources Information Center

    Hu, Changping; Zhang, Yaokun; Chen, Guo

    2010-01-01

    This paper describes the introduction of an open-access preprint server in China covering 43 disciplines. The system includes mandatory deposit for state-funded research. The paper reports on the repository and its effectiveness and outlines a novel process of peer review of preprints in the repository, which can be incorporated into the established…

  9. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    PubMed

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to go to a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely a server and a patient. Recent research includes a patient's biometric information as well as a password to design a remote user authentication scheme that enhances the security level. In a single-server environment, one server is responsible for providing services to all the authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with the branch servers individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for multi-server environments. In this paper, we show that in their scheme a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in a multi-server environment where patients register only once with a root telecare server, called the registration center (RC), to get services from all the telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.

  11. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    NASA Astrophysics Data System (ADS)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project, which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies are discussed further. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
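
    A hedged sketch of what issuing such a WCPS query over HTTP might look like is given below. The endpoint URL, coverage identifier, axis names and the request parameter names are assumptions for illustration only, not PlanetServer's actual values.

```python
# Hedged sketch of sending a WCPS query to a WCS/WCPS endpoint with requests.
import requests

ENDPOINT = "http://example.org/rasdaman/ows"   # hypothetical WCS/WCPS endpoint

# Ask for a small spatial subset of one (hypothetical) CRISM coverage as CSV.
wcps_query = (
    'for c in (CRISM_FRT_EXAMPLE) '
    'return encode(c[Lat(-4.70:-4.60), Long(137.30:137.45)], "csv")'
)

response = requests.get(
    ENDPOINT,
    params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",   # WCPS extension of WCS; name assumed
        "query": wcps_query,
    },
    timeout=60,
)
response.raise_for_status()
print(response.text[:200])               # first part of the returned CSV
```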

  12. Providing web servers and training in Bioinformatics: 2010 update on the Bioinformatics Links Directory.

    PubMed

    Brazas, Michelle D; Yamada, Joseph T; Ouellette, B F Francis

    2010-07-01

    The Links Directory at Bioinformatics.ca continues its collaboration with Nucleic Acids Research to jointly publish and compile a freely accessible, online collection of tools, databases and resource materials for bioinformatics and molecular biology research. The July 2010 Web Server issue of Nucleic Acids Research adds an additional 115 web server tools and 7 updates to the directory at http://bioinformatics.ca/links_directory/, bringing the total number of servers listed close to an impressive 1500 links. The Bioinformatics Links Directory represents an excellent community resource for locating bioinformatic tools and databases to aid one's research, and in this context bioinformatic education needs and initiatives are discussed. A complete list of all links featured in this Nucleic Acids Research 2010 Web Server issue can be accessed online at http://bioinformatics.ca/links_directory/narweb2010/. The 2010 update of the Bioinformatics Links Directory, which includes the Web Server list and summaries, is also available online at the Nucleic Acids Research website, http://nar.oxfordjournals.org/.

  13. How Public Is the Web?: Robots, Access, and Scholarly Communication.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Rosenbaum, Howard

    1998-01-01

    Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
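
    For readers unfamiliar with the Robot Exclusion Protocol, the sketch below shows how a compliant crawler consults a site's robots.txt before fetching pages, using Python's standard urllib.robotparser; the site and user-agent names are hypothetical.

```python
# A compliant crawler fetches /robots.txt and checks each URL against it
# before crawling. Host and user agent below are made-up examples.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example-university.edu/robots.txt")
rp.read()  # download and parse the site's robots.txt

for url in [
    "https://www.example-university.edu/",
    "https://www.example-university.edu/private/reports/",
]:
    allowed = rp.can_fetch("ExampleSearchBot", url)
    print(f"{url} -> {'allowed' if allowed else 'disallowed'}")
```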

  14. Web server with ATMEGA 2560 microcontroller

    NASA Astrophysics Data System (ADS)

    Răduca, E.; Ungureanu-Anghel, D.; Nistor, L.; Haţiegan, C.; Drăghici, S.; Chioncel, C.; Spunei, E.; Lolea, R.

    2016-02-01

    This paper presents the design and construction of a Web server to remotely command, control and monitor a variety of industrial or personal equipment and/or sensors. The server runs user-written software and can work with many types of operating system. The authors implemented the Web server on two boards, a microcontroller (uC) board and a network board. The source code was written in the open-source Arduino 1.0.5 environment.

  15. The Matpar Server on the HP Exemplar

    NASA Technical Reports Server (NTRS)

    Springer, Paul

    2000-01-01

    This presentation reviews the design of Matlab extensions for parallel processing on a parallel system. Matlab was found to be too slow on many large problems, and with the Next Generation Space Telescope requiring greater capability, work began in early 1996 on parallel extensions to Matlab, called Matpar. This presentation reviews the architecture, the functionality, and the design of Matpar. The design utilizes a client/server strategy, with the client code written in C and the object-oriented server code written in C++. The client/server approach for Matpar provides ease of use and good speed.

  16. Integrating climate data management and access with the Unified Access Framework, a GEO-IDE project

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Casey, K. S.; Habermann, T.; Hankin, S. C.; McCulloch, L.; McDonald, K. R.; Mendelssohn, R.; Rutledge, G. K.; Signell, R. P.

    2010-12-01

    Insufficiently integrated data management and access systems are a major problem that data managers, scientists and users encounter when trying to serve, locate or use climate data. This situation is a reflection of technology management and decision-making strategies of the past that have tended to fragment data management, rather than to unify it. Lines of funding have traditionally been matched to observing systems: satellites, ships, etc. and data life cycle phases: collection/measurement, real-time applications, climate analysis, archive, etc. Data management has been considered to be "owned" by the observing system element or the function. Unfortunately, this fragmented approach to data management promotes individualized solutions, often resulting in the creation of non-interoperable data formats and protocols. In this presentation, we will be showcasing how the UAF project, implementing several current de facto standards, is attempting to overcome the hindrances of non-integrated data management and access. The standards involved include netCDF, which provides the abstract data model, software libraries and a persistent binary format; the Climate and Forecast (CF) metadata conventions; the OPeNDAP protocol for web transport of data subsets; THREDDS XML catalogs which provide a distributed topology connecting data suppliers; and an OGC compatibility layer that provides access to the grids through WMS and WCS. We will be discussing the efforts to create a single-entry catalog showcasing vast amounts of data resources, from government as well as non-government sources. We’ll also be discussing the array of clients which are able to tap into this vast catalog and deliver data and data products seamlessly to the user, including Live Access Server (LAS), Environmental Research Division's Data Access Program (ERDDAP), Matlab and the Repository for Archiving, Managing and Accessing Diverse Data (RAMADDA).
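
    As an illustration of the kind of integrated access this stack enables, the sketch below subsets a CF/netCDF dataset remotely through an OPeNDAP endpoint with the netCDF4 library (assumed to be built with OPeNDAP support); the URL and variable names are hypothetical.

```python
# Remote subsetting of a CF/netCDF dataset served via THREDDS/OPeNDAP.
from netCDF4 import Dataset

# Hypothetical THREDDS/OPeNDAP endpoint for a gridded sea surface temperature product.
url = "http://example.org/thredds/dodsC/uaf/sst_monthly.nc"

with Dataset(url) as ds:                 # only metadata is read at open time
    sst = ds.variables["sst"]            # CF-compliant variable (assumed name)
    # Slicing pulls just this slab over the network, not the whole file.
    recent_patch = sst[-1, 100:110, 200:210]
    print(ds.variables["time"].units, recent_patch.shape)
```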

  17. The Widest Practicable Dissemination: The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning [...] its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  18. The widest practicable dissemination: The NASA technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to 'provide for the widest practicable and appropriate dissemination of information concerning...its activities and the results thereof.' The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial six-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  19. The FELICIA bulletin board system and the IRBIS anonymous FTP server: Computer security information sources for the DOE community. CIAC-2302

    SciTech Connect

    Orvis, W.J.

    1993-11-03

    The Computer Incident Advisory Capability (CIAC) operates two information servers for the DOE community, FELICIA (formerly FELIX) and IRBIS. FELICIA is a computer Bulletin Board System (BBS) that can be accessed by telephone with a modem. IRBIS is an anonymous ftp server that can be accessed on the Internet. Both of these servers contain all of the publicly available CIAC, CERT, NIST, and DDN bulletins, virus descriptions, the VIRUS-L moderated virus bulletin board, copies of public domain and shareware virus-detection/protection software, and copies of useful public domain and shareware utility programs. This guide describes how to connect to these systems and obtain files from them.
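
    Anonymous FTP retrieval of the kind IRBIS offered can be sketched with Python's standard ftplib; the host name, directory layout and file names below are hypothetical, since the original servers are long retired.

```python
# Anonymous FTP download of a security bulletin (illustrative paths only).
from ftplib import FTP

with FTP("irbis.example.gov") as ftp:          # hypothetical host
    ftp.login()                                # anonymous login, no credentials
    ftp.cwd("pub/ciac/bulletins")              # hypothetical directory layout
    print(ftp.nlst()[:10])                     # list the first few bulletins
    with open("ciac-bulletin.txt", "wb") as fh:
        ftp.retrbinary("RETR ciac-2302.txt", fh.write)
```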

  20. The network-enabled optimization system server

    SciTech Connect

    Mesnier, M.P.

    1995-08-01

    Mathematical optimization is a technology under constant change and advancement, drawing upon the most efficient and accurate numerical methods to date. Further, these methods can be tailored for a specific application or generalized to accommodate a wider range of problems. This perpetual change creates an ever-growing field, one that is often difficult to stay abreast of. Hence the impetus behind the Network-Enabled Optimization System (NEOS) server, which aims to provide users, both novice and expert, with a guided tour through the expanding world of optimization. The NEOS server is responsible for bridging the gap between users and the optimization software they seek. More specifically, the NEOS server will accept optimization problems over the Internet and return a solution to the user either interactively or by e-mail. This paper discusses the current implementation of the server.

  1. EarthServer: Information Retrieval and Query Language

    NASA Astrophysics Data System (ADS)

    Perperis, Thanassis; Koltsida, Panagiota; Kakaletris, George

    2013-04-01

    Establishing open, unified, seamless access and ad-hoc analytics on cross-disciplinary, multi-source, multi-dimensional, spatiotemporal Earth Science data of extreme size and their supporting metadata are the main challenges of the EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program. One of EarthServer's main objectives is to provide users with higher level coverage and metadata search, retrieval and processing capabilities to multi-disciplinary Earth Science data. Six Lighthouse Applications are being established, each one providing access to Cryospheric, Airborne, Atmospheric, Geology, Oceanography and Planetary science raster data repositories through strictly WCS 2.0 standard-based service endpoints. EarthServer's information retrieval subsystem aims towards exploiting the WCS endpoints through a physically and logically distributed service-oriented architecture, foreseeing the collaboration of several standard-compliant services, capable of exploiting modern large grid and cloud infrastructures and of dynamically responding to the availability and capabilities of underlying resources. Towards furthering technology for integrated, coherent service provision based on WCS and WCPS, the concept of a query language (QL) unifying coverage and metadata processing and retrieval is introduced. EarthServer's information retrieval subsystem receives QL requests involving high volumes of all Earth Science data categories, executes them on the services that reside on the infrastructure and sends the results back to the requester through a high performance pipeline. In this contribution we briefly discuss EarthServer's service-oriented coverage data and metadata search and retrieval architecture and further elaborate on the potentials of EarthServer's Query Language, called xWCPS (XQuery compliant WCPS). xWCPS aims towards merging the path that the two widely adopted standards (W3C XQuery, OGC WCPS) have paved, into a
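
    Since the Lighthouse Applications expose standard WCS 2.0 endpoints with the WCPS processing extension, a client can already exercise them with a plain WCPS query. The Python sketch below is a hedged illustration only: the endpoint URL and coverage name are hypothetical placeholders, and the ProcessCoverages request with a "query" parameter follows common rasdaman/petascope usage, which individual services may vary.

      # Hedged sketch: send a WCPS query to a WCS 2.0 endpoint and save the encoded result.
      # The endpoint URL and coverage identifier are hypothetical placeholders.
      import requests

      WCS_ENDPOINT = "http://example.org/rasdaman/ows"   # hypothetical service URL
      wcps_query = 'for c in (OceanTemperature) return encode(c, "image/png")'

      params = {
          "service": "WCS",
          "version": "2.0.1",
          "request": "ProcessCoverages",   # WCPS extension operation (assumed to be enabled)
          "query": wcps_query,
      }

      resp = requests.get(WCS_ENDPOINT, params=params, timeout=60)
      resp.raise_for_status()
      with open("coverage.png", "wb") as fh:
          fh.write(resp.content)           # the coverage slice encoded as PNG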

  2. RCD+: Fast loop modeling server.

    PubMed

    López-Blanco, José Ramón; Canosa-Valls, Alejandro Jesús; Li, Yaohang; Chacón, Pablo

    2016-07-01

    Modeling loops is a critical and challenging step in protein modeling and prediction. We have developed a quick online service (http://rcd.chaconlab.org) for ab initio loop modeling combining a coarse-grained conformational search with a full-atom refinement. Our original Random Coordinate Descent (RCD) loop closure algorithm has been greatly improved to enrich the sampling distribution towards near-native conformations. These improvements include a new workflow optimization, MPI-parallelization and fast backbone angle sampling based on neighbor-dependent Ramachandran probability distributions. The server starts by efficiently searching the vast conformational space from only the loop sequence information and the environment atomic coordinates. The generated closed loop models are subsequently ranked using a fast distance-orientation dependent energy filter. Top-ranked loops are refined with the Rosetta energy function to obtain accurate all-atom predictions that can be interactively inspected in a user-friendly web interface. Using standard benchmarks, the average root mean squared deviation (RMSD) is 0.8 and 1.4 Å for 8- and 12-residue loops, respectively, in the challenging modeling scenario in which the side chains of the loop environment are fully remodeled. These results are not only very competitive with those obtained with public state-of-the-art methods, but they are also obtained ∼10-fold faster. PMID:27151199

  3. RCD+: Fast loop modeling server

    PubMed Central

    López-Blanco, José Ramón; Canosa-Valls, Alejandro Jesús; Li, Yaohang; Chacón, Pablo

    2016-01-01

    Modeling loops is a critical and challenging step in protein modeling and prediction. We have developed a quick online service (http://rcd.chaconlab.org) for ab initio loop modeling combining a coarse-grained conformational search with a full-atom refinement. Our original Random Coordinate Descent (RCD) loop closure algorithm has been greatly improved to enrich the sampling distribution towards near-native conformations. These improvements include a new workflow optimization, MPI-parallelization and fast backbone angle sampling based on neighbor-dependent Ramachandran probability distributions. The server starts by efficiently searching the vast conformational space from only the loop sequence information and the environment atomic coordinates. The generated closed loop models are subsequently ranked using a fast distance-orientation dependent energy filter. Top-ranked loops are refined with the Rosetta energy function to obtain accurate all-atom predictions that can be interactively inspected in a user-friendly web interface. Using standard benchmarks, the average root mean squared deviation (RMSD) is 0.8 and 1.4 Å for 8- and 12-residue loops, respectively, in the challenging modeling scenario in which the side chains of the loop environment are fully remodeled. These results are not only very competitive with those obtained with public state-of-the-art methods, but they are also obtained ∼10-fold faster. PMID:27151199

  4. Using servers to enhance control system capability

    SciTech Connect

    M. Bickley; B.A. Bowling; D.A. Bryan; J. van Zeijts; K.S. White; S. Witherspoon

    1999-03-01

    Many traditional control systems include a distributed collection of front end machines to control hardware. Back end tools are used to view, modify and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data, and consolidation of functionality. In many cases, data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal, and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date.
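
    As a purely illustrative sketch of the middleware pattern described above (not the Jefferson Lab implementation, whose control-system interfaces are not detailed here), the Python fragment below shows a server that caches raw front-end signals, derives a new signal from them, and republishes it so that back-end tools can treat it like any other signal; all signal names and values are hypothetical.

      # Illustrative middleware pattern only; signal names and the pub/sub layer are hypothetical.
      from typing import Callable, Dict, List

      class SignalServer:
          """Caches raw signal values and publishes derived signals to subscribers."""
          def __init__(self) -> None:
              self.values: Dict[str, float] = {}
              self.subscribers: Dict[str, List[Callable[[float], None]]] = {}

          def publish(self, name: str, value: float) -> None:
              self.values[name] = value
              for callback in self.subscribers.get(name, []):
                  callback(value)

          def subscribe(self, name: str, callback: Callable[[float], None]) -> None:
              self.subscribers.setdefault(name, []).append(callback)

      server = SignalServer()

      def update_beam_power(_: float) -> None:
          # Derived signal: recomputed whenever either raw input changes.
          if "beam:current" in server.values and "beam:energy" in server.values:
              server.publish("beam:power", server.values["beam:current"] * server.values["beam:energy"])

      server.subscribe("beam:current", update_beam_power)
      server.subscribe("beam:energy", update_beam_power)

      server.publish("beam:energy", 5.5)    # hypothetical raw signal from a front-end machine
      server.publish("beam:current", 0.1)   # triggers publication of the derived "beam:power" signal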

  5. USING SERVERS TO ENHANCE CONTROL SYSTEM CAPABILITY.

    SciTech Connect

    BICKLEY,M.; BOWLING,B.A.; BRYAN,D.A.; ZEIJTS,J.; WHITE,K.S.; WITHERSPOON,S.

    1999-03-29

    Many traditional control systems include a distributed collection of front end machines to control hardware. Back end tools are used to view, modify, and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data, and consolidation of functionality. In many cases data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal, and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date.

  6. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
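
    Because the server speaks the standard OGC WMS protocol, any client can retrieve a rendered Lunar map with an ordinary GetMap request. The Python sketch below is a hedged example: the service URL and layer name are hypothetical placeholders, and the actual layers and coordinate systems should be read from the server's GetCapabilities response.

      # Hedged example of a WMS 1.1.1 GetMap request; URL and layer name are hypothetical.
      import requests

      WMS_URL = "http://example.org/onmoon/wms"    # placeholder for the OnMoon service endpoint
      params = {
          "service": "WMS",
          "request": "GetMap",
          "version": "1.1.1",
          "layers": "lunar_basemap",               # hypothetical layer; list real ones via GetCapabilities
          "styles": "",
          "srs": "EPSG:4326",                      # Moon-specific CRS codes may also be offered
          "bbox": "-180,-90,180,90",
          "width": "1024",
          "height": "512",
          "format": "image/jpeg",
      }

      resp = requests.get(WMS_URL, params=params, timeout=120)
      resp.raise_for_status()
      with open("moon_map.jpg", "wb") as fh:
          fh.write(resp.content)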

  7. The PhyloPythiaS Web Server for Taxonomic Assignment of Metagenome Sequences

    PubMed Central

    Patil, Kaustubh Raosaheb; Roune, Linus; McHardy, Alice Carolyn

    2012-01-01

    Metagenome sequencing is becoming common and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and of a microbial community from cow rumen. PMID:22745671

  8. The EROS-2 Light Curve Server (ELCS), a tool for stellar astrophysics and some associated results

    NASA Astrophysics Data System (ADS)

    Marquette, J. B.; Lesquoy, É.; Le Fèvre, J. P.; Tisserand, P.; Beaulieu, J. P.; Milsztajn, A.

    2007-07-01

    Between July 1996 and March 2003 the EROS-2 (Expérience de Recherche d'Objets Sombres) collaboration conducted a large photometric survey mainly towards the Magellanic Clouds and the Galactic centre, in order to detect baryonic dark matter in the Halo via the microlensing effect. While it is now recognized that massive compact objects are not the major component of the Halo, tens of millions of light curves are now available to the stellar community. A specific server hosted by a CEA machine has been developed in order to provide public access to these data. In its present configuration the so-called ELCS server contains more than 32 million Magellanic Cloud objects. The server is presented together with recent results of data mining in the EROS-2 database, including the detection of the first R Coronae Borealis stars in the SMC and a systematic search for double-mode objects.

  9. The RNAz web server: prediction of thermodynamically stable and evolutionarily conserved RNA structures.

    PubMed

    Gruber, Andreas R; Neuböck, Richard; Hofacker, Ivo L; Washietl, Stefan

    2007-07-01

    Many non-coding RNA genes and cis-acting regulatory elements of mRNAs contain RNA secondary structures that are critical for their function. Such functional RNAs can be predicted on the basis of thermodynamic stability and evolutionary conservation. We present a web server that uses the RNAz algorithm to detect functional RNA structures in multiple alignments of nucleotide sequences. The server provides access to a complete and fully automatic analysis pipeline that allows users not only to analyze single alignments in a variety of formats, but also to conduct complex screens of large genomic regions. Results are presented on a website that is illustrated by various structure representations and can be downloaded for local viewing. The web server is available at: rna.tbi.univie.ac.at/RNAz.

  10. The PhyloPythiaS web server for taxonomic assignment of metagenome sequences.

    PubMed

    Patil, Kaustubh Raosaheb; Roune, Linus; McHardy, Alice Carolyn

    2012-01-01

    Metagenome sequencing is becoming common and there is an increasing need for easily accessible tools for data analysis. An essential step is the taxonomic classification of sequence fragments. We describe a web server for the taxonomic assignment of metagenome sequences with PhyloPythiaS. PhyloPythiaS is a fast and accurate sequence composition-based classifier that utilizes the hierarchical relationships between clades. Taxonomic assignments with the web server can be made with a generic model, or with sample-specific models that users can specify and create. Several interactive visualization modes and multiple download formats allow quick and convenient analysis and downstream processing of taxonomic assignments. Here, we demonstrate usage of our web server by taxonomic assignment of metagenome samples from an acidophilic biofilm community of an acid mine and of a microbial community from cow rumen.

  11. Secure data aggregation in heterogeneous and disparate networks using stand off server architecture

    NASA Astrophysics Data System (ADS)

    Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.

    2009-04-01

    The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime and anyone access for service providers and customers alike. The world is fraught with stifling inequalities, both from an economic and a socio-political perspective. The net result has been large capability gaps between various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. When integrating remote business locations with mainstream operations, issues such as the lack of application-level support, limited computational capabilities, communication limitations, and legal requirements pose a serious impediment, complicating integration that must not violate the organizations' security requirements. Commonly used techniques such as IPSec, tunneling, and Secure Sockets Layer may not always be technically or economically feasible. This paper addresses such security issues by introducing an intermediate server between the corporate central server and remote sites, called a stand-off server. We present techniques such as break-before-make connection, breaking the connection after transfer, and multiple virtual machine instances with different operating systems, all built on the concept of a stand-off server. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising from weak communication and/or computing links and is simple to implement.

  12. VfoldCPX Server: Predicting RNA-RNA Complex Structure and Stability

    PubMed Central

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA-RNA interactions are essential for genomic RNA dimerization, mRNA splicing, and many RNA-related gene expression and regulation processes. The prediction of the structure and folding stability of RNA-RNA complexes is a problem of significant biological importance and receives substantial interest in the biological community. The VfoldCPX server provides a new web interface to predict the two-dimensional (2D) structures of RNA-RNA complexes from the nucleotide sequences. The VfoldCPX server has several novel advantages including the ability to treat RNAs with tertiary contacts (crossing base pairs) such as loop-loop kissing interactions and the use of physical loop entropy parameters. Based on a partition function-based algorithm, the server enables prediction for structure with and without tertiary contacts. Furthermore, the server outputs a set of energetically stable structures, ranked by their stabilities. The results allow users to gain extensive physical insights into RNA-RNA interactions and their roles in RNA function. The web server is freely accessible at “http://rna.physics.missouri.edu/vfoldCPX”. PMID:27657918

  13. MAP(2.0)3D: a sequence/structure based server for protein engineering.

    PubMed

    Verma, Rajni; Schwaneberg, Ulrich; Roccatano, Danilo

    2012-04-20

    The Mutagenesis Assistant Program (MAP) is a web-based tool to provide statistical analyses of the mutational biases of directed evolution experiments on amino acid substitution patterns. MAP analysis assists protein engineers in the benchmarking of random mutagenesis methods that generate single nucleotide mutations in a codon. Herein, we describe a completely renewed and improved version of the MAP server, the MAP(2.0)3D server, which correlates the generated amino acid substitution patterns to the structural information of the target protein. This correlation aids in the selection of a more suitable random mutagenesis method with specific biases on amino acid substitution patterns. In particular, the new server represents MAP indicators on secondary and tertiary structure and correlates them to specific structural components such as hydrogen bonds, hydrophobic contacts, salt bridges, solvent accessibility, and crystallographic B-factors. Three model proteins (D-amino oxidase, phytase, and N-acetylneuraminic acid aldolase) are used to illustrate the novel capability of the server. MAP(2.0)3D server is available publicly at http://map.jacobs-university.de/map3d.html.

  14. MISTIC: Mutual information server to infer coevolution.

    PubMed

    Simonetti, Franco L; Teppa, Elin; Chernomoretz, Ariel; Nielsen, Morten; Marino Buslje, Cristina

    2013-07-01

    MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within a MSA (multiple sequence alignment) and a complete analysis tool for Mutual Information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface to explore and characterize the MI network is provided. Several tools are offered for selecting subsets of nodes from the network for visualization. Node coloring can be set to match different attributes, such as conservation, cumulative MI, proximity MI and secondary structure. Finally, a zip file containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique to any other publicly available web server. In particular, the use of circos representation of MI networks and the visualization of the cumulative MI and proximity MI concepts is novel.
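
    For reference, the mutual information between two alignment columns i and j that underlies the MI network is the standard quantity below (a textbook definition; MISTIC's actual estimator may additionally apply sequence weighting and corrections not reproduced here):

      MI(i,j) = \sum_{a,b} p_{ij}(a,b)\,\log\frac{p_{ij}(a,b)}{p_i(a)\,p_j(b)}

    where p_{ij}(a,b) is the frequency with which amino acids a and b co-occur at columns i and j of the MSA, and p_i(a), p_j(b) are the corresponding single-column frequencies.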

  15. Swiss EMBnet node web server.

    PubMed

    Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C Victor

    2003-07-01

    EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

  16. RaptorX-Property: a web server for protein structure property prediction.

    PubMed

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-01

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.
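
    For clarity, the Q3 (and, analogously, Q8) accuracy quoted above is the standard per-residue agreement measure; stated in hedged, textbook form:

      Q_3 = \frac{100\%}{N} \sum_{k=1}^{N} \mathbf{1}\!\left[\hat{s}_k = s_k\right]

    where N is the number of residues, s_k is the observed three-state label (helix, strand or coil) of residue k, and \hat{s}_k is the predicted one.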

  17. RaptorX-Property: a web server for protein structure property prediction

    PubMed Central

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-01-01

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence–structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. PMID:27112573

  18. RaptorX-Property: a web server for protein structure property prediction.

    PubMed

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-01

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. PMID:27112573

  19. System level traffic shaping in disk servers with heterogeneous protocols

    NASA Astrophysics Data System (ADS)

    Cano, Eric; Kruse, Daniele Francesco

    2014-06-01

    Disk access and tape migrations compete for network bandwidth in CASTOR's disk servers, over various protocols: RFIO, Xroot, root and GridFTP. As there are a limited number of tape drives, it is important to keep them busy all the time, at their nominal speed. With potentially hundreds of user read streams per server, the bandwidth for the tape migrations has to be guaranteed at a controlled level, rather than the fair share the system gives by default. Xroot provides a prioritization mechanism, but using it implies moving exclusively to the Xroot protocol, which is not possible in the short to mid-term time frame, as users are equally using all protocols. The only commonality of all these protocols is their use of TCP/IP. We investigated the Linux kernel traffic shaper to control TCP/IP bandwidth. The performance and limitations of the traffic shaper have been understood in a test environment, and a satisfactory working point has been found for production. Notably, the negative impact of TCP offload engines on traffic shaping, and the limitations on the length of the traffic-shaping rules, were discovered and measured. The traffic shaping is now successfully deployed in the CASTOR production systems at CERN. This system-level approach could easily be transposed to other environments.
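
    As a hedged illustration of the system-level approach (not the actual CASTOR configuration), the Python sketch below drives the Linux traffic shaper through standard tc commands, building an HTB class hierarchy that guarantees a fixed rate to tape-migration traffic identified by a destination port; the interface name, rates and port are hypothetical, and the script must run with root privileges.

      # Hedged sketch: shape outbound traffic with the Linux HTB qdisc via the standard `tc` tool.
      # Interface, rates and the port used to classify tape-migration traffic are hypothetical.
      import subprocess

      IFACE = "eth0"            # hypothetical network interface
      MIGRATION_PORT = "5001"   # hypothetical destination port of tape-migration streams

      def tc(*args: str) -> None:
          """Run one tc command, raising if it fails."""
          subprocess.run(["tc", *args], check=True)

      # Root HTB qdisc; unclassified traffic falls into class 1:20.
      tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "20")

      # Parent class bounded by the link capacity.
      tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:1", "htb", "rate", "10gbit")

      # Guaranteed rate for migrations; user streams share the rest but may borrow up to the ceiling.
      tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", "1:10", "htb", "rate", "4gbit", "ceil", "10gbit")
      tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", "1:20", "htb", "rate", "6gbit", "ceil", "10gbit")

      # Send migration traffic, matched by destination port, into the guaranteed class.
      tc("filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:", "prio", "1",
         "u32", "match", "ip", "dport", MIGRATION_PORT, "0xffff", "flowid", "1:10")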

  20. Engineering Proteins for Thermostability with iRDP Web Server

    PubMed Central

    Ghanate, Avinash; Ramasamy, Sureshkumar; Suresh, C. G.

    2015-01-01

    Engineering protein molecules with desired structure and biological functions has been an elusive goal. Development of industrially viable proteins with improved properties such as stability, catalytic activity and altered specificity by modifying the structure of an existing protein has widely been targeted through rational protein engineering. Although a range of factors contributing to thermal stability have been identified and widely researched, the in silico implementation of these as strategies directed towards enhancement of protein stability has not yet been explored extensively. A wide range of structural analysis tools is currently available for in silico protein engineering. However, these tools concentrate on only a limited number of factors or individual protein structures, resulting in cumbersome and time-consuming analysis. The iRDP web server presented here provides a unified platform comprising the iCAPS, iStability and iMutants modules. Each module addresses different facets of effective rational engineering of proteins aiming towards enhanced stability. While iCAPS aids in selection of the target protein based on factors contributing to structural stability, iStability uniquely offers in silico implementation of known thermostabilization strategies in proteins for identification and stability prediction of potential stabilizing mutation sites. iMutants aims to assess mutants based on changes in the local interaction network and the degree of residue conservation at the mutation sites. Each module was validated using an extensively diverse dataset. The server is freely accessible at http://irdp.ncl.res.in and has no login requirements. PMID:26436543

  1. Engineering Proteins for Thermostability with iRDP Web Server.

    PubMed

    Panigrahi, Priyabrata; Sule, Manas; Ghanate, Avinash; Ramasamy, Sureshkumar; Suresh, C G

    2015-01-01

    Engineering protein molecules with desired structure and biological functions has been an elusive goal. Development of industrially viable proteins with improved properties such as stability, catalytic activity and altered specificity by modifying the structure of an existing protein has widely been targeted through rational protein engineering. Although a range of factors contributing to thermal stability have been identified and widely researched, the in silico implementation of these as strategies directed towards enhancement of protein stability has not yet been explored extensively. A wide range of structural analysis tools is currently available for in silico protein engineering. However, these tools concentrate on only a limited number of factors or individual protein structures, resulting in cumbersome and time-consuming analysis. The iRDP web server presented here provides a unified platform comprising the iCAPS, iStability and iMutants modules. Each module addresses different facets of effective rational engineering of proteins aiming towards enhanced stability. While iCAPS aids in selection of the target protein based on factors contributing to structural stability, iStability uniquely offers in silico implementation of known thermostabilization strategies in proteins for identification and stability prediction of potential stabilizing mutation sites. iMutants aims to assess mutants based on changes in the local interaction network and the degree of residue conservation at the mutation sites. Each module was validated using an extensively diverse dataset. The server is freely accessible at http://irdp.ncl.res.in and has no login requirements.

  2. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  3. Network time synchronization servers at the US Naval Observatory

    NASA Technical Reports Server (NTRS)

    Schmidt, Richard E.

    1995-01-01

    Responding to an increased demand for reliable, accurate time on the Internet and Milnet, the U.S. Naval Observatory Time Service has established the network time servers tick.usno.navy.mil and tock.usno.navy.mil. The system clocks of these HP9000/747i industrial workstations are synchronized to within a few tens of microseconds of USNO Master Clock 2 using VMEbus IRIG-B interfaces. Redundant time code is available from a VMEbus GPS receiver. UTC(USNO) is provided over the network via a number of protocols, including the Network Time Protocol (NTP) (DARPA Network Working Group Report RFC-1305), the Daytime Protocol (RFC-867), and the Time Protocol (RFC-868). Access to USNO network time services is presently open and unrestricted. An overview of USNO time services and results of LAN and WAN time synchronization tests will be presented.
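
    The sketch below shows, in hedged form, the minimal SNTP exchange a client performs against such a server: a 48-byte request on UDP port 123 followed by extraction of the transmit timestamp from the reply (the constant 2208988800 s converts from the NTP epoch of 1900 to the Unix epoch of 1970). It illustrates the protocol only and is not a substitute for a full NTP client.

      # Minimal SNTP client sketch; adequate for illustration, not for precise timekeeping.
      import socket
      import struct
      import time

      NTP_SERVER = "tick.usno.navy.mil"
      NTP_EPOCH_OFFSET = 2208988800      # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

      # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte, rest zero.
      request = b"\x1b" + 47 * b"\0"

      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
          sock.settimeout(5.0)
          sock.sendto(request, (NTP_SERVER, 123))
          reply, _ = sock.recvfrom(512)

      # Transmit timestamp (seconds and fraction since 1900) sits at byte offset 40 of the reply.
      seconds, fraction = struct.unpack("!II", reply[40:48])
      unix_time = seconds - NTP_EPOCH_OFFSET + fraction / 2**32
      print("Server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(unix_time)))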

  4. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  5. mtDNA-Server: next-generation sequencing data analysis of human mitochondrial DNA in the cloud

    PubMed Central

    Weissensteiner, Hansi; Forer, Lukas; Fuchsberger, Christian; Schöpf, Bernd; Kloss-Brandstätter, Anita; Specht, Günther; Kronenberg, Florian; Schönherr, Sebastian

    2016-01-01

    Next generation sequencing (NGS) allows investigating mitochondrial DNA (mtDNA) characteristics such as heteroplasmy (i.e. intra-individual sequence variation) to a higher level of detail. While several pipelines for analyzing heteroplasmies exist, issues in usability, accuracy of results and interpreting final data limit their usage. Here we present mtDNA-Server, a scalable web server for the analysis of mtDNA studies of any size with a special focus on usability as well as reliable identification and quantification of heteroplasmic variants. The mtDNA-Server workflow includes parallel read alignment, heteroplasmy detection, artefact or contamination identification, variant annotation as well as several quality control metrics, often neglected in current mtDNA NGS studies. All computational steps are parallelized with Hadoop MapReduce and executed graphically with Cloudgene. We validated the underlying heteroplasmy and contamination detection model by generating four artificial sample mix-ups on two different NGS devices. Our evaluation data shows that mtDNA-Server detects heteroplasmies and artificial recombinations down to the 1% level with perfect specificity and outperforms existing approaches regarding sensitivity. mtDNA-Server is currently able to analyze the 1000G Phase 3 data (n = 2,504) in less than 5 h and is freely accessible at https://mtdna-server.uibk.ac.at. PMID:27084948

  6. mtDNA-Server: next-generation sequencing data analysis of human mitochondrial DNA in the cloud.

    PubMed

    Weissensteiner, Hansi; Forer, Lukas; Fuchsberger, Christian; Schöpf, Bernd; Kloss-Brandstätter, Anita; Specht, Günther; Kronenberg, Florian; Schönherr, Sebastian

    2016-07-01

    Next generation sequencing (NGS) allows investigating mitochondrial DNA (mtDNA) characteristics such as heteroplasmy (i.e. intra-individual sequence variation) to a higher level of detail. While several pipelines for analyzing heteroplasmies exist, issues in usability, accuracy of results and interpreting final data limit their usage. Here we present mtDNA-Server, a scalable web server for the analysis of mtDNA studies of any size with a special focus on usability as well as reliable identification and quantification of heteroplasmic variants. The mtDNA-Server workflow includes parallel read alignment, heteroplasmy detection, artefact or contamination identification, variant annotation as well as several quality control metrics, often neglected in current mtDNA NGS studies. All computational steps are parallelized with Hadoop MapReduce and executed graphically with Cloudgene. We validated the underlying heteroplasmy and contamination detection model by generating four artificial sample mix-ups on two different NGS devices. Our evaluation data shows that mtDNA-Server detects heteroplasmies and artificial recombinations down to the 1% level with perfect specificity and outperforms existing approaches regarding sensitivity. mtDNA-Server is currently able to analyze the 1000G Phase 3 data (n = 2,504) in less than 5 h and is freely accessible at https://mtdna-server.uibk.ac.at. PMID:27084948

  7. PiRaNhA: a server for the computational prediction of RNA-binding residues in protein sequences

    PubMed Central

    Murakami, Yoichi; Spriggs, Ruth V.; Nakamura, Haruki; Jones, Susan

    2010-01-01

    The PiRaNhA web server is a publicly available online resource that automatically predicts the location of RNA-binding residues (RBRs) in protein sequences. The goal of functional annotation of sequences in the field of RNA binding is to provide predictions of high accuracy that require only small numbers of targeted mutations for verification. The PiRaNhA server uses a support vector machine (SVM), with position-specific scoring matrices, residue interface propensity, predicted residue accessibility and residue hydrophobicity as features. The server allows the submission of up to 10 protein sequences, and the predictions for each sequence are provided on a web page and via email. The prediction results are provided in sequence format with predicted RBRs highlighted, in text format with the SVM threshold score indicated and as a graph which enables users to quickly identify those residues above any specific SVM threshold. The graph effectively enables the increase or decrease of the false positive rate. When tested on a non-redundant data set of 42 protein sequences not used in training, the PiRaNhA server achieved an accuracy of 85%, specificity of 90% and a Matthews correlation coefficient of 0.41 and outperformed other publicly available servers. The PiRaNhA prediction server is freely available at http://www.bioinformatics.sussex.ac.uk/PIRANHA. PMID:20507911

  8. Network characteristics for server selection in online games

    NASA Astrophysics Data System (ADS)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provide statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability: latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
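
    The group-selection criterion can be made concrete with a small, hedged sketch: given measured round-trip times from each group member to each candidate server, keep only servers at which every member stays below a playability bound, then pick the one with the smallest latency spread across the group (the fairness criterion). The threshold and measurements below are illustrative only.

      # Hedged sketch of group-aware server selection: every member must be under a latency
      # bound, and among feasible servers the latency spread (unfairness) is minimized.
      from typing import Dict, List, Optional

      def pick_server(latencies: Dict[str, List[float]], max_latency_ms: float = 100.0) -> Optional[str]:
          """latencies maps server name -> per-player round-trip times in milliseconds."""
          feasible = {server: rtts for server, rtts in latencies.items() if max(rtts) <= max_latency_ms}
          if not feasible:
              return None
          # Fairness: smallest gap between the best- and worst-connected group member.
          return min(feasible, key=lambda s: max(feasible[s]) - min(feasible[s]))

      measured = {
          "server-a": [35.0, 60.0, 95.0],
          "server-b": [80.0, 85.0, 90.0],
          "server-c": [40.0, 150.0, 220.0],
      }
      print(pick_server(measured))   # -> "server-b": everyone playable and latencies closest together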

  9. Implementing bioinformatic workflows within the bioextract server

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  10. Client/Server Architecture Promises Radical Changes.

    ERIC Educational Resources Information Center

    Freeman, Grey; York, Jerry

    1991-01-01

    This article discusses the emergence of the client/server paradigm for the delivery of computer applications, its emergence in response to the proliferation of microcomputers and local area networks, the applicability of the model in academic institutions, and its implications for college campus information technology organizations. (Author/DB)

  11. Serving database information using a flexible server in a three tier architecture

    SciTech Connect

    Lee Lueking et al.

    2003-08-11

    The D0 experiment at Fermilab relies on a central Oracle database for storing all detector calibration information. Access to this data is needed by hundreds of physics applications distributed worldwide. In order to meet the demands of these applications from scarce resources, we have created a distributed system that isolates the user applications from the database facilities. This system, known as the Database Application Network (DAN) operates as the middle tier in a three tier architecture. A DAN server employs a hierarchical caching scheme and database connection management facility that limits access to the database resource. The modular design allows for caching strategies and database access components to be determined by runtime configuration. To solve scalability problems, a proxy database component allows for DAN servers to be arranged in a hierarchy. Also included is an event based monitoring system that is currently being used to collect statistics for performance analysis and problem diagnosis. DAN servers are currently implemented as a Python multithreaded program using CORBA for network communications and interface specification. The requirement details, design, and implementation of DAN are discussed along with operational experience and future plans.
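
    The hierarchical caching idea behind DAN can be illustrated with a hedged sketch (this is not the D0 code, which used CORBA and a multithreaded server): each tier answers from its own cache when possible and otherwise delegates to its parent, with the root tier being the only component allowed to query the database.

      # Hedged sketch of a hierarchical caching middle tier: each node consults its local
      # cache, then its parent; only the root node queries the backing database.
      from typing import Any, Callable, Dict, Optional

      class CachingTier:
          def __init__(self, fetch_from_db: Optional[Callable[[str], Any]] = None,
                       parent: Optional["CachingTier"] = None) -> None:
              self.cache: Dict[str, Any] = {}
              self.parent = parent
              self.fetch_from_db = fetch_from_db   # set only on the root tier

          def get(self, key: str) -> Any:
              if key in self.cache:                # local cache hit
                  return self.cache[key]
              if self.parent is not None:          # delegate up the hierarchy
                  value = self.parent.get(key)
              else:                                # root: the only tier with database access
                  value = self.fetch_from_db(key)
              self.cache[key] = value              # populate the local cache on the way back
              return value

      # Usage with a stand-in "database" (a plain dict; hypothetical key and payload).
      calibration_db = {"det/calib/2003-08": [1.02, 0.98, 1.00]}
      root = CachingTier(fetch_from_db=calibration_db.__getitem__)
      proxy = CachingTier(parent=root)
      client_tier = CachingTier(parent=proxy)
      print(client_tier.get("det/calib/2003-08"))  # first call walks to the root; later calls hit local caches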

  12. Workload Characterization and Performance Implications of Large-Scale Blog Servers

    SciTech Connect

    Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho; Lee, Joonwon; Seo, Euiseong

    2012-11-01

    With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access to blog articles does not show temporal locality, but is strongly biased towards those posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
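
    For reference, the two size models mentioned above have the standard densities given below (textbook forms; the fitted parameters reported in the paper are not reproduced here). The truncated Pareto, with shape alpha and lower and upper bounds L and H, was used for non-multimedia file sizes, and the log-normal for blog-article sizes:

      f_{\mathrm{Pareto}}(x) = \frac{\alpha L^{\alpha} x^{-\alpha-1}}{1 - (L/H)^{\alpha}}, \qquad L \le x \le H,

      f_{\mathrm{lognormal}}(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu)^{2}}{2\sigma^{2}}\right), \qquad x > 0.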

  13. HMMER web server: 2015 update.

    PubMed

    Finn, Robert D; Clements, Jody; Arndt, William; Miller, Benjamin L; Wheeler, Travis J; Schreiber, Fabian; Bateman, Alex; Eddy, Sean R

    2015-07-01

    The HMMER website, available at http://www.ebi.ac.uk/Tools/hmmer/, provides access to the protein homology search algorithms found in the HMMER software suite. Since the first release of the website in 2011, the search repertoire has been expanded to include the iterative search algorithm, jackhmmer. The continued growth of the target sequence databases means that traditional tabular representations of significant sequence hits can be overwhelming to the user. Consequently, additional ways of presenting homology search results have been developed, allowing them to be summarised according to taxonomic distribution or domain architecture. The taxonomy and domain architecture representations can be used in combination to filter the results according to the needs of a user. Searches can also be restricted prior to submission using a new taxonomic filter, which not only ensures that the results are specific to the requested taxonomic group, but also improves search performance. The repertoire of profile hidden Markov model libraries, which are used for annotation of query sequences with protein families and domains, has been expanded to include the libraries from CATH-Gene3D, PIRSF, Superfamily and TIGRFAMs. Finally, we discuss the relocation of the HMMER webserver to the European Bioinformatics Institute and the potential impact that this will have.

  14. HMMER web server: 2015 update

    PubMed Central

    Finn, Robert D.; Clements, Jody; Arndt, William; Miller, Benjamin L.; Wheeler, Travis J.; Schreiber, Fabian; Bateman, Alex; Eddy, Sean R.

    2015-01-01

    The HMMER website, available at http://www.ebi.ac.uk/Tools/hmmer/, provides access to the protein homology search algorithms found in the HMMER software suite. Since the first release of the website in 2011, the search repertoire has been expanded to include the iterative search algorithm, jackhmmer. The continued growth of the target sequence databases means that traditional tabular representations of significant sequence hits can be overwhelming to the user. Consequently, additional ways of presenting homology search results have been developed, allowing them to be summarised according to taxonomic distribution or domain architecture. The taxonomy and domain architecture representations can be used in combination to filter the results according to the needs of a user. Searches can also be restricted prior to submission using a new taxonomic filter, which not only ensures that the results are specific to the requested taxonomic group, but also improves search performance. The repertoire of profile hidden Markov model libraries, which are used for annotation of query sequences with protein families and domains, has been expanded to include the libraries from CATH-Gene3D, PIRSF, Superfamily and TIGRFAMs. Finally, we discuss the relocation of the HMMER webserver to the European Bioinformatics Institute and the potential impact that this will have. PMID:25943547

  15. SAS: A Secure Aglet Server

    SciTech Connect

    Jean, Evens; Jiao, Yu; Hurson, Ali R.; Potok, Thomas E

    2007-01-01

    Despite the fact that mobile agents have received increasing attention in various research efforts, the use of the paradigm in practical applications has yet to fully emerge. Although infrastructure is available to support the development of mobile agent applications, security concerns act as the primary deterrent against such trends. Numerous studies have been conducted to address the security issues of mobile agents, with a strong focus on the theoretical aspect of the problem. This work attempts to bridge the gap from theory to practice by analyzing the security mechanisms available in Aglet. We herein propose several mechanisms, stemming from theoretical advancements, intended to protect both agents and hosts in order to foster the development of business applications that fully exploit the benefits of agent technology. The proposed mechanisms lay the foundation for the implementation of application-specific protocols with access control, secured communication and the ability to detect tampering of agent data. We demonstrate our contribution through application scenarios of a prototyped Information Retrieval system.

  16. San Mateo County's Server Information Program (S.I.P.): A Community-Based Alcohol Server Training Program.

    ERIC Educational Resources Information Center

    de Miranda, John

    The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…

  17. DMINDA: an integrated web server for DNA motif identification and analyses

    PubMed Central

    Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying

    2014-01-01

    DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important to elucidation of the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. PMID:24753419

  18. AGGRESCAN3D (A3D): server for prediction of aggregation properties of protein structures

    PubMed Central

    Zambrano, Rafael; Jamroz, Michal; Szczasiuk, Agata; Pujols, Jordi; Kmiecik, Sebastian; Ventura, Salvador

    2015-01-01

    Protein aggregation underlies an increasing number of disorders and constitutes a major bottleneck in the development of therapeutic proteins. Our present understanding of the molecular determinants of protein aggregation has crystallized into a series of predictive algorithms to identify aggregation-prone sites. A majority of these methods rely only on sequence. Therefore, they have difficulty predicting the aggregation properties of folded globular proteins, where aggregation-prone sites are often not contiguous in sequence or are buried inside the native structure. The AGGRESCAN3D (A3D) server overcomes these limitations by taking into account the protein structure and the experimental aggregation propensity scale from the well-established AGGRESCAN method. Using the A3D server, the identified aggregation-prone residues can be virtually mutated to design variants with increased solubility, or to test the impact of pathogenic mutations. Additionally, the A3D server makes it possible to take into account the dynamic fluctuations of protein structure in solution, which may influence aggregation propensity. This is possible in the A3D Dynamic Mode, which exploits the CABS-flex approach for fast simulations of the flexibility of globular proteins. The A3D server can be accessed at http://biocomp.chem.uw.edu.pl/A3D/. PMID:25883144

  19. RNAssess--a web server for quality assessment of RNA 3D structures.

    PubMed

    Lukasiak, Piotr; Antczak, Maciej; Ratajczak, Tomasz; Szachniuk, Marta; Popenda, Mariusz; Adamiak, Ryszard W; Blazewicz, Jacek

    2015-07-01

    Nowadays, various methodologies can be applied to model RNA 3D structure. Thus, the plausible quality assessment of 3D models has a fundamental impact on the progress of structural bioinformatics. Here, we present the RNAssess server, a novel tool dedicated to the visual evaluation of RNA 3D models in the context of the known reference structure for a wide range of accuracy levels (from the atomic to the whole-molecule perspective). The proposed server is based on the concept of local neighborhood, defined as a set of atoms observed within a sphere localized around a central atom of a particular residue. A distinctive feature of our server is the ability to perform simultaneous visual analysis of the model-reference structure coherence. RNAssess supports quality assessment by delivering both static and interactive visualizations that allow easy identification of native-like models and/or chosen structural regions of the analyzed molecule. A combination of the results provided by RNAssess allows the analyzed models to be ranked. RNAssess offers a new route to fast and efficient 3D model evaluation suitable for the RNA-Puzzles challenge. The proposed automated tool is implemented as a free web server, open to all users, with a user-friendly interface, and can be accessed at: http://rnassess.cs.put.poznan.pl/. PMID:26068469

  20. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre- press applications and high-end CCD flatbed scanners and drum- scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  1. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    SciTech Connect

    Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene; Chan, A. W. Edith; Anderson, Wayne F.; Thornton, Janet M.

    2013-12-01

    LigSearch is a web server for identifying ligands likely to bind to a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. LigSearch, a web server aimed at predicting ligands that might bind to and stabilize a given protein, has been developed. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  2. Reference-frame-independent quantum-key-distribution server with a telecom tether for an on-chip client.

    PubMed

    Zhang, P; Aungskunsiri, K; Martín-López, E; Wabnig, J; Lobino, M; Nock, R W; Munns, J; Bonneau, D; Jiang, P; Li, H W; Laing, A; Rarity, J G; Niskanen, A O; Thompson, M G; O'Brien, J L

    2014-04-01

    We demonstrate a client-server quantum key distribution (QKD) scheme. Large resources such as the laser and detectors are situated at the server side, which is accessible via telecom fiber to a client requiring only an on-chip polarization rotator that may be integrated into a handheld device. The detrimental effects of unstable fiber birefringence are overcome by employing the reference-frame-independent QKD protocol for polarization qubits in polarization maintaining fiber, where standard QKD protocols fail, as we show for comparison. This opens the way for quantum-enhanced secure communications between companies and members of the general public equipped with handheld mobile devices, via telecom-fiber tethering.

  3. PSSweb: protein structural statistics web server.

    PubMed

    Gaillard, Thomas; Stote, Roland H; Dejaegere, Annick

    2016-07-01

    With the increasing number of protein structures available, there is a need for tools capable of automating the comparison of ensembles of structures, a common requirement in structural biology and bioinformatics. PSSweb is a web server for protein structural statistics. It takes as input an ensemble of PDB files of protein structures, performs a multiple sequence alignment and computes structural statistics for each position of the alignment. Different optional functionalities are proposed: structure superposition, Cartesian coordinate statistics, dihedral angle calculation and statistics, and a cluster analysis based on dihedral angles. An interactive report is generated, containing a summary of the results, tables, figures and 3D visualization of superposed structures. The server is available at http://pssweb.org.

  4. PSSweb: protein structural statistics web server

    PubMed Central

    Gaillard, Thomas; Stote, Roland H.; Dejaegere, Annick

    2016-01-01

    With the increasing number of protein structures available, there is a need for tools capable of automating the comparison of ensembles of structures, a common requirement in structural biology and bioinformatics. PSSweb is a web server for protein structural statistics. It takes as input an ensemble of PDB files of protein structures, performs a multiple sequence alignment and computes structural statistics for each position of the alignment. Different optional functionalities are proposed: structure superposition, Cartesian coordinate statistics, dihedral angle calculation and statistics, and a cluster analysis based on dihedral angles. An interactive report is generated, containing a summary of the results, tables, figures and 3D visualization of superposed structures. The server is available at http://pssweb.org. PMID:27174930

  5. Energy Servers Deliver Clean, Affordable Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    K.R. Sridhar developed a fuel cell device for Ames Research Center that could use solar power to split water into oxygen for breathing and hydrogen for fuel on Mars. Sridhar saw the potential of the technology, when reversed, to create clean energy on Earth. He founded Bloom Energy, of Sunnyvale, California, to advance the technology. Today, the Bloom Energy Server is providing cost-effective, environmentally friendly energy to a host of companies such as eBay, Google, and The Coca-Cola Company. Bloom's NASA-derived Energy Servers generate energy that is about 67 percent cleaner than a typical coal-fired power plant when using fossil fuels and 100 percent cleaner with renewable fuels.

  6. Implementing a secure client/server application

    SciTech Connect

    Kissinger, B.A.

    1994-08-01

    There is a rising number of attacks and security breaches on computer systems. Particularly vulnerable are systems that exchange user names and passwords directly across a network without encryption. These kinds of systems include many commercial off-the-shelf client/server applications. A secure technique for authenticating computer users and transmitting passwords through the use of a trusted "broker" and public/private keys is described in this paper.

  7. COMPASS server for remote homology inference.

    PubMed

    Sadreyev, Ruslan I; Tang, Ming; Kim, Bong-Hyun; Grishin, Nick V

    2007-07-01

    COMPASS is a method for homology detection and local alignment construction based on the comparison of multiple sequence alignments (MSAs). The method derives numerical profiles from given MSAs, constructs local profile-profile alignments and analytically estimates E-values for the detected similarities. Until now, COMPASS was only available for download and local installation. Here, we present a new web server featuring the latest version of COMPASS, which provides (i) increased sensitivity and selectivity of homology detection; (ii) longer, more complete alignments; and (iii) faster computational speed. After submission of the query MSA or single sequence, the server performs searches versus a user-specified database. The server includes detailed and intuitive control of the search parameters. A flexible output format, structured similarly to BLAST and PSI-BLAST, provides an easy way to read and analyze the detected profile similarities. Brief help sections are available for all input parameters and output options, along with detailed documentation. To illustrate the value of this tool for protein structure-functional prediction, we present two examples of detecting distant homologs for uncharacterized protein families. Available at http://prodata.swmed.edu/compass. PMID:17517780

  8. Las Vegas

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image of Las Vegas, NV was acquired in August 2000 and covers an area 42 km (25 miles) wide and 30 km (18 miles) long. The image displays three bands of the reflected visible and infrared wavelength region, with a spatial resolution of 15 m. McCarran International Airport to the south and Nellis Air Force Base to the northeast are the two major airports visible. Golf courses appear as bright red, worm-like areas. The first settlement in Las Vegas (which is Spanish for The Meadows) was recorded in the early 1850s, when the Mormon church, headed by Brigham Young, sent a mission of 30 men to construct a fort and teach agriculture to the Indians. Las Vegas became a city in 1905 when the railroad announced that this city was to be a major division point. Prior to legalized gambling in 1931, Las Vegas was developing as an agricultural area. Las Vegas' fame as a resort area became prominent after World War II. The image is located at 36.1 degrees north latitude and 115.1 degrees west longitude.

    The U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate.

  9. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    PubMed

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.
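
    SPEER combines physico-chemical property conservation, entropy and evolutionary rate; the abstract does not give the formulas, so the sketch below illustrates just one generic ingredient of SDS detection, the divergence between subfamily-specific and family-wide residue distributions at an alignment column (an assumption-level illustration, not the SPEER scoring function):

```python
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"

def column_distribution(column, pseudocount=1.0):
    """Amino-acid frequency distribution of one alignment column (with pseudocounts)."""
    counts = Counter(c for c in column if c in AA)
    total = sum(counts.values()) + pseudocount * len(AA)
    return {a: (counts[a] + pseudocount) / total for a in AA}

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(p[a] * math.log2(p[a] / q[a]) for a in AA)

def subfamily_divergence(columns_by_subfamily):
    """Average divergence of each subfamily's residue distribution at one position
    from the whole-family distribution; high values flag candidate SDS-like columns."""
    family_distribution = column_distribution("".join(columns_by_subfamily.values()))
    scores = [relative_entropy(column_distribution(col), family_distribution)
              for col in columns_by_subfamily.values()]
    return sum(scores) / len(scores)

# A position conserved differently in two subfamilies scores high...
print(subfamily_divergence({"subfamilyA": "DDDDDD", "subfamilyB": "KKKKKK"}))
# ...while a position conserved identically across the family scores near zero.
print(subfamily_divergence({"subfamilyA": "GGGGGG", "subfamilyB": "GGGGGG"}))
```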

  10. A client/server system for remote diagnosis of cardiac arrhythmias.

    PubMed

    Tong, D A; Gajjala, V; Widman, L E

    1995-01-01

    Health care practitioners are often faced with the task of interpreting complex heart rhythms from electrocardiograms (ECGs) produced by 12-lead ECG machines, ambulatory (Holter) monitoring systems, and intensive-care unit monitors. Usually, the practitioner caring for the patient does not have specialized training in cardiology or in ECG interpretation, and commercial programs that interpret 12-lead ECGs have been well documented in the medical literature to perform poorly at analyzing cardiac rhythm. We believe that a system capable of providing comprehensive ECG interpretation as well as access to online consultations will be beneficial to the health care system. We hypothesized that we could develop a client-server-based telemedicine system capable of providing access to (1) an on-line knowledge-based system for remote diagnosis of cardiac arrhythmias and (2) an on-line cardiologist for real-time interactive consultation using readily available resources on the Internet. Furthermore, we hypothesized that Macintosh and Microsoft Windows-based personal computers running an X server could function as the delivery platform for the developed system. Although we were successful in developing such a system that runs efficiently on a UNIX-based workstation, current personal computer X server software is not capable of running the system efficiently.

  11. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    PubMed

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-01-01

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation -with Protein Blocks-, which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/. PMID:27319297

  12. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles

    PubMed Central

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G.; Gelly, Jean-Christophe

    2016-01-01

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation —with Protein Blocks—, which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the ‘Hard’ category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/. PMID:27319297

  13. PRince: a web server for structural and physicochemical analysis of protein-RNA interface.

    PubMed

    Barik, Amita; Mishra, Abhishek; Bahadur, Ranjit Prasad

    2012-07-01

    We have developed a web server, PRince, which analyzes the structural features and physicochemical properties of the protein-RNA interface. Users need to submit a PDB file containing the atomic coordinates of both the protein and the RNA molecules in complex form (in '.pdb' format), and should also specify the chain identifiers of the interacting protein and RNA molecules. The size of the protein-RNA interface is estimated by measuring the solvent accessible surface area buried in contact. For a given protein-RNA complex, PRince calculates the structural, physicochemical and hydration properties of the interacting surfaces. All the parameters generated by the server are presented in a tabular format. The interacting surfaces can also be visualized with a software plug-in such as Jmol. In addition, output files containing the atomic coordinates of the interacting protein, RNA and interface water molecules can be downloaded. The parameters generated by PRince are novel, and users can correlate them with experimentally determined biophysical and biochemical parameters to better understand the specificity of the protein-RNA recognition process. This server will be continuously upgraded to include more parameters. PRince is publicly accessible and free for use. Available at http://www.facweb.iitkgp.ernet.in/~rbahadur/prince/home.html.
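
    PRince estimates interface size from the solvent accessible surface area (SASA) buried in contact. The server's internal procedure is not described in the abstract; the sketch below shows one common way to compute buried SASA with the FreeSASA Python package (an assumption for illustration, not part of PRince), using the standard relation BSA = SASA(protein) + SASA(RNA) - SASA(complex):

```python
import freesasa  # pip install freesasa

def sasa(pdb_path: str) -> float:
    """Total solvent accessible surface area (in square angstroms) of one PDB file."""
    # Depending on the FreeSASA version, non-protein residues (e.g. RNA) may need
    # a suitable classifier; this sketch uses the defaults.
    return freesasa.calc(freesasa.Structure(pdb_path)).totalArea()

def buried_interface_area(protein_pdb: str, rna_pdb: str, complex_pdb: str) -> float:
    """Buried SASA at the protein-RNA interface:
    BSA = SASA(protein alone) + SASA(RNA alone) - SASA(complex)."""
    return sasa(protein_pdb) + sasa(rna_pdb) - sasa(complex_pdb)

# Hypothetical input files: the separated chains and the full complex.
# bsa = buried_interface_area("protein.pdb", "rna.pdb", "complex.pdb")
# print(f"Interface area: {bsa / 2:.1f} square angstroms per component")
```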

  14. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    PubMed

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-06-20

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation -with Protein Blocks-, which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.

  15. Bringing Ad-Hoc Analytics to Big Earth Data: the EarthServer Experience

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2014-05-01

    From the commonly accepted Vs defining the Big Data challenge - volume, velocity, variety - we increasingly learn that sheer volume is not the only, and often not even the decisive, factor inhibiting access and analytics. In particular, variety of data is frequently a core issue, posing manifold problems. Based on this observation we claim that a key aspect of analytics is the freedom to ask any question, simple or complex, at any time, combining any choice of data structures, however divergent they may be. Techniques for such "ad-hoc queries" can be learned from classical databases. Their concept of high-level query languages brings several benefits: uniform semantics, allowing machine-to-machine communication, including automatic generation of queries; massive server-side optimization and parallelization; and attractive client interfaces that hide the query syntax from casual users while allowing power users to utilize it. However, these benefits used to be available only for tabular and set-oriented data, text, and - more recently - graph data. With the advent of Array Databases, they become available on large multidimensional raster data assets as well, getting one step closer to the Holy Grail of integrated, uniform retrieval for users. EarthServer is a transatlantic initiative setting up operational infrastructures based on this paradigm. In our talk, we present core EarthServer technology concepts as well as a spectrum of Earth Science applications utilizing the EarthServer platform for versatile, visualisation-supported analytics services. Further, we discuss the substantial impact EarthServer is having on Big Geo Data standardization in OGC and ISO. Time and Internet connection permitting, a live demo can be presented.
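
    The key idea above is a declarative query language (OGC WCPS) acting as the client/server interface. To give a flavour of such an ad-hoc query, the sketch below posts a WCPS request over HTTP in the style of rasdaman's WCS processing extension; the endpoint, coverage name and subsetting values are placeholders, not actual EarthServer services:

```python
import urllib.parse
import urllib.request

# Placeholder endpoint and coverage name, used only to illustrate the query style.
ENDPOINT = "https://example.org/rasdaman/ows"

# One declarative WCPS statement: spatio-temporal subsetting plus encoding,
# evaluated entirely on the server side.
wcps_query = """
for $c in (AvgTemperature)
return encode($c[Lat(40:50), Long(10:20), ansi("2014-01-01")], "image/png")
"""

params = urllib.parse.urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query.strip(),
})

with urllib.request.urlopen(f"{ENDPOINT}?{params}") as response:
    png_bytes = response.read()  # the server returns the rendered PNG subset
```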

  16. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    SciTech Connect

    Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh; Brown, Richard; Tschudi, William

    2014-08-11

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms; these comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures, such as raising cooling set points and better airflow management, to more involved but cost-effective measures, including server consolidation and virtualization and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation and the implementation of energy efficiency measures in small server rooms.
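
    The assessments above are expressed through Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. A minimal sketch of the metric, with purely illustrative numbers (not measurements from the surveyed rooms):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy (>= 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def annual_savings_kwh(it_load_kw: float, pue_before: float, pue_after: float) -> float:
    """Facility energy saved per year when the same IT load is served at a lower PUE."""
    hours_per_year = 8760
    return it_load_kw * hours_per_year * (pue_before - pue_after)

print(pue(210_000, 100_000))             # -> 2.1
print(annual_savings_kwh(10, 2.1, 1.5))  # a 10 kW IT load saves 52,560 kWh per year
```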

  17. SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments

    PubMed Central

    Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric

    2014-01-01

    This article introduces the SARA-Coffee web server, a service allowing the online computation of 3D-structure-based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences, SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner, with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data are available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee. PMID:24972831

  18. SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments.

    PubMed

    Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric

    2014-07-01

    This article introduces the SARA-Coffee web server, a service allowing the online computation of 3D-structure-based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences, SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner, with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data are available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee.

  19. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    PubMed

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program for the storage, retrieval, and processing of chemical information, which runs on a personal computer, is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  20. MARSIS data and simulation exploited using array databases: PlanetServer/EarthServer for sounding radars

    NASA Astrophysics Data System (ADS)

    Cantini, Federico; Pio Rossi, Angelo; Orosei, Roberto; Baumann, Peter; Misev, Dimitar; Oosthoek, Jelmer; Beccati, Alan; Campalani, Piero; Unnithan, Vikram

    2014-05-01

    MARSIS is an orbital synthetic aperture radar for both ionosphere and subsurface sounding on board ESA's Mars Express (Picardi et al. 2005). It transmits electromagnetic pulses centered at 1.8, 3, 4 or 5 MHz that penetrate below the surface and are reflected by compositional and/or structural discontinuities in the subsurface of Mars. MARSIS data are available as a collection of single-orbit data files. The availability of tools for more effective access to such data would greatly ease data analysis and exploitation by the community of users. For this purpose, we are developing a database built on the raster database management system RasDaMan (e.g. Baumann et al., 1994), to be populated with MARSIS data and integrated in the PlanetServer/EarthServer (e.g. Oosthoek et al., 2013; Rossi et al., this meeting) project. The data (and related metadata) are stored in the database for each frequency used by the MARSIS radar. The capability of retrieving data belonging to a single orbit or to multiple orbits on the basis of latitude/longitude boundaries is a key requirement of the database design, allowing, besides the "classical" radargram representation of the data, 3D data extraction, subsetting and analysis of subsurface structures in areas with sufficiently high orbit density. Moreover, the use of the OGC WCPS (Web Coverage Processing Service) standard allows calculations on database query results for multiple echoes and/or subsets of a certain data product. Because of the low directivity of its dipole antenna, MARSIS receives echoes from portions of the surface of Mars that are distant from nadir and can be mistakenly interpreted as subsurface echoes. For this reason, methods have been developed to simulate surface echoes (e.g. Nouvel et al., 2004), to reveal the true origin of an echo through comparison with instrument data. These simulations are usually time-consuming, and so far have been performed either on a case-by-case basis or in some simplified form. A code for

  1. MARSIS data and simulation exploited using array databases: PlanetServer/EarthServer for sounding radars

    NASA Astrophysics Data System (ADS)

    Cantini, Federico; Pio Rossi, Angelo; Orosei, Roberto; Baumann, Peter; Misev, Dimitar; Oosthoek, Jelmer; Beccati, Alan; Campalani, Piero; Unnithan, Vikram

    2014-05-01

    MARSIS is an orbital synthetic aperture radar for both ionosphere and subsurface sounding on board ESA's Mars Express (Picardi et al. 2005). It transmits electromagnetic pulses centered at 1.8, 3, 4 or 5 MHz that penetrate below the surface and are reflected by compositional and/or structural discontinuities in the subsurface of Mars. MARSIS data are available as a collection of single-orbit data files. The availability of tools for more effective access to such data would greatly ease data analysis and exploitation by the community of users. For this purpose, we are developing a database built on the raster database management system RasDaMan (e.g. Baumann et al., 1994), to be populated with MARSIS data and integrated in the PlanetServer/EarthServer (e.g. Oosthoek et al., 2013; Rossi et al., this meeting) project. The data (and related metadata) are stored in the database for each frequency used by the MARSIS radar. The capability of retrieving data belonging to a single orbit or to multiple orbits on the basis of latitude/longitude boundaries is a key requirement of the database design, allowing, besides the "classical" radargram representation of the data, 3D data extraction, subsetting and analysis of subsurface structures in areas with sufficiently high orbit density. Moreover, the use of the OGC WCPS (Web Coverage Processing Service) standard allows calculations on database query results for multiple echoes and/or subsets of a certain data product. Because of the low directivity of its dipole antenna, MARSIS receives echoes from portions of the surface of Mars that are distant from nadir and can be mistakenly interpreted as subsurface echoes. For this reason, methods have been developed to simulate surface echoes (e.g. Nouvel et al., 2004), to reveal the true origin of an echo through comparison with instrument data. These simulations are usually time-consuming, and so far have been performed either on a case-by-case basis or in some simplified form. A code for

  2. SeeHaBITaT: A server on bioinformatics applications for Tospoviruses and other species.

    PubMed

    Sakthivel, Seethalakshmi; Habeeb, S K M

    2016-06-01

    Plant viruses are important limiting factors in agricultural productivity. Tospovirus is one of the most severe plant pathogens, causing damage to economically important food and ornamental crops worldwide through thrips as vectors. Dedicated database and application resources for this virus, which are not currently available, would help in designing better control measures. SeeHaBITaT is a unique and exclusive web-based server providing a workbench to perform computational research on Tospovirus and its species. SeeHaBITaT hosts the Tospovirus-specific database Togribase along with MOLBIT, SRMBIT and SS with PDB. These applications should be of immense help to the Tospovirus scientific community. The server can be accessed at http://bit.srmuniv.ac.in/. PMID:27354938

  3. Hybrid Spatial Query Processing between a Server and a Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Kim, Min Soo; Kim, Ju Wan; Kim, Myoung Ho

    There has been much interest in spatial queries that acquire sensor readings from sensor nodes inside a specified geographical area of interest. A centralized approach performs the spatial query at a server after acquiring all sensor readings. However, it incurs a high wireless transmission cost in accessing all sensor nodes. Therefore, various in-network spatial search methods have been proposed, which focus on reducing the wireless transmission cost. However, the in-network methods sometimes incur unnecessary wireless transmissions because of dead space, which is spatially indexed but does not contain real data. In this paper, we propose a hybrid spatial query processing algorithm that removes these unnecessary wireless transmissions. The main idea of the hybrid algorithm is to determine part of the query result at the server in advance and to use that result to remove unnecessary wireless transmissions in the sensor network. We compare our algorithm with the in-network method through several experiments and highlight its distinctive features.
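
    The core idea above is to decide at the server which sensor nodes can contribute to the query, so that spatially indexed dead space is never visited over the wireless network. A hedged sketch of that server-side pre-filtering step, assuming the server knows each node's position (names and structure are illustrative, not the paper's algorithm):

```python
def nodes_to_query(node_positions: dict, query_window: tuple) -> list:
    """Server-side pre-filter: keep only nodes whose known positions fall inside
    the query window (xmin, ymin, xmax, ymax); indexed dead space is skipped."""
    xmin, ymin, xmax, ymax = query_window
    return [nid for nid, (x, y) in node_positions.items()
            if xmin <= x <= xmax and ymin <= y <= ymax]

def hybrid_spatial_query(node_positions, query_window, acquire):
    """acquire(node_id) stands in for one wireless request to a sensor node."""
    targets = nodes_to_query(node_positions, query_window)   # computed at the server
    return {nid: acquire(nid) for nid in targets}            # only necessary transmissions

# Toy usage: three nodes, a query window covering only two of them.
positions = {"n1": (1, 1), "n2": (5, 5), "n3": (9, 9)}
readings = hybrid_spatial_query(positions, (0, 0, 6, 6), acquire=lambda nid: 20.0)
print(readings)  # {'n1': 20.0, 'n2': 20.0}
```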

  4. Whisker: a client-server high-performance multimedia research control system.

    PubMed

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It provides a way to access this hardware for client programs, communicating with them via a simple text-based network protocol based on the standard Internet protocol. Clients to implement behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. This system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described. PMID:21139173
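
    Because Whisker exposes a simple text-based protocol over TCP, a client can be written in any network-capable language. The sketch below is a minimal, hypothetical Python client in that style; the port and command strings are placeholders rather than Whisker's documented command set:

```python
import socket

# HOST, PORT and the command strings below are placeholders illustrating a
# line-oriented text protocol over TCP, not Whisker's documented command set.
HOST, PORT = "localhost", 3233

with socket.create_connection((HOST, PORT)) as sock:
    reader = sock.makefile("r", encoding="ascii")

    def send(command: str) -> str:
        """Send one text command and read one reply line (assumed protocol shape)."""
        sock.sendall((command + "\n").encode("ascii"))
        return reader.readline().strip()

    print(send("ClientLabel behavioural-task-1"))
    print(send("LineSetState lever_light on"))
```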

  5. Client-server, distributed database strategies in a healthcare record system for a homeless population.

    PubMed Central

    Chueh, H. C.; Barnett, G. O.

    1993-01-01

    A computer-based healthcare record system being developed for Boston's Healthcare for the Homeless Program (BHCHP) uses client-server and distributed database technologies to enhance the delivery of healthcare to patients of this unusual population. The needs of physicians, nurses and social workers are specifically addressed in the application interface so that an integrated approach to healthcare for this population can be facilitated. These patients and their providers have unique medical information needs that are supported by both database and applications technology. To integrate the information capabilities with the actual practice of providers of care to the homeless, this computer-based record system is designed for remote and portable use over regular phone lines. An initial standalone system is being used at one major BHCHP site of care. This project describes methods for creating a secure, accessible, and scalable computer-based medical record using client-server, distributed database design. PMID:8130445

  6. AGP: a multimethods web server for alignment-free genome phylogeny.

    PubMed

    Cheng, Jinkui; Cao, Fuliang; Liu, Zhihua

    2013-05-01

    Phylogenetic analysis based on alignment methods faces huge challenges when dealing with whole-genome sequences, for example recombination, shuffling, and rearrangement of sequences. Thus, various alignment-free methods for phylogeny construction have been proposed. However, most of these methods have not been implemented as tools or web servers, so researchers cannot easily use them with their data sets. To facilitate the usage of various alignment-free methods, we implemented most of the popular alignment-free methods and constructed a user-friendly web server for alignment-free genome phylogeny (AGP). AGP integrates phylogenetic tree construction, visualization, and comparison functions. Both AGP and all source code of the methods are available at http://www.herbbol.org:8000/agp (last accessed February 26, 2013). AGP will facilitate research in the field of whole-genome phylogeny and comparison.
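
    The abstract does not list the specific alignment-free methods AGP implements, so the sketch below shows one generic representative of the class: k-mer frequency profiles compared by Euclidean distance, producing a distance matrix that a distance-based tree builder could consume:

```python
import math
from collections import Counter
from itertools import product

def kmer_profile(sequence: str, k: int = 4) -> dict:
    """Normalised k-mer frequency vector of a genome sequence."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {"".join(kmer): counts["".join(kmer)] / total
            for kmer in product("ACGT", repeat=k)}

def euclidean_distance(p: dict, q: dict) -> float:
    return math.sqrt(sum((p[kmer] - q[kmer]) ** 2 for kmer in p))

# Toy genomes; the resulting pairwise distances could feed a distance-based tree builder.
genomes = {
    "g1": "ACGTACGTACGTGGCC" * 10,
    "g2": "ACGTACGAACGTGGCC" * 10,
    "g3": "TTGGCCAATTGGCCAA" * 10,
}
profiles = {name: kmer_profile(seq) for name, seq in genomes.items()}
for a in genomes:
    print(a, [round(euclidean_distance(profiles[a], profiles[b]), 4) for b in genomes])
```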

  7. [An internet based medical communication server].

    PubMed

    Hu, B; Bai, J; Ye, D

    1998-04-01

    Telemedicine and medical conferencing usually require multipoint-to-multipoint communication. Because the communicating users can be patients, specialists or medical centers, they have different communication rates and different physical connections; this kind of communication is therefore complicated and limited by the available communication rates. In this paper, to meet the requirements of medical communication, we present the concept of a medical communication server that is able to receive data packages and deliver them according to the requests of clients, and describe its implementation in the Windows 95 environment using Windows Sockets.

  8. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find a significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. Data Access System for Hydrology

    NASA Astrophysics Data System (ADS)

    Whitenack, T.; Zaslavsky, I.; Valentine, D.; Djokic, D.

    2007-12-01

    As part of the CUAHSI HIS (Consortium of Universities for the Advancement of Hydrologic Science, Inc., Hydrologic Information System), the CUAHSI HIS team has developed the Data Access System for Hydrology, or DASH. DASH is based on commercial off-the-shelf technology and has been developed in conjunction with a commercial partner, ESRI. DASH is a web-based user interface, developed in ASP.NET using ESRI ArcGIS Server 9.2, that provides mapping, querying and data retrieval over observation and GIS databases and web services. This is the front-end application for the CUAHSI Hydrologic Information System Server. The HIS Server is a software stack that organizes observation databases, geographic data layers, data importing and management tools, and online user interfaces such as the DASH application into a flexible multi-tier application for serving both national-level and locally maintained observation data. The user interface of the DASH web application allows online users to query observation networks by location and attributes, selecting stations in a user-specified area where a particular variable was measured during a given time interval. Once one or more stations and variables are selected, the user can retrieve and download the observation data for further off-line analysis. The DASH application is highly configurable. The mapping interface can be configured to display map services from multiple sources in multiple formats, including ArcGIS Server, ArcIMS, and WMS. The observation network data are configured in an XML file where the network's web service location and its corresponding map layer are specified. Upon initial deployment, two national-level observation networks (USGS NWIS daily values and USGS NWIS instantaneous values) are already pre-configured. There is also an optional login page which can be used to restrict access as well as to provide an alternative to immediate downloads. For large requests, users would be notified via

  10. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Lawson, J.

    2003-01-01

    Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (a team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
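
    The contrast drawn above is between team-game agents, which all optimize the same global utility, and collectives-style agents whose local utilities are aligned with the global one (for example, difference utilities). The toy sketch below illustrates the difference-utility idea for choosing a server, under illustrative assumptions rather than the paper's actual formulation:

```python
def global_utility(loads):
    """World utility used in this toy: minimise the load of the busiest server."""
    return -max(loads)

def difference_utility(loads, agent_server, agent_load):
    """Global utility with the agent's job minus global utility without it; local
    utilities built this way are aligned with the global objective by construction."""
    without = list(loads)
    without[agent_server] -= agent_load   # counterfactual: remove this agent's job
    return global_utility(loads) - global_utility(without)

# One agent with a job of size 2 evaluates both servers given base loads [5, 3]:
for server in (0, 1):
    loads = [5, 3]
    loads[server] += 2
    print(server, difference_utility(loads, server, 2))
# The less loaded server (index 1) yields the higher difference utility (0 vs -2).
```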

  11. Electronic document distribution: Design of the anonymous FTP Langley Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.

    1994-01-01

    An experimental electronic dissemination project, the Langley Technical Report Server (LTRS), has been undertaken to determine the feasibility of delivering Langley technical reports directly to the desktops of researchers worldwide. During the first six months, over 4700 accesses occurred and over 2400 technical reports were distributed. This usage indicates the high level of interest that researchers have in performing literature searches and retrieving technical reports at their desktops. The initial system was developed with existing resources and technology. The reports are stored as files on an inexpensive UNIX workstation and are accessible over the Internet. This project will serve as a foundation for ongoing projects at other NASA centers that will allow for greater access to NASA technical reports.

  12. The EarthServer project: Exploiting Identity Federations, Science Gateways and Social and Mobile Clients for Big Earth Data Analysis

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Messina, Antonio; Pappalardo, Marco; Passaro, Gianluca

    2013-04-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. Six Lighthouse Applications are being established in EarthServer, each of which poses distinct challenges on Earth Data Analytics: Cryospheric Science, Airborne Science, Atmospheric Science, Geology, Oceanography, and Planetary Science. Altogether, they cover all Earth Science domains; the Planetary Science use case has been added to challenge concepts and standards in non-standard environments. In addition, EarthLook (maintained by Jacobs University) showcases use of OGC standards in 1D through 5D use cases. In this contribution we will report on the first applications integrated in the EarthServer Science Gateway and on the clients for mobile appliances developed to access them. We will also show how federated and social identity services can allow Big Earth Data Providers to expose their data in a distributed environment keeping a strict and fine-grained control on user authentication and authorisation. The degree of fulfilment of the EarthServer implementation with the recommendations made in the recent TERENA Study on

  13. VCL: a high performance virtual CD library server

    NASA Astrophysics Data System (ADS)

    Wan, Jiguang; Xie, ChangSheng; Tan, Zhihu

    2005-09-01

    With the increasing amount of CD data on the Internet, the CD mirror server has become a new technology. Considering the performance requirements of traditional CD mirror servers, we present a novel high-performance VCL (Virtual CD Library) server. What makes the VCL server superior are two patented technologies: a new caching architecture and an efficient network protocol specifically tailored to VCL applications. The VCL server is built on an innovative caching technology. It employs a two-level cache structure on both the client side and the server side. Instead of using existing network and file protocols such as SMB/CIFS that are generally used by existing CD servers, we have developed a set of new protocols specifically suited to the VCL environment. The new protocol is a native VCL protocol built directly on TCP/IP. The VCL protocol optimizes data transfer performance for block-level data as opposed to file-system-level data. The advantage of using a block-level native protocol is a reduced network-bandwidth requirement to transfer the same amount of data compared to a file-system-level protocol. Our experiments and independent testing have shown that VCL servers allow many more concurrent users than existing products. For very high resolution DVD videos, a VCL server with a 100 Mbps NIC supports over 10 concurrent users viewing the same or different videos simultaneously. For VCD videos, the same VCL server can support over 65 concurrent users viewing videos simultaneously. For data CDs, the VCL server can support over 500 concurrent data-stream users.

  14. OPC Data Acquisition Server for CPDev Engineering Environment

    NASA Astrophysics Data System (ADS)

    Rzońca, Dariusz; Sadolewski, Jan; Trybus, Bartosz

    An OPC server has been created for the CPDev engineering environment; it provides classified process data for OPC client applications. Hierarchical Coloured Petri nets are used at the design stage to model the server's communications with CPDev target controllers. The implementation involves a universal interface for data acquisition via different communication protocols such as Modbus or .NET Remoting.

  15. DISULFIND: a disulfide bonding state and cysteine connectivity prediction server

    PubMed Central

    Ceroni, Alessio; Passerini, Andrea; Vullo, Alessandro; Frasconi, Paolo

    2006-01-01

    DISULFIND is a server for predicting the disulfide bonding state of cysteines and their disulfide connectivity starting from sequence alone. Optionally, disulfide connectivity can be predicted from sequence and a bonding state assignment given as input. The output is a simple visualization of the assigned bonding state (with confidence degrees) and the most likely connectivity patterns. The server is available at . PMID:16844986

  16. Client-Server Connection Status Monitoring Using Ajax Push Technology

    NASA Technical Reports Server (NTRS)

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.

  17. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    PubMed

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-01

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences, and it is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By just giving an RNA sequence to the web server, the user can obtain different types of solutions for the secondary structures, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of local bases, the energy changes caused by arbitrary base mutations, as well as measures for validation of the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. PMID:27131356

  18. PROTEUS2: a web server for comprehensive protein structure prediction and structure-based annotation.

    PubMed

    Montgomerie, Scott; Cruz, Joseph A; Shrivastava, Savita; Arndt, David; Berjanskii, Mark; Wishart, David S

    2008-07-01

    PROTEUS2 is a web server designed to support comprehensive protein structure prediction and structure-based annotation. PROTEUS2 accepts either single sequences (for directed studies) or multiple sequences (for whole proteome annotation) and predicts the secondary and, if possible, tertiary structure of the query protein(s). Unlike most other tools or servers, PROTEUS2 bundles signal peptide identification, transmembrane helix prediction, transmembrane beta-strand prediction, secondary structure prediction (for soluble proteins) and homology modeling (i.e. 3D structure generation) into a single prediction pipeline. Using a combination of progressive multi-sequence alignment, structure-based mapping, hidden Markov models, multi-component neural nets and up-to-date databases of known secondary structure assignments, PROTEUS is able to achieve among the highest reported levels of predictive accuracy for signal peptides (Q2 = 94%), membrane spanning helices (Q2 = 87%) and secondary structure (Q3 score of 81.3%). PROTEUS2's homology modeling services also provide high quality 3D models that compare favorably with those generated by SWISS-MODEL and 3D JigSaw (within 0.2 Å RMSD). The average PROTEUS2 prediction takes approximately 3 min per query sequence. The PROTEUS2 server, along with source code for many of its modules, is accessible at http://wishart.biology.ualberta.ca/proteus2.

  19. Rtools: a web server for various secondary structural analyses on single RNA sequences

    PubMed Central

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-01-01

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences, and it is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By just giving an RNA sequence to the web server, the user can obtain different types of solutions for the secondary structures, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of local bases, the energy changes caused by arbitrary base mutations, as well as measures for validation of the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. PMID:27131356

  20. Pre-main-sequence isochrones - III. The Cluster Collaboration isochrone server

    NASA Astrophysics Data System (ADS)

    Bell, Cameron P. M.; Rees, Jon M.; Naylor, Tim; Mayne, N. J.; Jeffries, R. D.; Mamajek, Eric E.; Rowe, John

    2014-12-01

    We present an isochrone server for semi-empirical pre-main-sequence model isochrones in the following systems: Johnson-Cousins, Sloan Digital Sky Survey, Two-Micron All-Sky Survey, Isaac Newton Telescope (INT) Wide-Field Camera and INT Photometric Hα Survey (IPHAS)/UV-Excess Survey (UVEX). The server can be accessed via the Cluster Collaboration webpage http://www.astro.ex.ac.uk/people/timn/isochrones/. To achieve this, we have used the observed colours of member stars in young clusters with well-established age, distance and reddening to create fiducial loci in the colour-magnitude diagram. These empirical sequences have been used to quantify the discrepancy between the models and data arising from uncertainties in both the interior and atmospheric models, resulting in tables of semi-empirical bolometric corrections (BCs) in the various photometric systems. The model isochrones made available through the server are based on existing stellar interior models coupled with our newly derived semi-empirical BCs. As part of this analysis, we also present new cluster parameters for both the Pleiades and Praesepe, yielding ages of 135 (+20/-11) Myr and 665 (+14/-7) Myr as well as distances of 132 ± 2 and 184 ± 2 pc, respectively (statistical uncertainty only).

  1. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between various receivers depending on their network access, service provider, and nationality; the QoS is also variable in time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams 'to order' and 'just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed such that commonly used components of media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to a resource-optimal presentation that is especially useful for the presentation of complex multimedia documents on a variety of output devices.

  2. PASTA 2.0: an improved server for protein aggregation prediction

    PubMed Central

    Walsh, Ian; Seno, Flavio; Tosatto, Silvio C.E.; Trovato, Antonio

    2014-01-01

    The formation of amyloid aggregates upon protein misfolding is related to several devastating degenerative diseases. The propensities of different protein sequences to aggregate into amyloids, how they are enhanced by pathogenic mutations, the presence of aggregation hot spots stabilizing pathological interactions, and the establishment of cross-amyloid interactions between co-aggregating proteins all rely, at the molecular level, on the stability of the amyloid cross-beta structure. Our redesigned server, PASTA 2.0, provides a versatile platform where all of these different features can be easily predicted on a genomic scale given input sequences. The server provides other pieces of information, such as intrinsic disorder and secondary structure predictions, that complement the aggregation data. The PASTA 2.0 energy function evaluates the stability of putative cross-beta pairings between different sequence stretches. It was re-derived on a larger dataset of globular protein domains. The resulting algorithm was benchmarked on comprehensive peptide and protein test sets, leading to improved, state-of-the-art results with more amyloid-forming regions correctly detected at high specificity. The PASTA 2.0 server can be accessed at http://protein.bio.unipd.it/pasta2/. PMID:24848016

  3. QA-RecombineIt: a server for quality assessment and recombination of protein models

    PubMed Central

    Pawlowski, Marcin; Bogdanowicz, Albert; Bujnicki, Janusz M.

    2013-01-01

    QA-RecombineIt provides a web interface to assess the quality of protein 3D structure models and to improve the accuracy of models by merging fragments of multiple input models. QA-RecombineIt has been developed for protein modelers who are working on difficult problems, have a set of different homology models and/or de novo models (from methods such as I-TASSER or ROSETTA) and would like to obtain one consensus model that incorporates the best parts into one structure that is internally coherent. An advanced mode is also available, in which one can modify the operation of the fragment recombination algorithm by manually identifying individual fragments or entire models to recombine. Our method produces up to 100 models that are expected to be on the average more accurate than the starting models. Therefore, our server may be useful for crystallographic protein structure determination, where protein models are used for Molecular Replacement to solve the phase problem. To address the latter possibility, a special feature was added to the QA-RecombineIt server. The QA-RecombineIt server can be freely accessed at http://iimcb.genesilico.pl/qarecombineit/. PMID:23700309

  4. NAFlex: a web server for the study of nucleic acid flexibility

    PubMed Central

    Hospital, Adam; Faustino, Ignacio; Collepardo-Guevara, Rosana; González, Carlos; Gelpí, Josep Lluis; Orozco, Modesto

    2013-01-01

    We present NAFlex, a new web tool to study the flexibility of nucleic acids, either isolated or bound to other molecules. The server allows the user to incorporate structures from protein data banks, completing gaps and removing structural inconsistencies. It is also possible to define canonical (average or sequence-adapted) nucleic acid structures using a variety of predefined internal libraries, as well as to create specific nucleic acid conformations from the sequence. The server offers a variety of methods to explore nucleic acid flexibility, such as a colorless wormlike-chain model, a base-pair resolution mesoscopic model and atomistic molecular dynamics simulations with a wide variety of protocols and force fields. The trajectories obtained by simulations, or imported externally, can be visualized and analyzed using a large number of tools, including standard Cartesian analysis, essential dynamics, helical analysis, local and global stiffness, energy decomposition, principal components and in silico NMR spectra. The server is accessible free of charge from the mmb.irbbarcelona.org/NAFlex webpage. PMID:23685436
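
    For context on what a worm-like-chain (WLC) description of nucleic acid flexibility gives you, the snippet below evaluates the standard WLC mean-squared end-to-end distance; the persistence length and fragment size are only example values, not NAFlex defaults.

```python
# Hedged illustration: the standard worm-like chain (WLC) estimate of polymer
# flexibility that coarse-grained nucleic acid models build on.
import math


def wlc_mean_sq_end_to_end(contour_length_nm: float, persistence_length_nm: float) -> float:
    """<R^2> = 2*P*L - 2*P^2 * (1 - exp(-L/P)) for a worm-like chain."""
    L, P = contour_length_nm, persistence_length_nm
    return 2.0 * P * L - 2.0 * P**2 * (1.0 - math.exp(-L / P))


# Example: a 300 bp duplex (~0.34 nm rise per bp), assuming P ~ 50 nm for B-DNA.
L = 300 * 0.34
print(math.sqrt(wlc_mean_sq_end_to_end(L, 50.0)), "nm RMS end-to-end distance")
```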

  5. OPAAS: a web server for optimal, permuted, and other alternative alignments of protein structures.

    PubMed

    Shih, Edward S C; Gan, Ruei-chi R; Hwang, Ming-Jing

    2006-07-01

    The large number of experimentally determined protein 3D structures is a rich resource for studying protein function and evolution, and protein structure comparison (PSC) is a key method for such studies. When comparing two protein structures, almost all currently available PSC servers report a single and sequential (i.e. topological) alignment, whereas the existence of good alternative alignments, including those involving permutations (i.e. non-sequential or non-topological alignments), is well known. We have recently developed a novel PSC method that can detect alternative alignments of statistical significance (alignment similarity P-value <10(-5)), including structural permutations at all levels of complexity. OPAAS, the server of this PSC method freely accessible at our website (http://opaas.ibms.sinica.edu.tw), provides an easy-to-read hierarchical layout of output to display detailed information on all of the significant alternative alignments detected. Because these alternative alignments can offer a more complete picture on the structural, evolutionary and functional relationship between two proteins, OPAAS can be used in structural bioinformatics research to gain additional insight that is not readily provided by existing PSC servers.

  6. HotSpot Wizard: a web server for identification of hot spots in protein engineering.

    PubMed

    Pavelka, Antonin; Chovancova, Eva; Damborsky, Jiri

    2009-07-01

    HotSpot Wizard is a web server for automatic identification of 'hot spots' for engineering of substrate specificity, activity or enantioselectivity of enzymes and for annotation of protein structures. The web server implements the protein engineering protocol, which targets evolutionarily variable amino acid positions located in the active site or lining the access tunnels. The 'hot spots' for mutagenesis are selected through the integration of structural, functional and evolutionary information obtained from: (i) the databases RCSB PDB, UniProt, PDBSWS, Catalytic Site Atlas and nr NCBI and (ii) the tools CASTp, CAVER, BLAST, CD-HIT, MUSCLE and Rate4Site. The protein structure and e-mail address are the only obligatory inputs for the calculation. In the output, HotSpot Wizard lists annotated residues ordered by estimated mutability. The results of the analysis are mapped on the enzyme structure and visualized in the web browser using Jmol. The HotSpot Wizard server should be useful for protein engineers interested in exploring the structure of their favourite protein and for the design of mutations in site-directed mutagenesis and focused directed evolution experiments. HotSpot Wizard is available at http://loschmidt.chemi.muni.cz/hotspotwizard/.

  7. Evolution of a legacy system to a Web patient record server: leveraging investment while opening the system.

    PubMed

    Flanagan, J R; Chun, J; Wagner, J R

    1996-01-01

    A layered system is under development to enhance our legacy system as a backend in a WEB-enabled system. Each layer of the system has defined functionality, leverages the investment in the layer below, and follows the strategy of reducing support requirements for workstations. The mainframe system provides administrative integration of sub-systems, security, and the central data repository for most information. The second layer is a graphical user interface (GUI) to the system for Windows platforms. Support needs are limited by relying chiefly on X-terminals and application servers. The "Intranet" layer is a WEB Server building upon the second layer gateways to provide platform-independent access to selected information and images. The fourth layer, under evaluation, will extend access to the central data repository for Internet users of web browsers that support private-key/public-key encryption.

  8. Evolution of a legacy system to a Web patient record server: leveraging investment while opening the system.

    PubMed Central

    Flanagan, J. R.; Chun, J.; Wagner, J. R.

    1996-01-01

    A layered system is under development to enhance our legacy system as a backend in a WEB-enabled system. Each layer of the system has defined functionality, leverages the investment in the layer below, and follows the strategy of reducing support requirements for workstations. The mainframe system provides administrative integration of sub-systems, security, and the central data repository for most information. The second layer is a graphical user interface (GUI) to the system for Windows platforms. Support needs are limited by relying chiefly on X-terminals and application servers. The "Intranet" layer is a WEB Server building upon the second layer gateways to provide platform-independent access to selected information and images. The fourth layer, under evaluation, will extend access to the central data repository for Internet users of web browsers that support private-key/public-key encryption. PMID:8947740

  9. FarFetch--an Internet-based sequence entry server.

    PubMed

    Gilbert, W A

    1994-04-01

    This communication is to announce the availability of a network server for biological sequence database entries which will allow scientists to fetch entries in a desired format directly into their file store. This server will use TCP/IP protocols allowing any user with an Internet connection to participate. FarFetch will allow users to obtain sequence entries in a directly usable form as opposed to conventional e-mail based sequence retrievers. This server also differs from Gopher and WAIS servers in that the sequence entry is written into the user's file store in a format that is immediately usable. Clients for the OpenVMS, Unix and Macintosh operating systems have been written and are available via anonymous ftp. Development of MS-DOS and Windows clients is planned. There will be no usage fees associated with the server.

  10. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model focuses on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of the auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
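
    The paper's results are analytical; as a rough intuition aid, the following Monte Carlo sketch (not the paper's model) contrasts downtime with and without a remote backup whose availability is re-drawn at every activation. All rates and probabilities are invented example values.

```python
# A minimal Monte Carlo sketch of a main server backed by a remote spare
# whose availability changes per activation.
import random

random.seed(1)


def simulate(hours=100_000, p_fail_per_hour=0.001, repair_hours=8):
    """Estimate downtime with and without a randomly available backup."""
    downtime_alone = downtime_backed = 0
    for _ in range(hours):
        if random.random() < p_fail_per_hour:          # main server breaks
            downtime_alone += repair_hours
            # Backup availability is re-drawn at every activation (here: 90%).
            if random.random() >= 0.9:
                downtime_backed += repair_hours
    return downtime_alone, downtime_backed


alone, backed = simulate()
print(f"downtime without backup: {alone} h, with random remote backup: {backed} h")
```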

  11. Improvements to the NIST network time protocol servers

    NASA Astrophysics Data System (ADS)

    Levine, Judah

    2008-12-01

    The National Institute of Standards and Technology (NIST) operates 22 network time servers at various locations. These servers respond to requests for time in a number of different formats and provide time stamps that are directly traceable to the NIST atomic clock ensemble in Boulder. The link between the servers at locations outside of the NIST Boulder Laboratories and the atomic clock ensemble is provided by the Automated Computer Time Service (ACTS) system, which has a direct connection to the clock ensemble and which transmits time information over dial-up telephone lines with a two-way protocol to measure the transmission delay. I will discuss improvements to the ACTS servers and to the time servers themselves. These improvements have resulted in an improvement of almost an order of magnitude in the performance of the system.
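
    For readers unfamiliar with how a two-way exchange cancels the transmission delay, the snippet below shows the standard NTP offset/delay arithmetic; the ACTS dial-up protocol measures its delay differently, and the timestamps here are fabricated for illustration.

```python
# Illustrative only: the standard two-way time-transfer arithmetic used by NTP.
def ntp_offset_and_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client clock is off
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay


# Example timestamps in seconds: client clock 0.050 s slow, 0.010 s one-way
# delay each direction, 0.001 s server turnaround.
print(ntp_offset_and_delay(100.000, 100.060, 100.061, 100.021))
```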

  12. METAGENassist: a comprehensive web server for comparative metagenomics.

    PubMed

    Arndt, David; Xia, Jianguo; Liu, Yifeng; Zhou, You; Guo, An Chi; Cruz, Joseph A; Sinelnikov, Igor; Budwill, Karen; Nesbø, Camilla L; Wishart, David S

    2012-07-01

    With recent improvements in DNA sequencing and sample extraction techniques, the quantity and quality of metagenomic data are now growing exponentially. This abundance of richly annotated metagenomic data and bacterial census information has spawned a new branch of microbiology called comparative metagenomics. Comparative metagenomics involves the comparison of bacterial populations between different environmental samples, different culture conditions or different microbial hosts. However, in order to do comparative metagenomics, one typically requires a sophisticated knowledge of multivariate statistics and/or advanced software programming skills. To make comparative metagenomics more accessible to microbiologists, we have developed a freely accessible, easy-to-use web server for comparative metagenomic analysis called METAGENassist. Users can upload their bacterial census data from a wide variety of common formats, using either amplified 16S rRNA data or shotgun metagenomic data. Metadata concerning environmental, culture, or host conditions can also be uploaded. During the data upload process, METAGENassist also performs an automated taxonomic-to-phenotypic mapping. Phenotypic information covering nearly 20 functional categories such as GC content, genome size, oxygen requirements, energy sources and preferred temperature range is automatically generated from the taxonomic input data. Using this phenotypically enriched data, users can then perform a variety of multivariate and univariate data analyses including fold change analysis, t-tests, PCA, PLS-DA, clustering and classification. To facilitate data processing, users are guided through a step-by-step analysis workflow using a variety of menus, information hyperlinks and check boxes. METAGENassist also generates colorful, publication quality tables and graphs that can be downloaded and used directly in the preparation of scientific papers. METAGENassist is available at http://www.metagenassist.ca.
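
    The kind of analysis METAGENassist automates (fold change and PCA over a taxa-by-sample abundance table) can be sketched as follows; the data, group labels and taxa in this example are invented, and this is not the server's code.

```python
# Hedged sketch of fold-change and PCA analysis on a small fake abundance table.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows = samples (first 4 "contaminated", last 4 "pristine"), columns = taxa counts.
abundance = rng.poisson(lam=[20, 5, 1, 8], size=(8, 4)).astype(float)
abundance[:4, 2] += 15                      # enrich taxon 2 in the first group

# Fold change between the two sample groups (pseudo-count avoids divide-by-zero).
fold_change = (abundance[:4].mean(axis=0) + 1) / (abundance[4:].mean(axis=0) + 1)
print("fold change per taxon:", np.round(fold_change, 2))

# PCA on log-transformed abundances separates the two groups along PC1/PC2.
scores = PCA(n_components=2).fit_transform(np.log1p(abundance))
print("sample coordinates on PC1/PC2:\n", np.round(scores, 2))
```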

  13. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided in 2010 to develop the Oceanotron server. Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format); and FrontDesks, which receive external requests and return results over interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: TransformationUnits, which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in decibars to meters below the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an Oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC Observations & Measurements and Unidata Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to
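
    Oceanotron itself is written in Java; purely as a hedged sketch of the three plugin roles sharing one observation model, the Python stub below chains a StorageUnit, a TransformationUnit and a FrontDesk. The class and method names are invented for illustration, not Oceanotron's API.

```python
# Not Oceanotron's actual API -- a hedged sketch of its plugin architecture.
from abc import ABC, abstractmethod


class Profile:
    """Shared feature model: a vertical profile of (depth, value) pairs."""
    def __init__(self, samples):
        self.samples = list(samples)          # e.g. [(pressure_dbar, temperature), ...]


class StorageUnit(ABC):
    @abstractmethod
    def read(self, query) -> list: ...


class TransformationUnit(ABC):
    @abstractmethod
    def apply(self, profiles: list) -> list: ...


class FrontDesk(ABC):
    @abstractmethod
    def respond(self, profiles: list) -> str: ...


class NetCdfStorage(StorageUnit):
    def read(self, query):                    # a stub standing in for netCDF access
        return [Profile([(0.0, 15.2), (10.0, 14.8)])]


class PressureToDepth(TransformationUnit):
    def apply(self, profiles):                # crude 1 dbar ~ 1 m conversion
        return [Profile([(p * 0.99, v) for p, v in pr.samples]) for pr in profiles]


class CsvFrontDesk(FrontDesk):
    def respond(self, profiles):
        return "\n".join(f"{d:.2f},{v}" for pr in profiles for d, v in pr.samples)


# The server simply chains the plugins: storage -> transformations -> front desk.
print(CsvFrontDesk().respond(PressureToDepth().apply(NetCdfStorage().read(query=None))))
```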

  14. The CAD-score web server: contact area-based comparison of structures and interfaces of proteins, nucleic acids and their complexes.

    PubMed

    Olechnovič, Kliment; Venclovas, Ceslovas

    2014-07-01

    The Contact Area Difference score (CAD-score) web server provides a universal framework to compute and analyze discrepancies between different 3D structures of the same biological macromolecule or complex. The server accepts both single-subunit and multi-subunit structures and can handle all the major types of macromolecules (proteins, RNA, DNA and their complexes). It can perform numerical comparison of both structures and interfaces. In addition to entire structures and interfaces, the server can assess user-defined subsets. The CAD-score server performs both global and local numerical evaluations of structural differences between structures or interfaces. The results can be explored interactively using sortable tables of global scores, profiles of local errors, superimposed contact maps and 3D structure visualization. The web server could be used for tasks such as comparison of models with the native (reference) structure, comparison of X-ray structures of the same macromolecule obtained in different states (e.g. with and without a bound ligand), analysis of nuclear magnetic resonance (NMR) structural ensemble or structures obtained in the course of molecular dynamics simulation. The web server is freely accessible at: http://www.ibt.lt/bioinformatics/cad-score.
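
    As a toy illustration of the idea of scoring by contact-area differences, the snippet below computes a simplified measure over precomputed residue-residue contact areas; it is not the published CAD-score definition, and the numbers are invented (the real server derives contact areas from a tessellation of the structures).

```python
# Toy contact-area-difference measure, in the spirit of (but not identical to)
# the CAD-score, given precomputed residue-residue contact areas.
def cad_like_score(target_areas: dict, model_areas: dict) -> float:
    """1 - (bounded contact-area differences) / (total target contact area)."""
    total = sum(target_areas.values())
    diff = sum(min(abs(a - model_areas.get(contact, 0.0)), a)
               for contact, a in target_areas.items())
    return 1.0 - diff / total


target = {("A10", "A54"): 22.5, ("A10", "B3"): 8.1, ("A54", "B3"): 14.0}
model = {("A10", "A54"): 18.0, ("A54", "B3"): 14.0}   # one contact missing
print(round(cad_like_score(target, model), 3))        # 1.0 would mean identical contacts
```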

  15. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  16. [A Terahertz Spectral Database Based on Browser/Server Technique].

    PubMed

    Zhang, Zhuo-yong; Song, Yue

    2015-09-01

    As key scientific and technical problems have been solved and instrumentation has developed, the application of terahertz technology in various fields has attracted more and more attention. Owing to its unique advantages, terahertz technology shows a broad future in fast, non-damaging detection as well as many other fields. Terahertz technology combined with other complementary methods can be used to cope with many difficult practical problems which could not be solved before. One of the critical points for further development of practical terahertz detection methods is a good and reliable terahertz spectral database. We recently developed a browser/server (B/S)-based terahertz spectral database. We designed the main structure and main functions to fulfill practical requirements. The terahertz spectral database now includes more than 240 items, and the spectral information was collected from three sources: (1) collection and citation from other terahertz spectral databases abroad; (2) collection from published literature; and (3) spectral data measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of the key functions of this THz database is the calculation of optical parameters. Some optical parameters, including the absorption coefficient, refractive index, etc., can be calculated from the input THz time-domain spectra. The other main functions and searching methods of the browser/server-based terahertz spectral database are also discussed. The database search system provides users with convenient functions including user registration, inquiry, display of spectral figures and molecular structures, spectral matching, etc. The THz database system provides an on-line searching function for registered users. Registered users can compare the input THz spectrum with the spectra of the database, according to
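
    The optical-parameter calculation mentioned above is commonly done with the thick-sample THz-TDS approximation sketched below; the abstract does not specify the database's exact formulas, so this is a generic, hedged illustration with made-up measurement values.

```python
# Hedged sketch of the usual thick-sample THz-TDS extraction of n and alpha
# from a sample/reference amplitude ratio and phase difference.
import numpy as np

c = 299_792_458.0                       # speed of light, m/s


def thz_optical_parameters(freq_hz, amp_ratio, phase_diff_rad, thickness_m):
    """Return refractive index n(f) and absorption coefficient alpha(f) in 1/m."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    n = 1.0 + c * np.asarray(phase_diff_rad) / (omega * thickness_m)
    fresnel = 4.0 * n / (n + 1.0) ** 2  # transmission losses at the two faces
    alpha = -(2.0 / thickness_m) * np.log(np.asarray(amp_ratio) / fresnel)
    return n, alpha


# Example at 1 THz for a 1 mm pellet (invented measurement values).
n, alpha = thz_optical_parameters([1.0e12], [0.6], [12.0], 1.0e-3)
print(n, alpha / 100.0, "1/cm")
```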

  17. TogoDoc Server/Client System: Smart Recommendation and Efficient Management of Life Science Literature

    PubMed Central

    Takagi, Toshihisa

    2010-01-01

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the “tsunami” of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom. PMID:21179453
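
    Content-based recommendation of the kind described here is often built on document similarity; as a hedged illustration (not TogoDoc's actual algorithm), the sketch below scores new papers against a small personal library using TF-IDF vectors and cosine similarity, with invented titles.

```python
# Hedged sketch of content-based literature recommendation via TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

library = [
    "amyloid aggregation of misfolded proteins",
    "molecular dynamics of nucleic acid flexibility",
]
new_papers = [
    "predicting protein aggregation hot spots from sequence",
    "a web server for comparative metagenomics of soil samples",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(library + new_papers)
lib_vecs, new_vecs = matrix[: len(library)], matrix[len(library):]

# Recommend the new papers that best match the library's overall content.
scores = cosine_similarity(new_vecs, lib_vecs).max(axis=1)
for title, score in sorted(zip(new_papers, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {title}")
```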

  18. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    PubMed

    Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa

    2010-12-13

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.

  19. STRAW: Species TRee Analysis Web server.

    PubMed

    Shaw, Timothy I; Ruan, Zheng; Glenn, Travis C; Liu, Liang

    2013-07-01

    The coalescent methods for species tree reconstruction are increasingly popular because they can accommodate coalescence and multilocus data sets. Herein, we present STRAW, a web server that offers workflows for reconstruction of phylogenies of species using three species tree methods - MP-EST, STAR and NJst. The input data are a collection of rooted gene trees (for STAR and MP-EST methods) or unrooted gene trees (for NJst). The output includes the estimated species tree, modified Robinson-Foulds distances between gene trees and the estimated species tree, and visualization of trees to compare gene trees with the estimated species tree. The web server is available at http://bioinformatics.publichealth.uga.edu/SpeciesTreeAnalysis/.
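
    To illustrate the Robinson-Foulds idea underlying STRAW's gene-tree/species-tree comparison, the simplified sketch below counts clades present in one small rooted tree but not the other; real tools parse Newick files and use modified RF variants, and the trees here are toy examples.

```python
# Simplified Robinson-Foulds distance for small rooted trees written as nested tuples.
def leaf_set(tree, clade_acc):
    """Return the leaves under `tree`, recording every internal clade in clade_acc."""
    if not isinstance(tree, tuple):          # a leaf
        return frozenset([tree])
    leaves = frozenset().union(*(leaf_set(child, clade_acc) for child in tree))
    clade_acc.add(leaves)
    return leaves


def rf_distance(tree_a, tree_b):
    clades_a, clades_b = set(), set()
    leaf_set(tree_a, clades_a)
    leaf_set(tree_b, clades_b)
    return len(clades_a ^ clades_b)          # symmetric difference of clade sets


gene_tree = ((("human", "chimp"), "gorilla"), "macaque")
species_tree = (("human", ("chimp", "gorilla")), "macaque")
print(rf_distance(gene_tree, species_tree))  # 2: one clade unique to each tree
```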

  20. Performance Evaluation of Virtualization Techniques for Control and Access of Storage Systems in Data Center Applications

    NASA Astrophysics Data System (ADS)

    Ahmadi, Mohammad Reza

    2013-09-01

    Virtualization is a new technology that creates virtual environments based on existing physical resources. This article evaluates the effect of virtualization techniques on control servers and on the access method in storage systems [1, 2]. For control server virtualization, we present a tile-based evaluation with heterogeneous workloads to compare several key parameters and demonstrate the effectiveness of virtualization techniques. Moreover, we evaluate the virtualized model using VMotion techniques and maximum consolidation. For the access method, we prepare three different scenarios using direct, semi-virtual, and virtual attachment models. We evaluate the proposed models with several workloads including an OLTP database, data streaming, a file server, a web server, etc. Results of the evaluation for different criteria confirm that the server virtualization technique has high throughput and CPU usage as well as good performance with noticeable agility. The virtual technique is also a successful alternative for accessing storage systems, especially large-capacity systems, and can therefore be an effective solution for expanding storage area and reducing access time. Results of different evaluations and measurements demonstrate that virtualization of the control server and fully virtual access provide better performance, more agility and higher utilization in the systems, and improve the business continuity plan.

  1. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application continually issues data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent, dynamically generated KML code that directs the client application to make follow-on requests for higher level-of-detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be
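
    A hedged sketch of the cascading-KML pattern follows: the server answers each request with imagery plus a generated NetworkLink whose Region/Lod tells the client when to fetch the next, finer level of detail. The KML element names are standard; the URL pattern and tiling scheme are invented and not those of the described system.

```python
# Sketch: dynamically generate a KML response containing the next-level NetworkLink.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <GroundOverlay>
      <Icon><href>{tile_url}</href></Icon>
      <LatLonBox><north>{n}</north><south>{s}</south><east>{e}</east><west>{w}</west></LatLonBox>
    </GroundOverlay>
    <NetworkLink>
      <Region>
        <LatLonAltBox><north>{n}</north><south>{s}</south><east>{e}</east><west>{w}</west></LatLonAltBox>
        <Lod><minLodPixels>256</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
      </Region>
      <Link><href>{next_level_url}</href><viewRefreshMode>onRegion</viewRefreshMode></Link>
    </NetworkLink>
  </Document>
</kml>"""


def kml_response(level: int, n: float, s: float, e: float, w: float) -> str:
    """Build the KML the server would return for one region at one LOD level."""
    return KML_TEMPLATE.format(
        n=n, s=s, e=e, w=w,
        tile_url=f"https://example.org/tiles/{level}/{n}_{s}_{e}_{w}.png",
        next_level_url=f"https://example.org/kml?level={level + 1}&bbox={w},{s},{e},{n}",
    )


print(kml_response(level=1, n=45.0, s=40.0, e=-100.0, w=-105.0)[:200], "...")
```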

  2. Secure Entanglement Distillation for Double-Server Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-07-01

    Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client’s input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.

  3. Secure entanglement distillation for double-server blind quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-07-12

    Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.

  4. Understanding Customer Dissatisfaction with Underutilized Distributed File Servers

    NASA Technical Reports Server (NTRS)

    Riedel, Erik; Gibson, Garth

    1996-01-01

    An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew File System (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.

  5. Accessibility Videos.

    PubMed

    Kurppa, Ari; Nordlund, Marika

    2016-01-01

    It can be difficult to understand accessibility if you do not have personal experience of it. The Accessibility Centre ESKE produced short videos which demonstrate the meaning of accessibility in different situations. The videos will raise accessibility awareness among architects, other planners, and professionals in construction and maintenance. PMID:27534282

  6. "Just Another Tool for Online Studies" (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies.

    PubMed

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here "Just Another Tool for Online Studies" (JATOS): an open source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies that are written in JavaScript. JATOS is easy to install in all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to try the application and feasibility of their studies within a browser environment, before engaging in setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS' main features and implementation and provide a detailed tutorial along with example studies to help interested researchers to set up their online studies. JATOS can be found under the Internet address: www.jatos.org.

  7. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards.

    PubMed

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties. PMID:26709702

  8. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards.

    PubMed

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.

  9. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards

    PubMed Central

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user’s management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.’s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.’s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.’s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties. PMID:26709702

  10. [Image data bases and multimedia works on server and CD-ROM in medical imaging. A French experience].

    PubMed

    Duvauferrier, R; Rambeau, M; André, M; Denier, P; Le Beux, P; Coussement, A; Caillé, J M; Robache, P; Morcet, N

    1995-12-01

    CD-ROM technology allows the production of multimedia works that cost far less than books. The creation of the Internet and World Wide Web servers makes it possible to distribute those works worldwide, without the difficulties and delays of book and magazine distribution. The Teachers' Council of Radiology of France (CERF) and the French Society of Radiology (SFR) have opted to use these new media and information highways to disseminate part of their radiology teaching work. Iconocerf is a software program that allows users to create, store, read and exchange digitized radiological cases. It is available free of charge within the CERF and SFR. The CD-ROMs Iconocerf-Medimag contain 3,500 radiological files with 15,000 images, previously held on the videodisc Medimag. The Server of French Radiology is a W3 server which includes the CERF directory, a guide for teachers, research workers and students in Radiology and Medical Imaging. It also contains teaching works on Radiology and some Iconocerf clinical cases translated into HTML. The aim of this project is to create an evaluation system for radiology. Using key words, this system allows users to consult radiological clinical cases located on the server or on CD-ROMs and reference texts, and to obtain experts' addresses so that a difficult case can eventually be sent to them by electronic mail. PMID:8676295

  11. "Just Another Tool for Online Studies" (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies.

    PubMed

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here "Just Another Tool for Online Studies" (JATOS): an open source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies that are written in JavaScript. JATOS is easy to install in all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to try the application and feasibility of their studies within a browser environment, before engaging in setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS' main features and implementation and provide a detailed tutorial along with example studies to help interested researchers to set up their online studies. JATOS can be found under the Internet address: www.jatos.org. PMID:26114751

  12. Design and Analysis of an Enhanced Patient-Server Mutual Authentication Protocol for Telecare Medical Information System.

    PubMed

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S

    2015-11-01

    In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from the stolen smart card attack, which means the attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as the identity trace attack, new smart card issue attack, patient impersonation attack and medical server impersonation attack. In order to fix the mentioned security pitfalls, including the stolen smart card attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have then simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, a rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the stolen smart card attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities. It has been observed that the proposed scheme is comparatively better than the related existing schemes.

  13. "Just Another Tool for Online Studies” (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies

    PubMed Central

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here “Just Another Tool for Online Studies” (JATOS): an open source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies that are written in JavaScript. JATOS is easy to install in all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to try the application and feasibility of their studies within a browser environment, before engaging in setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS’ main features and implementation and provide a detailed tutorial along with example studies to help interested researchers to set up their online studies. JATOS can be found under the Internet address: www.jatos.org. PMID:26114751

  14. Design and Analysis of an Enhanced Patient-Server Mutual Authentication Protocol for Telecare Medical Information System.

    PubMed

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S

    2015-11-01

    In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from the stolen smart card attack, which means the attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as the identity trace attack, new smart card issue attack, patient impersonation attack and medical server impersonation attack. In order to fix the mentioned security pitfalls, including the stolen smart card attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have then simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, a rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the stolen smart card attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities. It has been observed that the proposed scheme is comparatively better than the related existing schemes. PMID:26324169

  15. Secure Web-Site Access with Tickets and Message-Dependent Digests

    USGS Publications Warehouse

    Donato, David I.

    2008-01-01

    Although there are various methods for restricting access to documents stored on a World Wide Web (WWW) site (a Web site), none of the widely used methods is completely suitable for restricting access to Web applications hosted on an otherwise publicly accessible Web site. A new technique, however, provides a mix of features well suited for restricting Web-site or Web-application access to authorized users, including the following: secure user authentication, tamper-resistant sessions, simple access to user state variables by server-side applications, and clean session terminations. This technique, called message-dependent digests with tickets, or MDDT, maintains secure user sessions by passing single-use nonces (tickets) and message-dependent digests of user credentials back and forth between client and server. Appendix 2 provides a working implementation of MDDT with PHP server-side code and JavaScript client-side code.
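
    The report's own implementation is in PHP and JavaScript; the sketch below illustrates the ticket/message-dependent-digest idea in Python, with invented function names, a made-up request string, and standard library primitives (hmac, hashlib, secrets) standing in for the report's code.

```python
# Hedged sketch of single-use tickets plus message-dependent digests (MDDT-style).
import hashlib
import hmac
import secrets

issued_tickets = set()                        # single-use nonces awaiting redemption


def issue_ticket() -> str:
    ticket = secrets.token_hex(16)
    issued_tickets.add(ticket)
    return ticket


def client_digest(ticket: str, password: str, message: str) -> str:
    """Digest depends on the credentials, the ticket, and the message itself."""
    key = hashlib.sha256(password.encode()).digest()
    return hmac.new(key, (ticket + message).encode(), hashlib.sha256).hexdigest()


def server_verify(ticket: str, message: str, digest: str, stored_password: str) -> bool:
    if ticket not in issued_tickets:          # replayed or unknown ticket
        return False
    issued_tickets.discard(ticket)            # tickets are strictly single-use
    expected = client_digest(ticket, stored_password, message)
    return hmac.compare_digest(expected, digest)


t = issue_ticket()
d = client_digest(t, "s3cret", "GET /report?id=7")
print(server_verify(t, "GET /report?id=7", d, "s3cret"))   # True
print(server_verify(t, "GET /report?id=7", d, "s3cret"))   # False: ticket already used
```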

  16. UNIX based client/server hospital information system.

    PubMed

    Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N

    1995-01-01

    SMILE (St. Luke's Medical Center Information Linkage Environment) is a HIS which is a client/server system using a UNIX workstation under an open network, LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g., a TP monitor) as the mainframe. So, our system's problems and the steps used to solve them were reviewed. Several points that will be necessary for a client/server system with a UNIX workstation in the future are presented.

  17. PROMALS web server for accurate multiple protein sequence alignments.

    PubMed

    Pei, Jimin; Kim, Bong-Hyun; Tang, Ming; Grishin, Nick V

    2007-07-01

    Multiple sequence alignments are essential in homology inference, structure modeling, functional prediction and phylogenetic analysis. We developed a web server that constructs multiple protein sequence alignments using PROMALS, a progressive method that improves alignment quality by using additional homologs from PSI-BLAST searches and secondary structure predictions from PSIPRED. PROMALS shows higher alignment accuracy than other advanced methods, such as MUMMALS, ProbCons, MAFFT and SPEM. The PROMALS web server takes FASTA format protein sequences as input. The output includes a colored alignment augmented with information about sequence grouping, predicted secondary structures and positional conservation. The PROMALS web server is available at: http://prodata.swmed.edu/promals/ PMID:17452345

  18. Sausalito: An Application Server for RESTful Services in the Cloud

    NASA Astrophysics Data System (ADS)

    Brantner, Matthias

    This talk argues that the Web server, application server, and database system should be bundled into a single system for the development and deployment of Web-based applications in the cloud. Furthermore, it argues that the whole system should serve REST services and should behave like a REST service itself. The design and implementation of Sausalito are presented; Sausalito is a combined Web, application, and database server that operates on top of Amazon's cloud offerings. Furthermore, a demo of several example applications is given that shows the advantages of the approach taken by Sausalito (see http://sausalito.28msec.com/).

  19. [Server World-Wide Web on the Internet for the provision of clinical cases and digital radiologic images for training and continuing education in radiology].

    PubMed

    Sparacia, G; Tartamella, M; Finazzo, M; Bartolotta, T; Brancatelli, G; Banco, A; Lo Casto, A; La Tona, G; Bentivegna, E

    1997-06-01

    The Internet, as a global computer network, provides opportunities to make multimedia educational materials, such as teaching files and image databases, available so that they can be accessed with a World-Wide Web client browser for continuing medical education. Since August 1995, at the Institute of Radiology, University of Palermo, we have been developing a World-Wide Web server on the Internet to provide a collection of interactive radiology educational resources, such as teaching files and an image database, for continuing medical education in radiology. Our server is based on a UNIX workstation connected to the Internet via our campus Ethernet network and reachable at the uniform resource locator (URL) address http://mbox.unipa.it/~radpa/radpa.html. Digital CT and MR images for the teaching files and image database are downloaded through an Ethernet local area network from a GE Advantage Windows workstation. US images will be acquired on-line through a video digitizing board. Radiographs will be digitized by means of a charge-coupled device (CCD) scanner. To set up the teaching files, image database and all other documents, we use the standard HyperText Markup Language (HTML) to edit the documents, and the Graphics Interchange Format (GIF) or Joint Photographic Experts Group (JPEG) format to store the images. Nine teaching files are presently available on the server, together with 49 images in the database, a list of international radiological servers, a section devoted to the museum of radiology hosted by our Institute, and the electronic version of the journal Eido Electa. In the first 12 months of public access through the Internet, 12,280 users accessed the server worldwide: 45% of them to retrieve teaching files, 35% to retrieve images from the database, and the remaining 20% to retrieve other documents. Placing teaching files and an image database on a World-Wide Web server makes these cases more available to residents and radiologists to provide continuing medical

  20. Access Control of Web- and Java-Based Applications

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control of applications is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.

  1. Automated Computer Access Request System

    NASA Technical Reports Server (NTRS)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  2. VLDP web server: a powerful geometric tool for analysing protein structures in their environment.

    PubMed

    Esque, Jérémy; Léonard, Sylvain; de Brevern, Alexandre G; Oguey, Christophe

    2013-07-01

    Protein structures are ensembles of atoms determined experimentally, mostly by X-ray crystallography or nuclear magnetic resonance. Studying 3D protein structures is a key point for better understanding protein function at the molecular level. We propose a set of accurate tools for analysing protein structures, based on the reliable method of Voronoi-Laguerre tessellations. The Voronoi Laguerre Delaunay Protein web server (VLDPws) computes the Laguerre tessellation on the whole given system, first embedded in solvent. Through this fine description, VLDPws gives the following data: (i) amino acid volumes evaluated with high precision, as confirmed by good correlations with experimental data; (ii) a novel definition of inter-residue contacts within the given protein; (iii) a measure of the residue exposure to solvent that significantly improves the standard notion of accessibility in some cases. At present, no equivalent web server is available. VLDPws provides output in two complementary forms: direct visualization of the Laguerre tessellation, mostly its polygonal molecular surfaces; and files of volumes, areas, contacts and similar data for each residue and each atom. These files are available for download for further analysis. VLDPws can be accessed at http://www.dsimb.inserm.fr/dsimb_tools/vldp. PMID:23761450
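
    For intuition about tessellation-based volumes, the sketch below computes per-point cell volumes with SciPy's unweighted Voronoi diagram on random fake coordinates; VLDP uses a Laguerre (radius-weighted) tessellation, which SciPy does not provide, so this is only a rough approximation of the idea, restricted to bounded cells.

```python
# Rough approximation only: per-point Voronoi cell volumes (unweighted, bounded cells).
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(42)
atoms = rng.uniform(0.0, 10.0, size=(60, 3))       # fake atomic coordinates (angstroms)

vor = Voronoi(atoms)
volumes = {}
for atom_index, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) < 4:            # open cell at the boundary: skip
        continue
    volumes[atom_index] = ConvexHull(vor.vertices[region]).volume

if volumes:
    mean_volume = np.mean(list(volumes.values()))
    print(f"{len(volumes)} bounded cells, mean volume {mean_volume:.2f} A^3")
```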

  3. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  4. FAF-Drugs3: a web server for compound property calculation and chemical library design.

    PubMed

    Lagorce, David; Sperandio, Olivier; Baell, Jonathan B; Miteva, Maria A; Villoutreix, Bruno O

    2015-07-01

    Drug attrition late in preclinical or clinical development is a serious economic problem in the field of drug discovery. These problems can be linked, in part, to the quality of the compound collections used during the hit generation stage and to the selection of compounds undergoing optimization. Here, we present FAF-Drugs3, a web server that can be used for drug discovery and chemical biology projects to help in preparing compound libraries and to assist decision-making during the hit selection/lead optimization phase. Since it was first described in 2006, FAF-Drugs has been significantly modified. The tool now applies an enhanced structure curation procedure, can filter or analyze molecules with user-defined or eight predefined physicochemical filters as well as with several simple ADMET (absorption, distribution, metabolism, excretion and toxicity) rules. In addition, compounds can be filtered using an updated list of 154 hand-curated structural alerts while Pan Assay Interference compounds (PAINS) and other, generally unwanted groups are also investigated. FAF-Drugs3 offers access to user-friendly html result pages and the possibility to download all computed data. The server requires as input an SDF file of the compounds; it is open to all users and can be accessed without registration at http://fafdrugs3.mti.univ-paris-diderot.fr.
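
    The kind of physicochemical filtering described above can be sketched with RDKit; the snippet below is not the FAF-Drugs3 implementation, and the Lipinski/Veber-style cutoffs and input file name are illustrative assumptions only.

      # A minimal sketch of library filtering of the kind described above, using
      # RDKit rather than FAF-Drugs3 itself; the cutoffs are illustrative
      # Lipinski/Veber-style values, not the server's predefined filters.
      from rdkit import Chem
      from rdkit.Chem import Descriptors

      def passes_filters(mol) -> bool:
          return (Descriptors.MolWt(mol) <= 500
                  and Descriptors.MolLogP(mol) <= 5
                  and Descriptors.NumHDonors(mol) <= 5
                  and Descriptors.NumHAcceptors(mol) <= 10
                  and Descriptors.TPSA(mol) <= 140)

      def filter_sdf(path: str) -> list:
          kept = []
          for mol in Chem.SDMolSupplier(path):   # input SDF, as the server expects
              if mol is not None and passes_filters(mol):
                  kept.append(Chem.MolToSmiles(mol))
          return kept

      # kept = filter_sdf("compounds.sdf")       # hypothetical input file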

  5. VLDP web server: a powerful geometric tool for analysing protein structures in their environment.

    PubMed

    Esque, Jérémy; Léonard, Sylvain; de Brevern, Alexandre G; Oguey, Christophe

    2013-07-01

    Protein structures are an ensemble of atoms determined experimentally mostly by X-ray crystallography or Nuclear Magnetic Resonance. Studying 3D protein structures is a key point for better understanding protein function at a molecular level. We propose a set of accurate tools for analysing protein structures, based on the reliable method of Voronoi-Laguerre tessellations. The Voronoi Laguerre Delaunay Protein web server (VLDPws) computes the Laguerre tessellation on a whole given system first embedded in solvent. Through this fine description, VLDPws gives the following data: (i) Amino acid volumes evaluated with high precision, as confirmed by good correlations with experimental data. (ii) A novel definition of inter-residue contacts within the given protein. (iii) A measure of the residue exposure to solvent that significantly improves the standard notion of accessibility in some cases. At present, no equivalent web server is available. VLDPws provides output in two complementary forms: direct visualization of the Laguerre tessellation, mostly its polygonal molecular surfaces; and files of volumes, areas, contacts and similar data for each residue and each atom. These files are available for download for further analysis. VLDPws can be accessed at http://www.dsimb.inserm.fr/dsimb_tools/vldp.

  6. FAF-Drugs3: a web server for compound property calculation and chemical library design

    PubMed Central

    Lagorce, David; Sperandio, Olivier; Baell, Jonathan B.; Miteva, Maria A.; Villoutreix, Bruno O.

    2015-01-01

    Drug attrition late in preclinical or clinical development is a serious economic problem in the field of drug discovery. These problems can be linked, in part, to the quality of the compound collections used during the hit generation stage and to the selection of compounds undergoing optimization. Here, we present FAF-Drugs3, a web server that can be used for drug discovery and chemical biology projects to help in preparing compound libraries and to assist decision-making during the hit selection/lead optimization phase. Since it was first described in 2006, FAF-Drugs has been significantly modified. The tool now applies an enhanced structure curation procedure, can filter or analyze molecules with user-defined or eight predefined physicochemical filters as well as with several simple ADMET (absorption, distribution, metabolism, excretion and toxicity) rules. In addition, compounds can be filtered using an updated list of 154 hand-curated structural alerts while Pan Assay Interference compounds (PAINS) and other, generally unwanted groups are also investigated. FAF-Drugs3 offers access to user-friendly html result pages and the possibility to download all computed data. The server requires as input an SDF file of the compounds; it is open to all users and can be accessed without registration at http://fafdrugs3.mti.univ-paris-diderot.fr. PMID:25883137

  7. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264

  8. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
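
    Programmatic access of the kind described above might look like the following Python sketch; the endpoint path and query parameter names are assumptions made for illustration, so the real interface documented at http://rna.bgsu.edu/r3d-2-msa should be consulted before use.

      # Sketch of programmatic use of the kind of API described above: submit a
      # structure-based nucleotide range and read back the aligned columns as JSON.
      # The endpoint path and parameter names are assumptions for illustration;
      # consult http://rna.bgsu.edu/r3d-2-msa for the real interface.
      import requests

      def fetch_alignment_columns(pdb_id: str, chain: str, ranges: str) -> dict:
          params = {"pdb": pdb_id, "chain": chain, "ranges": ranges, "format": "json"}
          resp = requests.get("http://rna.bgsu.edu/r3d-2-msa", params=params, timeout=30)
          resp.raise_for_status()
          return resp.json()

      # columns = fetch_alignment_columns("4V9F", "0", "2644:2650")  # hypothetical query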

  9. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    SciTech Connect

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  10. PhenoHM: human–mouse comparative phenome–genome server

    PubMed Central

    Sardana, Divya; Vasa, Suresh; Vepachedu, Nishanth; Chen, Jing; Gudivada, Ranga Chandra; Aronow, Bruce J.; Jegga, Anil G.

    2010-01-01

    PhenoHM is a human–mouse comparative phenome–genome server that facilitates cross-species identification of genes associated with orthologous phenotypes (http://phenome.cchmc.org; full open access, login not required). Combining and extrapolating the knowledge about the roles of individual gene functions in the determination of phenotype across multiple organisms improves our understanding of gene function in normal and perturbed states and offers the opportunity to complement biologically the rapidly expanding strategies in comparative genomics. The Mammalian Phenotype Ontology (MPO), a structured vocabulary of phenotype terms that leverages observations encompassing the consequences of mouse gene knockout studies, is a principal component of the mouse phenotype knowledge source. On the other hand, the Unified Medical Language System (UMLS) is a composite collection of various human-centered biomedical terminologies. In the present study, we mapped terms reciprocally from the MPO to human disease concepts such as clinical findings from the UMLS and clinical phenotypes from the Online Mendelian Inheritance in Man knowledgebase. By cross-mapping mouse–human phenotype terms, extracting implicated genes and extrapolating phenotype-gene associations between species, PhenoHM provides a resource that enables rapid identification of genes that trigger similar outcomes in human and mouse and facilitates identification of potentially novel disease causal genes. The PhenoHM server can be accessed freely at http://phenome.cchmc.org. PMID:20507906

  11. MY NASA DATA: Making Earth Science Data Accessible to the K-12 Community

    NASA Astrophysics Data System (ADS)

    Chambers, L. H.; Alston, E. J.; Diones, D. D.; Moore, S. W.; Oots, P. C.; Phelps, C. S.

    2006-12-01

    In 2004, the Mentoring and inquirY using NASA Data on Atmospheric and Earth science for Teachers and Amateurs (MY NASA DATA) project began. The goal of this project is to enable K-12 and citizen science communities to make use of the large volume of Earth System Science data that NASA has collected and archived. One major outcome is to allow students to select a problem of real-life importance, and to explore it using high quality data sources without spending months looking for and then learning how to use a dataset. The key element of the MY NASA DATA project is the implementation of a Live Access Server (LAS). The LAS is an open source software tool, developed by NOAA, that provides access to a variety of data sources through a single, fairly simple, point-and-click interface. This tool truly enables use of the available data - more than 100 parameters are offered so far - in an inquiry-based educational setting. It readily gives students the opportunity to browse images for times and places they define, and also provides direct access to the underlying data values - a key feature of this educational effort. The team quickly discovered, however, that even a simple and fairly intuitive tool is not enough to make most teachers comfortable with data exploration. User feedback has led us to create a friendly LAS Introduction page, which uses the analogy of a restaurant to explain to our audience the basic concept of an LAS. In addition, we have created a "Time Coverage at a Glance" chart to show what data are available when. This keeps our audience from being too confused by the patchwork of data availability caused by the start and end of individual missions. Finally, we have found it necessary to develop a substantial amount of age-appropriate documentation, including topical pages and a science glossary, to help our audience understand the parameters they are exploring and how these parameters fit into the larger picture of Earth System Science. MY NASA DATA

  12. REPK: an analytical web server to select restriction endonucleases for terminal restriction fragment length polymorphism analysis.

    PubMed

    Collins, Roy Eric; Rocap, Gabrielle

    2007-07-01

    Terminal restriction fragment length polymorphism (T-RFLP) analysis is a widespread technique for rapidly fingerprinting microbial communities. Users of T-RFLP frequently overlook the resolving power of well-chosen restriction endonucleases and often fail to report how they chose their enzymes. REPK (Restriction Endonuclease Picker) assists in the rational choice of restriction endonucleases for T-RFLP by finding sets of four restriction endonucleases that together uniquely differentiate user-designated sequence groups. With REPK, users can provide their own sequences (of any gene, not just 16S rRNA), specify the taxonomic rank of interest and choose from a number of filtering options to further narrow down the enzyme selection. Bug tracking is provided, and the source code is open and accessible under the GNU Public License v.2, at http://code.google.com/p/repk. The web server is available without access restrictions at http://rocaplab.ocean.washington.edu/tools/repk.
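
    The core T-RFLP idea behind such a selection, that a set of enzymes differentiates sequence groups when their terminal fragment length signatures never coincide, can be sketched as follows; this is not REPK's code, and the recognition sites and brute-force search are simplified for illustration.

      # Sketch of the core T-RFLP idea behind enzyme selection (not REPK's code):
      # the terminal fragment length for an enzyme is the position of its first
      # recognition site in the (5'-labelled) sequence, and a set of enzymes
      # differentiates two groups when their length signatures never coincide.
      from itertools import combinations

      ENZYMES = {  # illustrative recognition sites
          "HhaI": "GCGC", "MspI": "CCGG", "RsaI": "GTAC",
          "AluI": "AGCT", "HaeIII": "GGCC", "MseI": "TTAA",
      }

      def terminal_fragment(seq: str, site: str) -> int:
          pos = seq.find(site)
          return pos if pos >= 0 else len(seq)   # uncut sequences report full length

      def signature(seq: str, enzymes) -> tuple:
          return tuple(terminal_fragment(seq, ENZYMES[e]) for e in enzymes)

      def differentiating_sets(groups: dict, k: int = 4):
          """Yield enzyme sets whose fragment-length signatures are unique per group."""
          for combo in combinations(ENZYMES, k):
              sigs = {g: {signature(s, combo) for s in seqs} for g, seqs in groups.items()}
              if all(sigs[a].isdisjoint(sigs[b]) for a, b in combinations(sigs, 2)):
                  yield combo

      # groups = {"cladeA": ["ATGCGC...", ...], "cladeB": ["ATCCGG...", ...]}
      # print(next(differentiating_sets(groups), None))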

  13. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    PubMed

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually--a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity in graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures because there are many different protocols for drawing a 2D image of a ligand, but more importantly a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow the users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The

  14. How to secure your servers, code and data

    ScienceCinema

    None

    2016-07-12

    Oral presentation in English, slides in English. Advice and best practices regarding the security of your servers, code and data will be presented. We will also describe how the Computer Security Team can help you reduce the risks.

  15. Reviews of computing technology: Client-server technology

    SciTech Connect

    Johnson, S.M.

    1990-09-01

    One of the most frequently heard terms in the computer industry these days is "client-server." There is much misinformation available on the topic, and competitive pressures on software vendors have led to a great deal of hype with little in the way of supporting products. The purpose of this document is to explain what is meant by client-server applications, why the Advanced Technology and Architecture (ATA) section of the Information Resources Management (IRM) Department sees this emerging technology as key for computer applications during the next ten years, and what ATA sees as the existing standards and products available today. Because of the relative immaturity of existing client-server products, IRM is not yet guidelining any specific client-server products, except those that are components of guidelined data communications products or database management systems.

  16. Reviews of computing technology: Client-server technology

    SciTech Connect

    Johnson, S.M.

    1990-09-01

    One of the most frequently heard terms in the computer industry these days is "client-server." There is much misinformation available on the topic, and competitive pressures on software vendors have led to a great deal of hype with little in the way of supporting products. The purpose of this document is to explain what is meant by client-server applications, why the Advanced Technology and Architecture (ATA) section of the Information Resources Management (IRM) Department sees this emerging technology as key for computer applications during the next ten years, and what ATA sees as the existing standards and products available today. Because of the relative immaturity of existing client-server products, IRM is not yet guidelining any specific client-server products, except those that are components of guidelined data communications products or database management systems.

  17. Building a Library Web Server on a Budget.

    ERIC Educational Resources Information Center

    Orr, Giles

    1998-01-01

    Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)

  18. How to secure your servers, code and data

    SciTech Connect

    2010-06-24

    Oral presentation in English, slides in English. Advice and best practices regarding the security of your servers, code and data will be presented. We will also describe how the Computer Security Team can help you reduce the risks.

  19. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    SciTech Connect

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
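
    The efficiency metric defined above, average compute rate divided by average power, reduces to a one-line calculation; the numbers in the sketch below are made up and are not results from the report.

      # Minimal illustration of the efficiency metric used above: average compute
      # rate divided by average power draw. The numbers are invented, not results
      # from the demonstration.
      def efficiency(operations_completed: float, elapsed_s: float, avg_power_w: float) -> float:
          compute_rate = operations_completed / elapsed_s   # operations per second
          return compute_rate / avg_power_w                 # operations per joule

      print(efficiency(operations_completed=1.2e9, elapsed_s=600, avg_power_w=350))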

  20. Tiled WMS/KML Server V2

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2012-01-01

    This software is a higher-performance implementation of tiled WMS, with integral support for KML and time-varying data. This software is compliant with the Open Geospatial Consortium (OGC) WMS standard, and supports KML natively as a WMS return type, including support for the time attribute. Regionated KML wrappers are generated that match the existing tiled WMS dataset. PNG and JPEG formats are supported, and the software is implemented as an Apache 2.0 module that supports a threading execution model that is capable of supporting very high request rates. The module intercepts and responds to WMS requests that match certain patterns and returns the existing tiles. If a KML format that matches an existing pyramid and tile dataset is requested, regionated KML is generated and returned to the requesting application. In addition, KML requests that do not match the existing tile datasets generate a KML response that includes the corresponding JPEG WMS request, effectively adding KML support to a backing WMS server.
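
    A request answered by such a server follows the standard OGC WMS GetMap keyword-value convention, including the optional TIME attribute for time-varying data; in the sketch below the host name and layer name are hypothetical placeholders.

      # A standard OGC WMS GetMap request of the kind such a tiled server answers;
      # the host and layer name are hypothetical, but the query parameters
      # (including the optional TIME attribute) follow the WMS specification.
      from urllib.parse import urlencode

      params = {
          "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
          "LAYERS": "global_mosaic",            # hypothetical layer name
          "STYLES": "", "SRS": "EPSG:4326",
          "BBOX": "-180,-90,180,90", "WIDTH": 512, "HEIGHT": 256,
          "FORMAT": "image/jpeg",
          "TIME": "2012-01-01",                 # time-varying datasets
      }
      url = "https://tiles.example.org/wms?" + urlencode(params)
      print(url)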

  1. Anonymization server system for DICOM images

    NASA Astrophysics Data System (ADS)

    Suzuki, H.; Amano, M.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.

    2007-03-01

    We have developed an anonymization system for DICOM images. Consent from the patient is required to use DICOM images for research or education; however, providing the DICOM images to other facilities is not safe because they contain a large amount of personal data. Our system is a server that provides an anonymization service for DICOM images to users in the facility. The distinctive features of the system are its input interface, flexible anonymization policy, and automatic body part identification. With the first feature, the anonymization service can be used from existing DICOM workstations. With the second feature, the policy that best fits the personal-data protection rules of each medical facility can be selected. With the third feature, the body parts included in the input image set can be identified even if the set lacks the body part tag in the DICOM header. We first installed the system in a hospital in December 2005. Currently, the system is working in four other facilities. In this paper we describe the system and how it works.
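
    Tag-level anonymization of the kind performed by such a service can be sketched with the pydicom library; this is not the authors' system, and the list of identifying tags below is illustrative rather than a complete de-identification policy.

      # Minimal sketch of DICOM header anonymization using pydicom (not the
      # authors' server); the tag list is illustrative, and a real policy would
      # be far more complete and site-specific.
      import pydicom

      IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                          "PatientAddress", "InstitutionName", "ReferringPhysicianName"]

      def anonymize(in_path: str, out_path: str) -> None:
          ds = pydicom.dcmread(in_path)
          for tag in IDENTIFYING_TAGS:
              if hasattr(ds, tag):
                  setattr(ds, tag, "")          # blank the identifying value
          ds.remove_private_tags()              # drop vendor-specific private tags
          ds.save_as(out_path)

      # anonymize("input.dcm", "anon.dcm")      # hypothetical file names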

  2. Web Server Security on Open Source Environments

    NASA Astrophysics Data System (ADS)

    Gkoutzelis, Dimitrios X.; Sardis, Manolis S.

    Administering critical resources has never been more difficult that it is today. In a changing world of software innovation where major changes occur on a daily basis, it is crucial for the webmasters and server administrators to shield their data against an unknown arsenal of attacks in the hands of their attackers. Up until now this kind of defense was a privilege of the few, out-budgeted and low cost solutions let the defender vulnerable to the uprising of innovating attacking methods. Luckily, the digital revolution of the past decade left its mark, changing the way we face security forever: open source infrastructure today covers all the prerequisites for a secure web environment in a way we could never imagine fifteen years ago. Online security of large corporations, military and government bodies is more and more handled by open source application thus driving the technological trend of the 21st century in adopting open solutions to E-Commerce and privacy issues. This paper describes substantial security precautions in facing privacy and authentication issues in a totally open source web environment. Our goal is to state and face the most known problems in data handling and consequently propose the most appealing techniques to face these challenges through an open solution.

  3. EarthServer: an Intercontinental Collaboration on Petascale Datacubes

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Rossi, A. P.

    2015-12-01

    With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data there is a rich, yet largely unleveraged, potential for getting insights from dissecting datasets and rejoining them with other datasets. Obviously, the goal is to allow users to "ask any question, any time", thereby enabling them to "build their own product on the go". One of the most influential initiatives in Big Geo Data is EarthServer, which has demonstrated new directions for flexible, scalable EO services based on innovative NewSQL technology. Researchers from Europe, the US and recently Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently from whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman, enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS) including the Web Coverage Processing Service (WCPS). Conversely, EarthServer has significantly shaped and advanced the OGC Big Geo Data standards landscape based on the experience gained. Phase 1 of EarthServer has advanced scalable array database technology into 100+ TB services; in phase 2, Petabyte datacubes will be built in Europe and Australia to perform ad-hoc querying and merging. Standing between EarthServer phase 1 (from 2011 through 2014) and phase 2 (from 2015 through 2018) we present the main results and outline the impact on the international standards landscape; effectively, the Big Geo Data standards established through initiative of
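
    Queries against such datacube services are written in the OGC Web Coverage Processing Service (WCPS) language mentioned above; the sketch below shows the general shape of a trim-and-encode query, with a hypothetical endpoint URL and coverage name.

      # A WCPS query of the kind such datacube services answer; the coverage name
      # and endpoint are hypothetical placeholders. The query trims a space/time
      # subset and asks the server to encode the result as NetCDF.
      wcps_query = """
      for c in (SatelliteImageTimeseries)
      return encode(
          c[ansi("2015-01-01":"2015-12-31"), Lat(40:45), Long(10:15)],
          "netcdf")
      """

      # The query would typically be submitted as a WCS ProcessCoverages request,
      # e.g. with the requests library (endpoint below is hypothetical):
      # requests.post("https://datacube.example.org/rasdaman/ows",
      #               data={"service": "WCS", "version": "2.0.1",
      #                     "request": "ProcessCoverages", "query": wcps_query})
      print(wcps_query.strip())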

  4. An Array Library for Microsoft SQL Server with Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory

  5. Web-based Access to Locally Developed Databases.

    ERIC Educational Resources Information Center

    Mischo, William H.; Schlembach, Mary C.

    1999-01-01

    Describes the Web-based technologies employed by the Grainger Engineering Library Information Center at the University of Illinois, Urbana-Champaign in implementing access to local information resources. Discusses Microsoft Active Server Pages (ASP) technologies and the associated local database structure and format, as well as the general…

  6. File Access Optimization with the Lustre Filesystem at Florida CMS T2

    NASA Astrophysics Data System (ADS)

    Avery, P.; Bourilkov, D.; Fu, Y.; Kim, B.

    2015-12-01

    The Florida CMS Tier2 center, one of the CMS Tier2 centers, has been using the Lustre filesystem for its data storage backend system since 2004. Recently, the data access pattern at our site has changed greatly due to various new access methods that include file transfers through the GridFTP servers, read access from the worker nodes, and the remote read access through the xrootd servers. In order to optimize the file access performance, we have to consider all the possible access patterns and each pattern needs to be studied separately. In this presentation, we report on our work to optimize file access with the Lustre filesystem at the Florida CMS T2 using an approach based on analyzing these access patterns.

  7. Client-server, distributed database strategies in a health-care record system for a homeless population.

    PubMed Central

    Chueh, H C; Barnett, G O

    1994-01-01

    OBJECTIVE: To design and develop a computer-based health-care record system to address the needs of the patients and providers of a homeless population. DESIGN: A computer-based health-care record system being developed for Boston's Healthcare for the Homeless Program (BHCHP) uses client-server technology and distributed database strategies to provide a common medical record for this transient population. The differing information requirements of physicians, nurses, and social workers are specifically addressed in the graphic application interface to facilitate an integrated approach to health care. This computer-based record system is designed for remote and portable use to integrate smoothly into the daily practice of providers of care to the homeless. The system uses remote networking technology and regular phone lines to support multiple concurrent users at remote sites of care. RESULTS: A stand-alone, pilot system is in operation at the BHCHP medical respite unit. Information on 129 patient encounters from 37 unique sites has been entered. A full client-server system has been designed. Benchmarks show that while the relative performance of a communication link based upon a phone line is 0.07 to 0.15 that of a local area network, optimization permits adequate response. CONCLUSION: Medical records access in a transient population poses special problems. Use of client-server and distributed database strategies can provide a technical foundation that provides a secure, reliable, and accessible computer-based medical record in this environment. PMID:7719799

  8. Miniaturized Airborne Imaging Central Server System

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong

    2011-01-01

    In recent years, some remote-sensing applications require advanced airborne multi-sensor systems to provide high performance reflective and emissive spectral imaging measurement rapidly over large areas. The key challenge is associated with a black-box back-end system that operates a suite of cutting-edge imaging sensors to collect simultaneously the high throughput reflective and emissive spectral imaging data with precision georeference. This back-end system needs to be portable, easy-to-use, and reliable with advanced onboard processing. The innovation of the black-box back end is a miniaturized airborne imaging central server system (MAICSS). MAICSS integrates a complex embedded system of systems with dedicated power and signal electronic circuits inside to serve a suite of configurable cutting-edge electro-optical (EO), long-wave infrared (LWIR), and medium-wave infrared (MWIR) cameras, a hyperspectral imaging scanner, and a GPS and inertial measurement unit (IMU) for atmospheric and surface remote sensing. Its compatible sensor packages include NASA's 1,024 x 1,024-pixel LWIR quantum well infrared photodetector (QWIP) imager; a 60.5 megapixel BuckEye EO camera; and a fast (e.g. 200+ scanlines/s) and wide swath-width (e.g., 1,920+ pixels) CCD/InGaAs imager-based visible/near infrared reflectance (VNIR) and shortwave infrared (SWIR) imaging spectrometer. MAICSS records continuous precision georeferenced and time-tagged multisensor throughputs to mass storage devices at a high aggregate rate, typically 60 MB/s for its LWIR/EO payload. MAICSS is a complete stand-alone imaging server instrument with an easy-to-use software package for either autonomous data collection or interactive airborne operation. Advanced multisensor data acquisition and onboard processing software features have been implemented for MAICSS. With the onboard processing for real time image development, correction, histogram-equalization, compression, georeference, and

  9. Web Accessibility and Accessibility Instruction

    ERIC Educational Resources Information Center

    Green, Ravonne A.; Huprich, Julia

    2009-01-01

    Section 508 of the Americans with Disabilities Act (ADA) mandates that programs and services be accessible to people with disabilities. While schools of library and information science (SLIS*) and university libraries should model accessible Web sites, this may not be the case. This article examines previous studies about the Web accessibility of…

  10. B-Pred, a structure based B-cell epitopes prediction server.

    PubMed

    Giacò, Luciano; Amicosante, Massimo; Fraziano, Maurizio; Gherardini, Pier Federico; Ausiello, Gabriele; Helmer-Citterich, Manuela; Colizzi, Vittorio; Cabibbo, Andrea

    2012-01-01

    The ability to predict immunogenic regions in selected proteins by in-silico methods has broad implications, such as allowing a quick selection of potential reagents to be used as diagnostics, vaccines, immunotherapeutics, or research tools in several branches of biological and biotechnological research. However, the prediction of antibody target sites in proteins using computational methodologies has proven to be a highly challenging task, which is likely due to the somewhat elusive nature of B-cell epitopes. This paper proposes a web-based platform for scoring potential immunological reagents based on the structures or 3D models of the proteins of interest. The method scores a protein's peptides set, which is derived from a sliding window, based on the average solvent exposure, with a filter on the average local model quality for each peptide. The platform was validated on a custom-assembled database of 1336 experimentally determined epitopes from 106 proteins for which a reliable 3D model could be obtained through standard modeling techniques. Despite showing poor sensitivity, this method can achieve a specificity of 0.70 and a positive predictive value of 0.29 by combining these two simple parameters. These values are slightly higher than those obtained with other established sequence-based or structure-based methods that have been evaluated using the same epitopes dataset. This method is implemented in a web server called B-Pred, which is accessible at http://immuno.bio.uniroma2.it/bpred. The server contains a number of original features that allow users to perform personalized reagent searches by manipulating the sliding window's width and sliding step, changing the exposure and model quality thresholds, and running sequential queries with different parameters. The B-Pred server should assist experimentalists in the rational selection of epitope antigens for a wide range of applications. PMID:22888263
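
    The sliding-window scoring idea described above, averaging per-residue solvent exposure and filtering on local model quality, can be sketched in a few lines; this is an illustration rather than the B-Pred implementation, and the window size and threshold are arbitrary.

      # Sketch of the sliding-window scoring idea described above (not the B-Pred
      # code): average per-residue solvent exposure over each window, keeping only
      # windows whose average local model quality passes a threshold.
      def score_peptides(exposure, quality, window=9, step=1, min_quality=0.5):
          """exposure and quality are per-residue lists of equal length."""
          results = []
          for start in range(0, len(exposure) - window + 1, step):
              win_quality = sum(quality[start:start + window]) / window
              if win_quality < min_quality:
                  continue                          # filter on local model quality
              win_score = sum(exposure[start:start + window]) / window
              results.append((start, start + window, win_score))
          return sorted(results, key=lambda r: r[2], reverse=True)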

  11. PredPlantPTS1: A Web Server for the Prediction of Plant Peroxisomal Proteins.

    PubMed

    Reumann, Sigrun; Buchwald, Daniela; Lingner, Thomas

    2012-01-01

    Prediction of subcellular protein localization is essential to correctly assign unknown proteins to cell organelle-specific protein networks and to ultimately determine protein function. For metazoa, several computational approaches have been developed in the past decade to predict peroxisomal proteins carrying the peroxisome targeting signal type 1 (PTS1). However, plant-specific PTS1 protein prediction methods have been lacking up to now, and pre-existing methods generally were incapable of correctly predicting low-abundance plant proteins possessing non-canonical PTS1 patterns. Recently, we presented a machine learning approach that is able to predict PTS1 proteins for higher plants (spermatophytes) with high accuracy and which can correctly identify unknown targeting patterns, i.e., novel PTS1 tripeptides and tripeptide residues. Here we describe the first plant-specific web server PredPlantPTS1 for the prediction of plant PTS1 proteins using the above-mentioned underlying models. The server allows the submission of protein sequences from diverse spermatophytes and also performs well for mosses and algae. The easy-to-use web interface provides detailed output in terms of (i) the peroxisomal targeting probability of the given sequence, (ii) information whether a particular non-canonical PTS1 tripeptide has already been experimentally verified, and (iii) the prediction scores for the single C-terminal 14 amino acid residues. The latter allows identification of predicted residues that inhibit peroxisome targeting and which can be optimized using site-directed mutagenesis to raise the peroxisome targeting efficiency. The prediction server will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants. PredPlantPTS1 is freely accessible at ppp.gobics.de.
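
    A naive baseline for PTS1 detection is a C-terminal tripeptide check against the commonly cited canonical consensus (S/A/C)(K/R/H)(L/M); the sketch below shows only that baseline, whereas the server itself uses a machine-learning model over the C-terminal 14 residues and also recognizes non-canonical tripeptides.

      # Sketch of a naive C-terminal tripeptide check against the commonly cited
      # canonical PTS1 consensus (S/A/C)(K/R/H)(L/M). This is only an illustration;
      # PredPlantPTS1 itself scores the C-terminal 14 residues with a learned model.
      CANONICAL_1, CANONICAL_2, CANONICAL_3 = set("SAC"), set("KRH"), set("LM")

      def has_canonical_pts1(protein_seq: str) -> bool:
          tripeptide = protein_seq[-3:].upper()
          return (len(tripeptide) == 3
                  and tripeptide[0] in CANONICAL_1
                  and tripeptide[1] in CANONICAL_2
                  and tripeptide[2] in CANONICAL_3)

      print(has_canonical_pts1("MAVQTTSSKL"))   # True: sequence ends in ...SKL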

  12. Filtering False Positives Based on Server-Side Behaviors

    NASA Astrophysics Data System (ADS)

    Shimamura, Makoto; Hanaoka, Miyuki; Kono, Kenji

    Reducing the rate of false positives is of vital importance in enhancing the usefulness of signature-based network intrusion detection systems (NIDSs). To reduce the number of false positives, a network administrator must thoroughly investigate a lengthy list of signatures and carefully disable the ones that detect attacks that are not harmful to the administrator's environment. This is a daunting task; if some signatures are disabled by mistake, the NIDS fails to detect critical remote attacks. We designed a NIDS, TrueAlarm, to reduce the rate of false positives. Conventional NIDSs alert administrators that a malicious message has been detected, regardless of whether the message actually attempts to compromise the protected server. In contrast, TrueAlarm delays the alert until it has confirmed that an attempt has been made. The TrueAlarm NIDS cooperates with a server-side monitor that observes the protected server's behavior. TrueAlarm only alerts administrators when a server-side monitor has detected deviant server behavior that must have been caused by a message detected by a NIDS. Our experimental results revealed that TrueAlarm reduces the rate of false positives. Using actual network traffic collected over 14 days, TrueAlarm produced 46 false positives, while Snort, a conventional NIDS, produced 818.
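
    The alert-confirmation idea described above, reporting a network-level detection only after the server-side monitor observes a matching behavioral deviation, can be sketched as follows; the correlation key and time window are assumptions, not TrueAlarm's actual design.

      # Sketch of the alert-confirmation idea described above (not the TrueAlarm
      # code): a network-level detection is reported only if the server-side
      # monitor observes a deviation for the same connection within a short window.
      from collections import defaultdict

      CONFIRM_WINDOW_S = 5.0                 # assumed correlation window

      pending = defaultdict(list)            # connection id -> [detection timestamps]

      def on_nids_detection(conn_id: str, ts: float) -> None:
          pending[conn_id].append(ts)        # delay the alert until confirmation

      def on_server_deviation(conn_id: str, ts: float) -> list:
          """Return confirmed alerts for this connection, if any."""
          confirmed = [t for t in pending.get(conn_id, [])
                       if 0 <= ts - t <= CONFIRM_WINDOW_S]
          if confirmed:
              del pending[conn_id]
          return confirmed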

  13. RosettaAntibody: antibody variable region homology modeling server.

    PubMed

    Sircar, Aroop; Kim, Eric T; Gray, Jeffrey J

    2009-07-01

    The RosettaAntibody server (http://antibody.graylab.jhu.edu) predicts the structure of an antibody variable region given the amino-acid sequences of the respective light and heavy chains. In an initial stage, the server identifies and displays the most sequence homologous template structures for the light and heavy framework regions and each of the complementarity determining region (CDR) loops. Subsequently, the most homologous templates are assembled into a side-chain optimized crude model, and the server returns a picture and coordinate file. For users requesting a high-resolution model, the server executes the full RosettaAntibody protocol which additionally models the hyper-variable CDR H3 loop. The high-resolution protocol also relieves steric clashes by optimizing the CDR backbone torsion angles and by simultaneously perturbing the relative orientation of the light and heavy chains. RosettaAntibody generates 2000 independent structures, and the server returns pictures, coordinate files, and detailed scoring information for the 10 top-scoring models. The 10 models enable users to use rational judgment in choosing the best model or to use the set as an ensemble for further studies such as docking. The high-resolution models generated by RosettaAntibody have been used for the successful prediction of antibody-antigen complex structures.

  14. Performance measurements of single server fuzzy queues with unreliable server using left and right method

    NASA Astrophysics Data System (ADS)

    Mueen, Zeina; Ramli, Razamin; Zaibidi, Nerda Zura

    2015-12-01

    There are a number of real-life systems that can be described as queuing systems, and this paper presents a queuing model applied to a manufacturing system example. The queuing model considered is set in a fuzzy environment with retrial queues and an unreliable server. The stability condition of this model is investigated and the performance measures are obtained by adopting the left and right method. The new approach adopted in this study merges the existing α-cut interval and nonlinear programming techniques, and a numerical example is considered to explain the methodology. From the numerical example, the flexibility of the method is shown graphically, yielding the exact mean number of customers in the system and the expected waiting times.
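
    The left-and-right (lower/upper bound) idea behind α-cut interval arithmetic can be illustrated on the plain M/M/1 mean-number-in-system formula L = rho / (1 - rho); the paper's model additionally involves retrials and an unreliable server, so the sketch below only demonstrates the interval computation itself.

      # Illustration of the left-and-right (lower/upper bound) idea behind
      # alpha-cut interval arithmetic, applied to the plain M/M/1 formula
      # L = rho / (1 - rho). The paper's model (retrials, unreliable server) is
      # more involved, so this only sketches the interval computation.
      def mm1_mean_in_system(lam: float, mu: float) -> float:
          rho = lam / mu
          assert rho < 1, "system must be stable"
          return rho / (1 - rho)

      def alpha_cut_bounds(lam_interval, mu_interval):
          """Left and right bounds of L over the alpha-cut intervals of lambda, mu."""
          lam_lo, lam_hi = lam_interval
          mu_lo, mu_hi = mu_interval
          left = mm1_mean_in_system(lam_lo, mu_hi)    # least congested corner
          right = mm1_mean_in_system(lam_hi, mu_lo)   # most congested corner
          return left, right

      print(alpha_cut_bounds((3.0, 4.0), (5.0, 6.0)))  # bounds at one alpha level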

  15. Security Mechanism Based on Hospital Authentication Server for Secure Application of Implantable Medical Devices

    PubMed Central

    2014-01-01

    After two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for the security of IMDs is proposed, based on a 3-Tier security model, where the programmer interacts with a Hospital Authentication Server to get permissions to access IMDs. The proposed security architecture greatly simplifies the key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works, in terms of security and performance. PMID:25276797

  16. Security mechanism based on Hospital Authentication Server for secure application of implantable medical devices.

    PubMed

    Park, Chang-Seop

    2014-01-01

    After two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for the security of IMDs is proposed, based on a 3-Tier security model, where the programmer interacts with a Hospital Authentication Server to get permissions to access IMDs. The proposed security architecture greatly simplifies the key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works, in terms of security and performance.

  17. Design and implementation of streaming media server cluster based on FFMpeg.

    PubMed

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
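
    The dispatch policy described above, locate the user's region first and then balance by actively reported load, can be sketched as follows; the server table and load values are invented for illustration and this is not the paper's exact algorithm.

      # Sketch of the two dispatch ideas described above (not the paper's exact
      # algorithm): prefer servers in the user's region, then pick the one with the
      # lowest load reported via active feedback.
      SERVERS = [
          {"name": "edge-east-1", "region": "east", "load": 0.42},
          {"name": "edge-east-2", "region": "east", "load": 0.81},
          {"name": "edge-west-1", "region": "west", "load": 0.15},
      ]

      def choose_server(user_region: str, servers=SERVERS) -> str:
          local = [s for s in servers if s["region"] == user_region]
          candidates = local or servers           # fall back to any region
          return min(candidates, key=lambda s: s["load"])["name"]

      print(choose_server("east"))                # -> edge-east-1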

  18. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    PubMed Central

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187

  19. PhenoDB: an integrated client/server database for linkage and population genetics.

    PubMed

    Cheung, K H; Nadkarni, P; Silverstein, S; Kidd, J R; Pakstis, A J; Miller, P; Kidd, K K

    1996-08-01

    In this paper we describe PhenoDB, an Internet-accessible client/server database application for population and linkage genetics. PhenoDB stores genetic marker data on pedigrees and populations. A database for population and linkage genetics requires two core functions: data management tasks, such as interactive validation during data entry and editing, and data analysis tasks, such as generating summary population statistics and performing linkage analyses. In PhenoDB we attempt to make these tasks as easy as possible. The client/server architecture allows efficient management and manipulation of large datasets via an easy-to-use graphical interface. PhenoDB data (73 populations, 34 pedigrees, approximately 4200 individuals, and close to 80,000 typings) are stored in a generic format that can be readily exported to (or imported from) the file formats required by various existing analysis programs such as LIPED and Lathrop and Lalouel's Multipoint Linkage. PhenoDB allows performance of complex ad-hoc queries and can generate reports for use in project management. Finally, PhenoDB can produce statistical summaries such as allele frequencies, phenotype frequencies, and Chi-square tests of Hardy-Weinberg ratios of population/pedigree data. PMID:8812078
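
    The Hardy-Weinberg chi-square summary mentioned above reduces, for a single biallelic marker, to comparing observed genotype counts with expectations derived from the allele frequency; the sketch below uses made-up counts.

      # Worked sketch of the Hardy-Weinberg chi-square summary mentioned above,
      # for a single biallelic marker (the observed genotype counts are made up).
      def hardy_weinberg_chi2(n_aa: int, n_ab: int, n_bb: int):
          n = n_aa + n_ab + n_bb
          p = (2 * n_aa + n_ab) / (2 * n)          # allele A frequency
          q = 1 - p
          expected = (p * p * n, 2 * p * q * n, q * q * n)
          observed = (n_aa, n_ab, n_bb)
          chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
          return p, chi2                           # compare chi2 to chi-square, 1 d.f.

      print(hardy_weinberg_chi2(60, 35, 5))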

  20. A Browser-Server-Based Tele-audiology System That Supports Multiple Hearing Test Modalities

    PubMed Central

    Yao, Daoyuan; Givens, Gregg

    2015-01-01

    Abstract Introduction: Millions of global citizens suffering from hearing disorders have limited or no access to much needed hearing healthcare. Although tele-audiology presents a solution to alleviate this problem, existing remote hearing diagnosis systems support only pure-tone tests, leaving speech and other test procedures unsolved, due to the lack of software and hardware to enable communication required between audiologists and their remote patients. This article presents a comprehensive remote hearing test system that integrates the two most needed hearing test procedures: a pure-tone audiogram and a speech test. Materials and Methods: This enhanced system is composed of a Web application server, an embedded smart Internet-Bluetooth® (Bluetooth SIG, Kirkland, WA) gateway (or console device), and a Bluetooth-enabled audiometer. Several graphical user interfaces and a relational database are hosted on the application server. The console device has been designed to support the tests and auxiliary communication between the local site and the remote site. Results: The study was conducted at an audiology laboratory. Pure-tone audiogram and speech test results from volunteers tested with this tele-audiology system are comparable with results from the traditional face-to-face approach. Conclusions: This browser-server–based comprehensive tele-audiology offers a flexible platform to expand hearing services to traditionally underserved groups. PMID:25919376

  1. SuperLooper—a prediction server for the modeling of loops in globular and membrane proteins

    PubMed Central

    Hildebrand, Peter W.; Goede, Andrean; Bauer, Raphael A.; Gruening, Bjoern; Ismer, Jochen; Michalsky, Elke; Preissner, Robert

    2009-01-01

    SuperLooper provides the first online interface for the automatic, quick and interactive search and placement of loops in proteins (LIP). A database containing half a billion segments of water-soluble proteins with lengths up to 35 residues can be screened for candidate loops. A specified database containing 180 000 membrane loops in proteins (LIMP) can be searched, alternatively. Loop candidates are scored based on sequence criteria and the root mean square deviation (RMSD) of the stem atoms. Searching LIP, the average global RMSD of the respective top-ranked loops to the original loops is benchmarked to be <2 Å, for loops up to six residues or <3 Å for loops shorter than 10 residues. Other suitable conformations may be selected and directly visualized on the web server from a top-50 list. For user guidance, the sequence homology between the template and the original sequence, proline or glycine exchanges or close contacts between a loop candidate and the remainder of the protein are denoted. For membrane proteins, the expansions of the lipid bilayer are automatically modeled using the TMDET algorithm. This allows the user to select the optimal membrane protein loop concerning its relative orientation to the lipid bilayer. The server is online since October 2007 and can be freely accessed at URL: http://bioinformatics.charite.de/superlooper/ PMID:19429894
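
    The stem-atom RMSD used to rank loop candidates can be sketched with NumPy; the snippet assumes the matched stem atoms are already in a common reference frame (a real pipeline would superpose them first) and is not the SuperLooper implementation.

      # Sketch of the stem-atom RMSD criterion used to rank loop candidates (not
      # the SuperLooper implementation): given matched stem-atom coordinates in a
      # common reference frame, compute the root mean square deviation; lower
      # values indicate a better geometric fit of the candidate loop.
      import numpy as np

      def stem_rmsd(candidate_xyz: np.ndarray, target_xyz: np.ndarray) -> float:
          """Both arrays have shape (n_stem_atoms, 3)."""
          diff = candidate_xyz - target_xyz
          return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

      target = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]])
      candidate = np.array([[0.2, 0.1, 0.0], [3.9, -0.2, 0.1]])
      print(stem_rmsd(candidate, target))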

  2. SuperLooper--a prediction server for the modeling of loops in globular and membrane proteins.

    PubMed

    Hildebrand, Peter W; Goede, Andrean; Bauer, Raphael A; Gruening, Bjoern; Ismer, Jochen; Michalsky, Elke; Preissner, Robert

    2009-07-01

    SuperLooper provides the first online interface for the automatic, quick and interactive search and placement of loops in proteins (LIP). A database containing half a billion segments of water-soluble proteins with lengths up to 35 residues can be screened for candidate loops. A specified database containing 180,000 membrane loops in proteins (LIMP) can be searched, alternatively. Loop candidates are scored based on sequence criteria and the root mean square deviation (RMSD) of the stem atoms. Searching LIP, the average global RMSD of the respective top-ranked loops to the original loops is benchmarked to be <2 Å, for loops up to six residues or <3 Å for loops shorter than 10 residues. Other suitable conformations may be selected and directly visualized on the web server from a top-50 list. For user guidance, the sequence homology between the template and the original sequence, proline or glycine exchanges or close contacts between a loop candidate and the remainder of the protein are denoted. For membrane proteins, the expansions of the lipid bilayer are automatically modeled using the TMDET algorithm. This allows the user to select the optimal membrane protein loop concerning its relative orientation to the lipid bilayer. The server is online since October 2007 and can be freely accessed at URL: http://bioinformatics.charite.de/superlooper/.

  3. RNAex: an RNA secondary structure prediction server enhanced by high-throughput structure-probing data.

    PubMed

    Wu, Yang; Qu, Rihao; Huang, Yiming; Shi, Binbin; Liu, Mengrong; Li, Yang; Lu, Zhi John

    2016-07-01

    Several high-throughput technologies have been developed to probe RNA base pairs and loops at the transcriptome level in multiple species. However, to obtain the final RNA secondary structure, extensive effort and considerable expertise is required to statistically process the probing data and combine them with free energy models. Therefore, we developed an RNA secondary structure prediction server that is enhanced by experimental data (RNAex). RNAex is a web interface that enables non-specialists to easily access cutting-edge structure-probing data and predict RNA secondary structures enhanced by in vivo and in vitro data. RNAex annotates the RNA editing, RNA modification and SNP sites on the predicted structures. It provides four structure-folding methods (restrained MaxExpect, SeqFold, RNAstructure (Fold) and RNAfold) that can be selected by the user. The performance of these four folding methods has been verified by previous publications on known structures. We re-mapped the raw sequencing data of the probing experiments to the whole genome for each species. RNAex thus enables users to predict secondary structures for both known and novel RNA transcripts in human, mouse, yeast and Arabidopsis. The RNAex web server is available at http://RNAex.ncrnalab.org/.

  4. AntiAngioPred: A Server for Prediction of Anti-Angiogenic Peptides.

    PubMed

    Ettayapuram Ramaprasad, Azhagiya Singam; Singh, Sandeep; Gajendra P S, Raghava; Venkatesan, Subramanian

    2015-01-01

    The process of angiogenesis is a vital step towards the formation of malignant tumors. Anti-angiogenic peptides are therefore promising candidates in the treatment of cancer. In this study, we have collected anti-angiogenic peptides from the literature and analyzed the residue preferences in these peptides. Residues like Cys, Pro, Ser, Arg, Trp, Thr and Gly are preferred, while Ala, Asp, Ile, Leu, Val and Phe are not preferred in these peptides. There is a positional preference for Ser, Pro, Trp and Cys in the N-terminal region and for Cys, Gly and Arg in the C-terminal region of anti-angiogenic peptides. Motif analysis suggests motifs such as "CG-G", "TC", "SC" and "SP-S", which are highly prominent in anti-angiogenic peptides. Based on the primary analysis, we developed prediction models using different machine learning based methods. The maximum accuracy and MCC for the amino acid composition based model are 80.9% and 0.62, respectively. The performance of the models on an independent dataset is also reasonable. Based on the above study, we have developed a user-friendly web server named "AntiAngioPred" for the prediction of anti-angiogenic peptides. The AntiAngioPred web server is freely accessible at http://clri.res.in/subramanian/tools/antiangiopred/index.html (mirror site: http://crdd.osdd.net/raghava/antiangiopred/). PMID:26335203
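
    Since the predictor above is built on amino acid composition features, a small sketch may help make that representation concrete. The peptide and helper function below are hypothetical and are not part of the AntiAngioPred implementation; they only show how a 20-dimensional composition vector of the sort used by such models can be computed.

        # Fractional amino acid composition for a peptide (20-dimensional feature vector).
        from collections import Counter

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def aa_composition(peptide: str) -> list[float]:
            """Fraction of each standard amino acid in the peptide."""
            counts = Counter(peptide.upper())
            n = len(peptide)
            return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

        features = aa_composition("CSPWTRGGC")  # hypothetical peptide
        print(dict(zip(AMINO_ACIDS, (round(f, 2) for f in features))))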

  5. PhenoDB: an integrated client/server database for linkage and population genetics.

    PubMed

    Cheung, K H; Nadkarni, P; Silverstein, S; Kidd, J R; Pakstis, A J; Miller, P; Kidd, K K

    1996-08-01

    In this paper we describe PhenoDB, an Internet-accessible client/server database application for population and linkage genetics. PhenoDB stores genetic marker data on pedigrees and populations. A database for population and linkage genetics requires two core functions: data management tasks, such as interactive validation during data entry and editing, and data analysis tasks, such as generating summary population statistics and performing linkage analyses. In PhenoDB we attempt to make these tasks as easy as possible. The client/server architecture allows efficient management and manipulation of large datasets via an easy-to-use graphical interface. PhenoDB data (73 populations, 34 pedigrees, approximately 4200 individuals, and close to 80,000 typings) are stored in a generic format that can be readily exported to (or imported from) the file formats required by various existing analysis programs such as LIPED and Lathrop and Lalouel's Multipoint Linkage. PhenoDB allows performance of complex ad-hoc queries and can generate reports for use in project management. Finally, PhenoDB can produce statistical summaries such as allele frequencies, phenotype frequencies, and Chi-square tests of Hardy-Weinberg ratios of population/pedigree data.
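
    One of the summaries mentioned above, the chi-square test of Hardy-Weinberg ratios, can be illustrated with a few lines of Python. The genotype counts below are invented and the function is only a sketch of the standard biallelic test, not code from PhenoDB.

        # Hardy-Weinberg chi-square test for one biallelic marker.
        from scipy.stats import chi2

        def hardy_weinberg_chi2(n_AA: int, n_Aa: int, n_aa: int) -> tuple[float, float]:
            n = n_AA + n_Aa + n_aa
            p = (2 * n_AA + n_Aa) / (2 * n)        # frequency of allele A
            q = 1 - p
            expected = [p * p * n, 2 * p * q * n, q * q * n]
            observed = [n_AA, n_Aa, n_aa]
            stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
            p_value = chi2.sf(stat, df=1)          # 3 classes - 1 - 1 estimated allele frequency
            return stat, p_value

        print(hardy_weinberg_chi2(50, 40, 10))     # made-up genotype counts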

  6. NEP: web server for epitope prediction based on antibody neutralization of viral strains with diverse sequences.

    PubMed

    Chuang, Gwo-Yu; Liou, David; Kwong, Peter D; Georgiev, Ivelin S

    2014-07-01

    Delineation of the antigenic site, or epitope, recognized by an antibody can provide clues about functional vulnerabilities and resistance mechanisms, and can therefore guide antibody optimization and epitope-based vaccine design. Previously, we developed an algorithm for antibody-epitope prediction based on antibody neutralization of viral strains with diverse sequences and validated the algorithm on a set of broadly neutralizing HIV-1 antibodies. Here we describe the implementation of this algorithm, NEP (Neutralization-based Epitope Prediction), as a web-based server. Users must supply as input: (i) an alignment of antigen sequences of diverse viral strains; (ii) neutralization data for the antibody of interest against the same set of antigen sequences; and (iii) (optional) a structure of the unbound antigen, for enhanced prediction accuracy. The prediction results can be downloaded or viewed interactively on the antigen structure (if supplied) from the web browser using a JSmol applet. Since neutralization experiments are typically performed as one of the first steps in the characterization of an antibody to determine its breadth and potency, the NEP server can be used to predict antibody-epitope information at no additional experimental cost. NEP can be accessed on the internet at http://exon.niaid.nih.gov/nep. PMID:24782517
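
    The core input pairing, an antigen alignment plus per-strain neutralization values, lends itself to a simple illustration. The sketch below is not the NEP algorithm; it only shows, with a made-up four-strain alignment and binary neutralization labels, how one can ask which alignment columns separate neutralized from resistant strains.

        # Toy association between residue identity per alignment column and neutralization.
        from collections import defaultdict

        alignment = {        # hypothetical strain -> aligned antigen sequence
            "strainA": "NKTW", "strainB": "NKSW", "strainC": "DKSW", "strainD": "DKTW",
        }
        neutralized = {"strainA": True, "strainB": True, "strainC": False, "strainD": False}

        n_cols = len(next(iter(alignment.values())))
        for col in range(n_cols):
            by_residue = defaultdict(list)
            for strain, seq in alignment.items():
                by_residue[seq[col]].append(neutralized[strain])
            # Crude score: the column "separates" the phenotypes if each residue class is pure.
            pure = all(len(set(labels)) == 1 for labels in by_residue.values())
            verdict = "separates" if pure and len(by_residue) > 1 else "does not separate"
            print(f"column {col + 1}: residues {sorted(by_residue)} {verdict} the phenotypes")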

  7. NEP: web server for epitope prediction based on antibody neutralization of viral strains with diverse sequences.

    PubMed

    Chuang, Gwo-Yu; Liou, David; Kwong, Peter D; Georgiev, Ivelin S

    2014-07-01

    Delineation of the antigenic site, or epitope, recognized by an antibody can provide clues about functional vulnerabilities and resistance mechanisms, and can therefore guide antibody optimization and epitope-based vaccine design. Previously, we developed an algorithm for antibody-epitope prediction based on antibody neutralization of viral strains with diverse sequences and validated the algorithm on a set of broadly neutralizing HIV-1 antibodies. Here we describe the implementation of this algorithm, NEP (Neutralization-based Epitope Prediction), as a web-based server. Users must supply as input: (i) an alignment of antigen sequences of diverse viral strains; (ii) neutralization data for the antibody of interest against the same set of antigen sequences; and (iii) (optional) a structure of the unbound antigen, for enhanced prediction accuracy. The prediction results can be downloaded or viewed interactively on the antigen structure (if supplied) from the web browser using a JSmol applet. Since neutralization experiments are typically performed as one of the first steps in the characterization of an antibody to determine its breadth and potency, the NEP server can be used to predict antibody-epitope information at no additional experimental cost. NEP can be accessed on the internet at http://exon.niaid.nih.gov/nep.

  8. RNAex: an RNA secondary structure prediction server enhanced by high-throughput structure-probing data

    PubMed Central

    Wu, Yang; Qu, Rihao; Huang, Yiming; Shi, Binbin; Liu, Mengrong; Li, Yang; Lu, Zhi John

    2016-01-01

    Several high-throughput technologies have been developed to probe RNA base pairs and loops at the transcriptome level in multiple species. However, to obtain the final RNA secondary structure, extensive effort and considerable expertise are required to statistically process the probing data and combine them with free energy models. Therefore, we developed an RNA secondary structure prediction server that is enhanced by experimental data (RNAex). RNAex is a web interface that enables non-specialists to easily access cutting-edge structure-probing data and predict RNA secondary structures enhanced by in vivo and in vitro data. RNAex annotates the RNA editing, RNA modification and SNP sites on the predicted structures. It provides four structure-folding methods, restrained MaxExpect, SeqFold, RNAstructure (Fold) and RNAfold, that can be selected by the user. The performance of these four folding methods has been verified by previous publications on known structures. We re-mapped the raw sequencing data of the probing experiments to the whole genome for each species. RNAex thus enables users to predict secondary structures for both known and novel RNA transcripts in human, mouse, yeast and Arabidopsis. The RNAex web server is available at http://RNAex.ncrnalab.org/. PMID:27137891

  9. OPM database and PPM web server: resources for positioning of proteins in membranes

    PubMed Central

    Lomize, Mikhail A.; Pogozheva, Irina D.; Joo, Hyeon; Mosberg, Henry I.; Lomize, Andrei L.

    2012-01-01

    The Orientations of Proteins in Membranes (OPM) database is a curated web resource that provides spatial positions of membrane-bound peptides and proteins of known three-dimensional structure in the lipid bilayer, together with their structural classification, topology and intracellular localization. OPM currently contains more than 1200 transmembrane and peripheral proteins and peptides from approximately 350 organisms that represent approximately 3800 Protein Data Bank entries. Proteins are classified into classes, superfamilies and families and assigned to 21 distinct membrane types. Spatial positions of proteins with respect to the lipid bilayer are optimized by the PPM 2.0 method, which accounts for the hydrophobic, hydrogen bonding and electrostatic interactions of the proteins with the anisotropic water-lipid environment described by the dielectric constant and hydrogen-bonding profiles. The OPM database is freely accessible at http://opm.phar.umich.edu. Data can be sorted, searched or retrieved using the hierarchical classification, source organism, or localization in different types of membranes. The database offers downloadable coordinates of proteins and peptides with membrane boundaries. A gallery of protein images and several visualization tools are provided. The database is supplemented by the PPM server (http://opm.phar.umich.edu/server.php), which can be used for calculating spatial positions in membranes of newly determined protein structures or theoretical models. PMID:21890895

  10. deepTools2: a next generation web server for deep-sequencing data analysis.

    PubMed

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-01

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available.
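
    For readers who prefer the command line usage mentioned above, the snippet below sketches how a deepTools tool such as bamCoverage might be driven from Python. The file names are placeholders and the exact flags can differ between deepTools releases, so the options should be checked against `bamCoverage --help` for the installed version.

        # Run bamCoverage on a (hypothetical) BAM file to produce a bigWig coverage track.
        import subprocess

        cmd = [
            "bamCoverage",
            "-b", "sample.bam",     # placeholder input of aligned reads
            "-o", "sample.bw",      # bigWig output
            "--binSize", "50",
        ]
        subprocess.run(cmd, check=True)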

  11. GIBS Server-side Software for Visualizing Diverse Geospatial Data Products

    NASA Astrophysics Data System (ADS)

    Roberts, J. T.; Alarcon, C.; Boller, R. A.; Cechini, M. F.; Chelikani, A.; De Cesare, C.; De Luca, A. P.; Hall, J. R.; Huang, T.; King, J.; Pressley, N. N.; Plesea, L.; Rodriguez, J. D.; Schmaltz, J. E.; Thompson, C. K.

    2015-12-01

    Server-side software used by the NASA Global Imagery Browse Services is responsible for efficiently delivering imagery for over a hundred different Earth Science data products to various Web applications and GIS tools. Images from a multitude of platforms and sensors are made available via common web protocols using the open source OnEarth software package originally developed at the Jet Propulsion Laboratory. OnEarth is a highly scalable server module that handles raster imagery of varying projections, resolutions, formats, and coverages, including newly added support for granule-based imagery. The future OnEarth roadmap may include new features, such as support for vector data and data access via a service, which could either be developed directly or draw on other open source software. This presentation focuses on the current capabilities of the OnEarth software used in GIBS and similar open source packages, as well as potential technologies that may be utilized to handle a more diverse set of data products in the future.
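
    Since OnEarth-backed services such as GIBS expose imagery through common web protocols, a tile can be fetched with a plain HTTP request. The endpoint pattern, layer name, date and tile indices below are assumptions for illustration only and should be verified against the current GIBS documentation before use.

        # Fetch a single WMTS-style imagery tile over HTTP (URL components are placeholders).
        import requests

        url = (
            "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
            "MODIS_Terra_CorrectedReflectance_TrueColor/default/"
            "2015-12-01/250m/2/1/1.jpg"
        )
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        with open("tile.jpg", "wb") as fh:
            fh.write(resp.content)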

  12. deepTools2: a next generation web server for deep-sequencing data analysis.

    PubMed

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-01

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. PMID:27079975

  13. wwLigCSRre: a 3D ligand-based server for hit identification and optimization

    PubMed Central

    Sperandio, O.; Petitjean, M.; Tuffery, P.

    2009-01-01

    The wwLigCSRre web server performs ligand-based screening using a 3D molecular similarity engine. Its aim is to provide a versatile online facility to assist the exploration of the chemical similarity of families of compounds, or to propose some scaffold hopping from a query compound. The service allows the user to screen several chemically diversified focused banks, such as Kinase-, CNS-, GPCR-, Ion-channel-, Antibacterial-, Anticancer- and Analgesic-focused libraries. The server also provides the possibility to screen the DrugBank and DSSTOX/Carcinogenic compounds databases. User banks can also be downloaded. The 3D similarity search combines both geometrical (3D) and physicochemical information. Starting from one 3D ligand molecule as query, the screening of such databases can unravel new compound scaffolds as hits or help to optimize previously identified hit molecules in a SAR (structure-activity relationship) project. wwLigCSRre can be accessed at http://bioserv.rpbs.univ-paris-diderot.fr/wwLigCSRre.html. PMID:19429687

  14. deepTools2: a next generation web server for deep-sequencing data analysis

    PubMed Central

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-01-01

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. PMID:27079975

  15. PlasMapper: a web server for drawing and auto-annotating plasmid maps.

    PubMed

    Dong, Xiaoli; Stothard, Paul; Forsythe, Ian J; Wishart, David S

    2004-07-01

    PlasMapper is a comprehensive web server that automatically generates and annotates high-quality circular plasmid maps. Taking only the plasmid/vector DNA sequence as input, PlasMapper uses sequence pattern matching and BLAST alignment to automatically identify and label common promoters, terminators, cloning sites, restriction sites, reporter genes, affinity tags, selectable marker genes, replication origins and open reading frames. PlasMapper then presents the identified features in textual form and as high-resolution, multicolored graphical output. The appearance and contents of the output can be customized in numerous ways using several supplied options. Further, PlasMapper images can be rendered in both rasterized (PNG and JPG) and vector graphics (SVG) formats to accommodate a variety of user needs or preferences. The images and textual output are of sufficient quality that they may be used directly in publications or presentations. The PlasMapper web server is freely accessible at http://wishart.biology.ualberta.ca/PlasMapper.

  16. The EarthServer Federation: State, Role, and Contribution to GEOSS

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Baumann, Peter

    2016-04-01

    The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. Because its service interface is rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a range of clients can dock into the services, from open-source OpenLayers, QGIS and NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of the GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO s.r.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecasts (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.

  17. SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS

    SciTech Connect

    Shen, G.; Kraimer, M.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server-based architecture to replace the more traditional monolithic high-level application approach. The server software under development is available via an open source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Two examples of services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvService to transfer data over the network, and pvIOC as the service engine. Performance benchmarks for pvAccess and for both the gather and itemFinder services are presented in this paper, together with a performance comparison between pvAccess and Channel Access. For an ultra-low-emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use-case study has been performed to capture the requirements, and a theoretical evaluation has been carried out. The analysis shows that model-based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model-based control, and how to combine the numerical methods for modeling a realistic lattice with the analytical techniques for analyzing its properties. To satisfy these requirements and challenges, an adequate system architecture for the beam commissioning and operation software framework is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low-level hardware processing from numerical algorithm computing, physics modeling, data manipulation and plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service

  18. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model focuses on reliability and availability from a security standpoint. The (remote) backup servers are connected by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main unreliable machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for back-ups, so information on the system is naturally delayed. An analog of the N-policy is applied to restrict the usage of auxiliary machines to some reasonable quantity. The results are demonstrated on the network architecture using stochastic optimization techniques.

  19. The State of Energy and Performance Benchmarking for Enterprise Servers

    NASA Astrophysics Data System (ADS)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.

  20. Protein knot server: detection of knots in protein structures.

    PubMed

    Kolesov, Grigory; Virnau, Peter; Kardar, Mehran; Mirny, Leonid A

    2007-07-01

    KNOTS (http://knots.mit.edu) is a web server that detects knots in protein structures. Several protein structures have been reported to contain intricate knots. The physiological role of knots and their effect on folding and evolution is an area of active research. The user submits a PDB id or uploads a 3D protein structure in PDB or mmCIF format. The current implementation of the server uses the Alexander polynomial to detect knots. The results of the analysis that are presented to the user are the location of the knot in the structure, the type of the knot and an interactive visualization of the knot. The results can also be downloaded and viewed offline. The server also maintains a regularly updated list of known knots in protein structures.

  1. LassoProt: server to analyze biopolymers with lassos

    PubMed Central

    Dabrowski-Tumanski, Pawel; Niemyska, Wanda; Pasznik, Pawel; Sulkowska, Joanna I.

    2016-01-01

    The LassoProt server, http://lassoprot.cent.uw.edu.pl/, enables analysis of biopolymers with entangled configurations called lassos. The server offers various ways of visualizing lasso configurations, as well as their time trajectories, with all the results and plots downloadable. A broad spectrum of applications makes LassoProt a useful tool for biologists, biophysicists, chemists, polymer physicists and mathematicians. The server and our methods have been validated on the whole PDB, and the results constitute the database of proteins with complex lassos, supported with basic biological data. This database can serve as a source of information about protein geometry and entanglement-function correlations, as a reference set in protein modeling, and for many other purposes. PMID:27131383

  2. LassoProt: server to analyze biopolymers with lassos.

    PubMed

    Dabrowski-Tumanski, Pawel; Niemyska, Wanda; Pasznik, Pawel; Sulkowska, Joanna I

    2016-07-01

    The LassoProt server, http://lassoprot.cent.uw.edu.pl/, enables analysis of biopolymers with entangled configurations called lassos. The server offers various ways of visualizing lasso configurations, as well as their time trajectories, with all the results and plots downloadable. A broad spectrum of applications makes LassoProt a useful tool for biologists, biophysicists, chemists, polymer physicists and mathematicians. The server and our methods have been validated on the whole PDB, and the results constitute the database of proteins with complex lassos, supported with basic biological data. This database can serve as a source of information about protein geometry and entanglement-function correlations, as a reference set in protein modeling, and for many other purposes.

  3. Accessing HEP Collaboration documents using WWW and WAIS

    SciTech Connect

    Nguyen, T.D.; Buckley-Geer, E.; Ritchie, D.J.

    1995-09-01

    WAIS stands for Wide Area Information Server. It is a distributed information retrieval system. A WAIS system has a client-server architecture which consists of clients talking to a server over a TCP/IP network using the ANSI standard Z39.50 protocol. A freely available version (FreeWAIS) is supported by the Clearinghouse for Networked Information Discovery and Retrieval, also known as CNIDR. FreeWAIS-sf, which is the software the authors are using at Fermilab, is an extension of FreeWAIS. FreeWAIS-sf supports all the functionalities which FreeWAIS offers, as well as additional indexing and searching capabilities for structured fields. The World Wide Web (WWW) was originally developed by Tim Berners-Lee at CERN and is now the backbone for serving information on the Internet. Here, the authors describe a system for accessing HEP collaboration documents using WWW and WAIS.

  4. Access Denied

    ERIC Educational Resources Information Center

    Villano, Matt

    2008-01-01

    Building access control (BAC)--a catchall phrase to describe the systems that control access to facilities across campus--has traditionally been handled with remarkably low-tech solutions: (1) manual locks; (2) electronic locks; and (3) ID cards with magnetic strips. Recent improvements have included smart cards and keyless solutions that make use…

  5. Open Access

    ERIC Educational Resources Information Center

    Suber, Peter

    2012-01-01

    The Internet lets us share perfect copies of our work with a worldwide audience at virtually no cost. We take advantage of this revolutionary opportunity when we make our work "open access": digital, online, free of charge, and free of most copyright and licensing restrictions. Open access is made possible by the Internet and copyright-holder…

  6. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A.; Lowry, R.; Clements, O.

    2012-04-01

    The NERC Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to the marine environmental sciences domain since 2006 (version 0), with version 1 being introduced in 2007. It has been used for • metadata mark-up with verifiable content • populating dynamic drop-down lists • semantic cross-walk between metadata schemata • so-called smart search • and the semantic enablement of Open Geospatial Consortium Web Processing Services in projects including: the NERC Data Grid; SeaDataNet; Geo-Seas; and the European Marine Observation and Data Network (EMODnet). The NVS is based on the Simple Knowledge Organization System (SKOS) model, and following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes in this standard. SKOS is based on the "concept", which it defines as a "unit of thought", that is an idea or notion such as "oil spill". The latest version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: • the removal of the potential for multiple Uniform Resource Names for the same concept to ensure consistent identification of concepts • the addition of content and technical governance information in the payload documents to provide an audit trail to users of NVS content • the removal of XML snippets from concept definitions in order to correctly validate XML serializations of the SKOS • the addition of the ability to map into external knowledge organization systems in order to extend the knowledge base • a more truly RESTful approach to URL access to the NVS to make the development of applications on top of the NVS easier • and support for multiple human languages to increase the user

  7. Client-server technology meets operational-planning challenges

    SciTech Connect

    Cole, L.A.; Stansberry, C.J. Jr.; Le, K.D.; Ma, H.

    1996-07-01

    Utilities are starting to find that it is rather difficult to upgrade their proprietary energy management system, which was designed for real-time operations, fast enough to keep pace with rapidly changing business needs. To solve this problem, many utilities are building a data warehouse to store real-time data and using the data warehouse to launch client-server applications to meet their pressing business requirements. This article describes a client-server implementation launched at Tennessee Valley Authority in 1994 to meet the utility's operational-planning needs. The article summarizes some of the lessons learned and outlines future development plans.

  8. Performance model of the Argonne Voyager multimedia server

    SciTech Connect

    Disz, T.; Olson, R.; Stevens, R.

    1997-07-01

    The Argonne Voyager Multimedia Server is being developed in the Futures Lab of the Mathematics and Computer Science Division at Argonne National Laboratory. As a network-based service for recording and playing multimedia streams, it is important that the Voyager system be capable of sustaining certain minimal levels of performance in order for it to be a viable system. In this article, the authors examine the performance characteristics of the server. As they examine the architecture of the system, they try to determine where bottlenecks lie, show actual vs potential performance, and recommend areas for improvement through custom architectures and system tuning.

  9. SAbPred: a structure-based antibody prediction server

    PubMed Central

    Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.

    2016-01-01

    SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379

  10. File caching in video-on-demand servers

    NASA Astrophysics Data System (ADS)

    Wang, Fu-Ching; Chang, Shin-Hung; Hung, Chi-Wei; Chang, Jia-Yang; Oyang, Yen-Jen; Lee, Meng-Huang

    1997-12-01

    This paper studies the file caching issue in video-on-demand (VOD) servers. Because the characteristics of video files are very different from those of conventional files, different types of caching algorithms must be developed. For VOD servers, the goal is to optimize resource allocation and the tradeoff between memory and disk bandwidth. This paper first proves that resource allocation and tradeoff between memory and disk bandwidth is an NP-complete problem. Then, a heuristic algorithm, called the generalized relay mechanism, is introduced and a simulation-based optimization procedure is conducted to evaluate the effects of applying the generalized relay mechanism.

  11. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    PubMed

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed for internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables successful internet-based wireless control of electrical home appliances through BCIs.

  12. COGNAC: a web server for searching and annotating hydrogen-bonded base interactions in RNA three-dimensional structures.

    PubMed

    Firdaus-Raih, Mohd; Hamdani, Hazrina Yusof; Nadzirin, Nurul; Ramlan, Effirul Ikhwan; Willett, Peter; Artymiuk, Peter J

    2014-07-01

    Hydrogen bonds are crucial factors that stabilize a complex ribonucleic acid (RNA) molecule's three-dimensional (3D) structure. Minute conformational changes can result in variations in the hydrogen bond interactions in a particular structure. Furthermore, networks of hydrogen bonds, especially those found in tight clusters, may be important elements in structure stabilization or function and can therefore be regarded as potential tertiary motifs. In this paper, we describe a graph theoretical algorithm implemented as a web server that is able to search for unbroken networks of hydrogen-bonded base interactions and thus provide an accounting of such interactions in RNA 3D structures. This server, COGNAC (COnnection tables Graphs for Nucleic ACids), is also able to compare the hydrogen bond networks between two structures and from such annotations enable the mapping of atomic level differences that may have resulted from conformational changes due to mutations or binding events. The COGNAC server can be accessed at http://mfrlab.org/grafss/cognac. PMID:24831543
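
    The notion of an unbroken network of hydrogen-bonded bases maps naturally onto connected components of a graph. The sketch below is not the COGNAC implementation; it merely illustrates the idea with the networkx library and a made-up list of hydrogen-bonded base pairs.

        # Treat hydrogen-bonded base pairs as graph edges and report clusters larger than a pair.
        import networkx as nx

        hbonded_pairs = [    # hypothetical (residue, residue) hydrogen-bonded base pairs
            ("G1", "C72"), ("C2", "G71"), ("G3", "U70"),
            ("A14", "U48"), ("U48", "G15"),
        ]

        g = nx.Graph(hbonded_pairs)
        for component in nx.connected_components(g):
            if len(component) > 2:   # clusters beyond a simple pair are candidate tertiary motifs
                print(sorted(component))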

  13. CVTree3 Web Server for Whole-genome-based and Alignment-free Prokaryotic Phylogeny and Taxonomy.

    PubMed

    Zuo, Guanghong; Hao, Bailin

    2015-10-01

    A faithful phylogeny and an objective taxonomy for prokaryotes should agree with each other and ultimately follow the genome data. With the number of sequenced genomes reaching tens of thousands, both tree inference and detailed comparison with taxonomy are great challenges. We now provide one solution in the latest Release 3.0 of the alignment-free and whole-genome-based web server CVTree3. The server resides in a cluster of 64 cores and is equipped with an interactive, collapsible, and expandable tree display. It is capable of comparing the tree branching order with prokaryotic classification at all taxonomic ranks from domains down to species and strains. CVTree3 allows for inquiry by taxon names and trial on lineage modifications. In addition, it reports a summary of monophyletic and non-monophyletic taxa at all ranks as well as produces print-quality subtree figures. After giving an overview of retrospective verification of the CVTree approach, the power of the new server is described for the mega-classification of prokaryotes and determination of taxonomic placement of some newly-sequenced genomes. A few discrepancies between CVTree and 16S rRNA analyses are also summarized with regard to possible taxonomic revisions. CVTree3 is freely accessible to all users at http://tlife.fudan.edu.cn/cvtree3/ without login requirements.
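
    The alignment-free flavor of such whole-genome comparisons can be conveyed with a toy composition-vector example. The real CVTree method additionally subtracts a Markov-model background from the raw k-mer frequencies, which is omitted here, and the sequences below are invented.

        # Compare two sequences by k-mer composition vectors and a cosine-type distance.
        import math
        from collections import Counter

        def kmer_vector(seq: str, k: int = 3) -> Counter:
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def cosine_distance(a: Counter, b: Counter) -> float:
            keys = set(a) | set(b)
            dot = sum(a[x] * b[x] for x in keys)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return 1 - dot / norm if norm else 1.0

        seq1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"    # toy protein sequences
        seq2 = "MKTAYIAKQRQISFVKSHFSRQGLEERLGLIEVQ"
        print(cosine_distance(kmer_vector(seq1), kmer_vector(seq2)))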

  14. DDI-CPI, a server that predicts drug–drug interactions through implementing the chemical–protein interactome

    PubMed Central

    Luo, Heng; Zhang, Ping; Huang, Hui; Huang, Jialiang; Kao, Emily; Shi, Leming; He, Lin; Yang, Lun

    2014-01-01

    Drug–drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug–human protein interactions, it is reasonable to analyze the chemical–protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock the user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of a predicted DDI, users can explore the PK/PD proteins that might be involved in that particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/. PMID:24875476
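
    The general idea of using a chemical-protein interactome profile as a feature vector can be sketched with synthetic numbers. The toy dimensions, random scores and logistic-regression model below are assumptions for illustration; the actual DDI-CPI pipeline docks each molecule against 611 human proteins and uses its own pre-constructed model.

        # Treat per-protein docking scores as features for a drug-drug interaction classifier.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_pairs, n_proteins = 200, 25                   # toy dimensions
        X = rng.normal(size=(n_pairs, n_proteins))      # synthetic docking-score profiles
        y = rng.integers(0, 2, size=n_pairs)            # synthetic labels: 1 = interacting pair

        model = LogisticRegression(max_iter=1000).fit(X, y)
        new_profile = rng.normal(size=(1, n_proteins))
        print("predicted interaction probability:", model.predict_proba(new_profile)[0, 1])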

  15. DDI-CPI, a server that predicts drug-drug interactions through implementing the chemical-protein interactome.

    PubMed

    Luo, Heng; Zhang, Ping; Huang, Hui; Huang, Jialiang; Kao, Emily; Shi, Leming; He, Lin; Yang, Lun

    2014-07-01

    Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock the user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of a predicted DDI, users can explore the PK/PD proteins that might be involved in that particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/. PMID:24875476

  16. BindUP: a web server for non-homology-based prediction of DNA and RNA binding proteins.

    PubMed

    Paz, Inbal; Kligun, Efrat; Bengad, Barak; Mandel-Gutfreund, Yael

    2016-07-01

    Gene expression is a multi-step process involving many layers of regulation. The main regulators of the pathway are DNA and RNA binding proteins. While a large number of DNA and RNA binding proteins have been identified and extensively studied over the years, many other proteins, some with a different known function, are still expected to await discovery. Here we present a new web server, BindUP, freely accessible through the website http://bindup.technion.ac.il/, for predicting DNA and RNA binding proteins using a non-homology-based approach. Our method is based on the electrostatic features of the protein surface and other general properties of the protein. BindUP predicts nucleic acid binding function given the protein's three-dimensional structure or a structural model. Additionally, BindUP provides information on the largest electrostatic surface patches, visualized on the server. The server was tested on several datasets of DNA and RNA binding proteins, including proteins which do not possess DNA or RNA binding domains and have no similarity to known nucleic acid binding proteins, achieving very high accuracy. BindUP is applicable in either single or batch mode and can be applied for testing hundreds of proteins simultaneously in a highly efficient manner.

  17. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    PubMed

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed for internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables successful internet-based wireless control of electrical home appliances through BCIs. PMID:26547847

  18. CVTree3 Web Server for Whole-genome-based and Alignment-free Prokaryotic Phylogeny and Taxonomy

    PubMed Central

    Zuo, Guanghong; Hao, Bailin

    2015-01-01

    A faithful phylogeny and an objective taxonomy for prokaryotes should agree with each other and ultimately follow the genome data. With the number of sequenced genomes reaching tens of thousands, both tree inference and detailed comparison with taxonomy are great challenges. We now provide one solution in the latest Release 3.0 of the alignment-free and whole-genome-based web server CVTree3. The server resides in a cluster of 64 cores and is equipped with an interactive, collapsible, and expandable tree display. It is capable of comparing the tree branching order with prokaryotic classification at all taxonomic ranks from domains down to species and strains. CVTree3 allows for inquiry by taxon names and trial on lineage modifications. In addition, it reports a summary of monophyletic and non-monophyletic taxa at all ranks as well as produces print-quality subtree figures. After giving an overview of retrospective verification of the CVTree approach, the power of the new server is described for the mega-classification of prokaryotes and determination of taxonomic placement of some newly-sequenced genomes. A few discrepancies between CVTree and 16S rRNA analyses are also summarized with regard to possible taxonomic revisions. CVTree3 is freely accessible to all users at http://tlife.fudan.edu.cn/cvtree3/ without login requirements. PMID:26563468

  19. BindUP: a web server for non-homology-based prediction of DNA and RNA binding proteins

    PubMed Central

    Paz, Inbal; Kligun, Efrat; Bengad, Barak; Mandel-Gutfreund, Yael

    2016-01-01

    Gene expression is a multi-step process involving many layers of regulation. The main regulators of the pathway are DNA and RNA binding proteins. While a large number of DNA and RNA binding proteins have been identified and extensively studied over the years, many other proteins, some with a different known function, are still expected to await discovery. Here we present a new web server, BindUP, freely accessible through the website http://bindup.technion.ac.il/, for predicting DNA and RNA binding proteins using a non-homology-based approach. Our method is based on the electrostatic features of the protein surface and other general properties of the protein. BindUP predicts nucleic acid binding function given the protein's three-dimensional structure or a structural model. Additionally, BindUP provides information on the largest electrostatic surface patches, visualized on the server. The server was tested on several datasets of DNA and RNA binding proteins, including proteins which do not possess DNA or RNA binding domains and have no similarity to known nucleic acid binding proteins, achieving very high accuracy. BindUP is applicable in either single or batch mode and can be applied for testing hundreds of proteins simultaneously in a highly efficient manner. PMID:27198220

  20. [Relevance of the hemovigilance regional database for the shared medical file identity server].

    PubMed

    Doly, A; Fressy, P; Garraud, O

    2008-11-01

    The French Health Products Safety Agency coordinates the national initiative for computerization of blood product traceability within regional blood banks and public and private hospitals. The Auvergne-Loire Regional French Blood Service, based in Saint-Etienne, together with a number of public hospitals, set up a transfusion data network named EDITAL. After four years of progressive implementation and experimentation, software enabling standardized data exchange has built up a regional nominative database, endorsed by the Traceability Computerization National Committee in 2004. This database now provides secured web access to a regional transfusion history, enabling biologists and all hospital and family practitioners to take charge of patient follow-up. By running independently of its partners' software, the EDITAL database provides a reference for the regional identity server.

  1. KOBAS 2.0: a web server for annotation and identification of enriched pathways and diseases.

    PubMed

    Xie, Chen; Mao, Xizeng; Huang, Jiaju; Ding, Yang; Wu, Jianmin; Dong, Shan; Kong, Lei; Gao, Ge; Li, Chuan-Yun; Wei, Liping

    2011-07-01

    High-throughput experimental technologies often identify dozens to hundreds of genes related to, or changed in, a biological or pathological process. From these genes one wants to identify biological pathways that may be involved and diseases that may be implicated. Here, we report a web server, KOBAS 2.0, which annotates an input set of genes with putative pathways and disease relationships based on mapping to genes with known annotations. It allows for both ID mapping and cross-species sequence similarity mapping. It then performs statistical tests to identify statistically significantly enriched pathways and diseases. KOBAS 2.0 incorporates knowledge across 1327 species from 5 pathway databases (KEGG PATHWAY, PID, BioCyc, Reactome and Panther) and 5 human disease databases (OMIM, KEGG DISEASE, FunDO, GAD and NHGRI GWAS Catalog). KOBAS 2.0 can be accessed at http://kobas.cbi.pku.edu.cn.
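
    The statistical test behind this kind of pathway enrichment is typically a hypergeometric (one-sided Fisher) test on the overlap between the input gene set and a pathway. The counts below are invented for illustration and do not reproduce KOBAS 2.0 itself.

        # Hypergeometric enrichment p-value for an input gene list against one pathway.
        from scipy.stats import hypergeom

        background_genes = 20000   # annotated background
        pathway_genes = 150        # genes assigned to the pathway
        input_genes = 300          # genes in the user's list
        overlap = 12               # input genes that fall in the pathway

        # P(X >= overlap) when sampling input_genes from the background without replacement.
        p_value = hypergeom.sf(overlap - 1, background_genes, pathway_genes, input_genes)
        print(f"enrichment p-value: {p_value:.3g}")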

  2. systemsDock: a web server for network pharmacology-based prediction and analysis

    PubMed Central

    Hsin, Kun-Yi; Matsuoka, Yukiko; Asai, Yoshiyuki; Kamiyoshi, Kyota; Watanabe, Tokiko; Kawaoka, Yoshihiro; Kitano, Hiroaki

    2016-01-01

    We present systemsDock, a web server for network pharmacology-based prediction and analysis, which permits docking simulation and molecular pathway map for comprehensive characterization of ligand selectivity and interpretation of ligand action on a complex molecular network. It incorporates an elaborately designed scoring function for molecular docking to assess protein–ligand binding potential. For large-scale screening and ease of investigation, systemsDock has a user-friendly GUI interface for molecule preparation, parameter specification and result inspection. Ligand binding potentials against individual proteins can be directly displayed on an uploaded molecular interaction map, allowing users to systemically investigate network-dependent effects of a drug or drug candidate. A case study is given to demonstrate how systemsDock can be used to discover a test compound's multi-target activity. systemsDock is freely accessible at http://systemsdock.unit.oist.jp/. PMID:27131384

  3. The World-2DPAGE Constellation to promote and publish gel-based proteomics data through the ExPASy server.

    PubMed

    Hoogland, Christine; Mostaguir, Khaled; Appel, Ron D; Lisacek, Frédérique

    2008-07-21

    Since it was launched in 1993, the ExPASy server has been and still is a reference in the proteomics world. ExPASy users access various databases, many dedicated tools, and lists of resources, among other services. A significant part of the resources available is devoted to two-dimensional electrophoresis data. Our latest contribution to the expansion of the pool of on-line proteomics data is the World-2DPAGE Constellation, accessible at http://world-2dpage.expasy.org/. It is composed of the established WORLD-2DPAGE List of 2-D PAGE database servers, the World-2DPAGE Portal that simultaneously queries world-wide proteomics databases, and the recently created World-2DPAGE Repository. The latter component is a public standards-compliant repository for gel-based proteomics data linked to protein identifications published in the literature. It has been set up using the Make2D-DB package, a software tool that helps build SWISS-2DPAGE-like databases on one's own Web site. The lack of the necessary informatics infrastructure to build and run a dedicated website is no longer an obstacle to making proteomics data publicly accessible on the Internet. PMID:18617148

  4. Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2012-04-01

    Complex analytics on "big data" is one of the core challenges of current Earth science, generating strong requirements for on-demand processing and filtering of massive data sets. Issues under discussion include flexibility, performance, scalability, and the heterogeneity of the information types involved. In other domains, high-level query languages (such as those offered by database systems) have proven successful in the quest for flexible, scalable data access interfaces to massive amounts of data. However, due to the lack of support for many of the Earth science data structures, database systems are only used for registries and catalogs, but not for the bulk of spatio-temporal data. One core information category in this field is given by coverage data. ISO 19123 defines coverages, simplifying, as a representation of a "space-time varying phenomenon". This model can express a large class of Earth science data structures, including rectified and non-rectified rasters, curvilinear grids, point clouds, TINs, general meshes, trajectories, surfaces, and solids. This abstract definition, which is too high-level to establish interoperability, is concretized by the OGC GML 3.2.1 Application Schema for Coverages Standard into an interoperable representation. The OGC Web Coverage Processing Service (WCPS) Standard defines a declarative query language on multi-dimensional raster-type coverages, such as 1D in-situ sensor timeseries, 2D EO imagery, 3D x/y/t image time series and x/y/z geophysical data, 4D x/y/z/t climate and ocean data. Hence, important ingredients for versatile coverage retrieval are given - however, this potential has not been fully unleashed by service architectures up to now. The EU FP7-INFRA project EarthServer, launched in September 2011, aims at enabling standards-based on-demand analytics over the Web for Earth science data based on an integration of W3C XQuery for alphanumeric data and OGC-WCPS for raster data. Ultimately, EarthServer will support
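
    To make the WCPS idea concrete, the snippet below sketches how a declarative coverage query might be sent over HTTP. The endpoint, coverage name, axis labels and request parameters are placeholders chosen for illustration and should be checked against the documentation of the target WCS/WCPS service.

        # Send a WCPS query to a (hypothetical) coverage-processing endpoint.
        import requests

        wcps_query = (
            'for c in (AvgTemperature) '
            'return encode(c[ansi("2014-01":"2014-12"), Lat(53.08), Long(8.80)], "csv")'
        )
        endpoint = "https://example.org/rasdaman/ows"   # placeholder endpoint
        resp = requests.get(
            endpoint,
            params={
                "service": "WCS",
                "version": "2.0.1",
                "request": "ProcessCoverages",
                "query": wcps_query,
            },
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.text)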

  5. Final Report for ''Client Server Software for the National Transport Code Collaboration''

    SciTech Connect

    John R Cary; David Alexander; Johan Carlsson; Kelly Luetkemeyer; Nathaniel Sizemore

    2004-04-30

    OAK-B135 Tech-X Corporation designed and developed all the networking code tying together the NTCC data server with the data client and the physics server with the data server and physics client. We were also solely responsible for the data and physics clients and the vast majority of the work on the data server. We also performed a number of other tasks.

  6. Gaining Access.

    ERIC Educational Resources Information Center

    Kennedy, Mike

    2000-01-01

    Discusses issues schools and universities have encountered in complying with the Americans with Disabilities Act (ADA) and making their facilities more accessible to the disabled. The ADA's vagueness and the architect's need for understanding the regulations is highlighted. (GR)

  7. Equal Access.

    ERIC Educational Resources Information Center

    De Patta, Joe

    2003-01-01

    Presents an interview with Stephen McCarthy, co-partner and president of Equal Access ADA Consulting Architects of San Diego, California, about designing schools to naturally integrate compliance with the Americans with Disabilities Act (ADA). (EV)

  8. Capital access.

    PubMed

    Towne, Jennifer

    2004-06-01

    To maintain their viability, hospitals are being compelled to invest in big capital projects such as information technology and renovation and construction. This gatefold examines the trends in credit and capital, and how they affect hospitals' access to money.

  9. Performance analysis of a fault-tolerant distributed multimedia server

    NASA Astrophysics Data System (ADS)

    Derryberry, Barbara

    1998-12-01

    The evolving demands of networks to support Webtone, H.323, AIN and other advanced services require multimedia servers that can deliver a number of value-added capabilities, such as negotiating protocols, delivering network services, and responding to QoS requests. The server is one of the primary limiters on network capacity. The next-generation server must be based upon a flexible, robust, scalable, and reliable platform to keep abreast of the revolutionary pace of service demand and development while continuing to provide the same dependability that voice networks have provided for decades. A new distributed platform, which is based upon the Totem fault-tolerant messaging system, is described. Processor and network resources are modeled and analyzed. Quantitative results are presented that assess this platform in terms of messaging capacity and performance for various architecture and design options, including processing technologies and fault-tolerance modes. The impacts of fault-tolerant messaging are identified based upon analytical modeling of the proposed server architecture.

  10. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described that hopes to realize cost savings and the avoidance of staffing problems. (Contains four…

  11. Perspectives of IT Professionals on Employing Server Virtualization Technologies

    ERIC Educational Resources Information Center

    Sligh, Darla

    2010-01-01

    Server virtualization enables a physical computer to support multiple applications logically by decoupling the application from the hardware layer, thereby reducing operational costs and helping IT organizations remain competitive in delivering IT services to their enterprises. IT organizations continually examine the efficiency of their internal IT systems and…

  12. Training to Increase Safe Tray Carrying among Cocktail Servers

    ERIC Educational Resources Information Center

    Scherrer, Megan D.; Wilder, David A.

    2008-01-01

    We evaluated the effects of training on proper carrying techniques among 3 cocktail servers to increase safe tray carrying on the job and reduce participants' risk of developing musculoskeletal disorders. As participants delivered drinks to their tables, their finger, arm, and neck positions were observed and recorded. Each participant received…

  13. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    PubMed

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-servers scenario. Our main idea is to transform the outsourced data, respectively encrypted by different users' public keys, into data encrypted by the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the result private, the two servers cooperatively produce a custom-made result for each user that is authorized to obtain it, so that all authorized users can recover the desired result while unauthorized parties, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and communication complexities of each user in our solution are independent of the function being computed.
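
    The full lattice-based construction above is beyond a short example, but the underlying idea of splitting work between two non-colluding servers can be illustrated with plain additive secret sharing. The Python sketch below is a simplified stand-in for, not an implementation of, the encrypted protocol described in this record; the modulus and all values are illustrative.

        import secrets

        P = 2**61 - 1  # public modulus used for the additive shares (illustrative)

        def share(x):
            """Split x into two additive shares modulo P; neither share reveals x."""
            r = secrets.randbelow(P)
            return r, (x - r) % P

        # Each user secret-shares an input between the two assisting servers.
        inputs = [12, 30, 7]
        shares_server1, shares_server2 = zip(*(share(x) for x in inputs))

        # Each server computes on its own shares only (here: a sum), learning
        # nothing about any individual input.
        partial1 = sum(shares_server1) % P
        partial2 = sum(shares_server2) % P

        # An authorized user combines the two partial results to recover the output.
        assert (partial1 + partial2) % P == sum(inputs) % P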

  14. BION web server: predicting non-specifically bound surface ions

    PubMed Central

    Alexov, Emil

    2013-01-01

    Motivation: Ions are an essential component of the cell and are frequently found bound to various macromolecules, in particular to proteins. Binding of an ion to a protein greatly affects the protein's biophysical characteristics and needs to be taken into account in any modeling approach. However, ion binding positions cannot be easily revealed experimentally, especially if the ions are loosely bound to the macromolecular surface. Results: Here, we report a web server, the BION web server, which addresses the demand for tools that predict surface-bound ions, for which specific interactions are not crucial and which are therefore difficult to predict. BION is an easy-to-use web server that requires only a coordinate file as input and provides the user with various, but easy to navigate, options. The coordinate file with predicted bound ions is displayed on the output and is available for download. Availability: http://compbio.clemson.edu/bion_server/ Supplementary information: Supplementary data are available at Bioinformatics online. Contact: ealexov@clemson.edu PMID:23380591

  15. Microsoft SQL Server 6.0{reg_sign} Workbook

    SciTech Connect

    Augustenborg, E.C.

    1996-09-01

    This workbook was prepared for introductory training in the use of Microsoft SQL Server Version 6.0. The examples are all taken from the PUBS database that Microsoft distributes for training purposes or from the Microsoft Online Documentation. The merits of the relational database are presented.

  16. Multimedia medical data archive and retrieval server on the Internet

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Levine, Betty A.; Freedman, Matthew T.; Mun, Seong K.; Tang, Y. K.; Chiang, Ted T.

    1997-05-01

    The Multimedia Medical Data Archive and Retrieval Server has been installed at the Imaging Science and Information Systems (ISIS) Center in Georgetown University Medical Center to provide medical data archive and retrieval support for medical researchers. The medical data include text, images, sound, and video. All medical data are keyword indexed using a database management system, placed temporarily in a staging area, and then transferred to a StorageTek one-terabyte tape library system with a robotic arm for permanent archive. There are two methods of interaction with the system. The first method is to use a web browser with HTML functions to perform insert, query, update, and retrieve operations. These generate dynamic SQL calls to the database and produce StorageTek API calls to the tape library. The HTML functions consist of a database, a StorageTek interface, an HTTP server, a common gateway interface, and Java programs. The second method is to issue a DICOM store command, which is translated by the system's DICOM server into SQL calls and then into StorageTek API calls to the tape library. The system performs as both an Internet and a DICOM server using standards such as HTTP, HTML, Java, and DICOM. Users with proper authentication can log on to the server from anywhere on the Internet using a standard web browser, resulting in a user-friendly, open, and platform-independent solution for archiving multimedia medical data. It represents a complex integration of different components including a robotic tape storage system, a database, a user interface, WWW protocols, and TCP/IP networking. The user deals only with the WWW and DICOM server components of the system; the database and robotic tape library are transparent, so the user does not need to know that the medical data are stored on magnetic tape. The server provides researchers a cost-effective tool for archiving and retrieving medical data across a TCP/IP network environment. It will

  17. Towards Big Earth Data Analytics: The EarthServer Approach

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the tera- to exabyte archives, mostly consist of coverage data, where the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from the computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask for what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data

  18. ASPEN--A Web-Based Application for Managing Student Server Accounts

    ERIC Educational Resources Information Center

    Sandvig, J. Christopher

    2004-01-01

    The growth of the Internet has greatly increased the demand for server-side programming courses at colleges and universities. Students enrolled in such courses must be provided with server-based accounts that support the technologies that they are learning. The process of creating, managing and removing large numbers of student server accounts is…

  19. Web servers and services for electrostatics calculations with APBS and PDB2PQR

    SciTech Connect

    Unni, Samir; Huang, Yong; Hanson, Robert M.; Tobias, Malcolm; Krishnan, Sriram; Li, Wilfred; Nielsen, Jens E.; Baker, Nathan A.

    2011-04-02

    APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to sufficient computational resources. This not only increases the accessibility of the software to a wider range of scientists, educators, and students, but also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization of electrostatic potentials as calculated by APBS. This web interface also uses the Opal framework, which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/.

  20. Informatics in Radiology (infoRAD): mobile wireless DICOM server system and PDA with high-resolution display: feasibility of group work for radiologists.

    PubMed

    Nakata, Norio; Kandatsu, Susumu; Suzuki, Naoki; Fukuda, Kunihiko

    2005-01-01

    A novel mobile system has been developed for use by radiologists in managing Digital Imaging and Communications in Medicine (DICOM) image data. The system consists of a mobile DICOM server (MDS) and personal digital assistants (PDAs), including a Linux PDA with a video graphics array (VGA) display (307,200 pixels, 3.7 inches). The MDS weighs 410 g, has a 60-GB hard disk drive and a built-in wireless local area network (LAN) access point, and supports a DICOM server (Central Test Node). The Linux-based MDS can be accessed with personal computers (PCs) and PDAs by means of a wireless or wired LAN, and client-server communications can be established at any time. DICOM images can be displayed by using any PDA or PC by means of a Web browser. Simultaneous access to the MDS is possible for multiple authenticated users. With most PDAs, image compression is necessary for complete display of DICOM images; however, the VGA screen can display a 512 x 512-pixel DICOM image almost in its entirety. This wireless system allows efficient management of heavy loads of lossless DICOM image data and will be useful for collaborative work by radiologists in education, conferences, and research.
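
    To make the DICOM store path concrete, the sketch below sends a single image to a DICOM storage service with the third-party pynetdicom and pydicom libraries; these libraries, the address, port, AE title and file name are not part of the system described above and are used here only for illustration.

        from pydicom import dcmread
        from pynetdicom import AE
        from pynetdicom.sop_class import CTImageStorage

        # Application entity acting as the storage client (SCU).
        ae = AE(ae_title="MOBILE_SCU")
        ae.add_requested_context(CTImageStorage)

        # Placeholder address of a DICOM server reachable over the wireless LAN.
        assoc = ae.associate("192.168.0.10", 11112)
        if assoc.is_established:
            ds = dcmread("image.dcm")          # local DICOM file to store
            status = assoc.send_c_store(ds)    # issue the C-STORE request
            print("C-STORE status:", status.Status if status else "no response")
            assoc.release()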

  1. The EarthServer Geology Service: web coverage services for geosciences

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2014-05-01

    The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric, atmospheric, oceanographic, planetary, and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remotely sensed data from satellites and planes for some considerable time, but other areas of the geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remotely sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six-band Landsat 7 imagery (Blue, Green, Red, NIR 1, NIR 2, MIR) and three-band false colour aerial photography (NIR, green, blue), totalling around 1 TB. Increasingly, 3D spatial models are being produced in place of traditional geological maps. Models make explicit the spatial information that is implicit on maps and are thus seen as a better way of delivering geoscience information to non-geoscientists. However, web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. As well as remote sensed

  2. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    PubMed

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose PockDrug-Server to predict pocket druggability, efficient on both (i) pockets estimated with ligand-proximity guidance (extracted by proximity to a ligand from a holo protein structure) and (ii) pockets estimated solely from protein structure information (based on the atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus efficient on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr.

  3. EarthServer - Opportunities and challenges of serving ECMWF's peta-sized archive through OGC web-services

    NASA Astrophysics Data System (ADS)

    Wagemann, Julia; Siemen, Stephan; Lamy-Thepaut, Sylvie

    2016-04-01

    ECMWF is a partner in the EU-funded (Horizon 2020) EarthServer-2 project and is setting up a web service that facilitates climate data access, exploration, analysis and visualisation based on Open Geospatial Consortium (OGC) standards. In doing so, ECMWF data will become more easily accessible to researchers and decision-makers in the MetOcean and GIS communities. MARS is ECMWF's Meteorological Archive and Retrieval System, the world's largest archive of meteorological data. In November 2015, the MARS archive held ~87 PB of data and grew by an additional ~3 PB every month. For users to fully benefit from a data volume beyond the petabyte scale, it is in ECMWF's interest as a data provider to minimize the necessary data transport while still providing access to the full range of data and information. The aim of the three-year project is to establish a connection between the rasdaman server technology and ECMWF's MARS archive and thus provide access to more than 1 PB of global reanalysis data served through the OGC standard data access protocols Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). By presenting first results of serving meteorological data, the presentation will show the opportunities that OGC web services offer to data users. A further focus will be on current challenges of serving climate data from ECMWF's archive and on specific requirements of the MetOcean community, e.g. support for GRIB and netCDF data, in order to work collectively towards mature Big Data standards across all Earth science disciplines.
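
    To illustrate the kind of server-side processing WCPS enables, the sketch below sends a small WCPS query over HTTP with Python. The endpoint URL, coverage name and axis labels are placeholders chosen for the example, not ECMWF's actual service; only the general WCS/WCPS request pattern is intended to be representative.

        import requests

        # Placeholder OGC endpoint; substitute the real service URL.
        ENDPOINT = "https://example.org/rasdaman/ows"

        # WCPS query: spatial average of a (hypothetical) reanalysis coverage
        # over a lat/lon box for a single date.
        query = """
        for c in (temperature_reanalysis)
        return avg(c[Lat(40:50), Long(0:10), ansi("2010-01-01")])
        """

        resp = requests.get(ENDPOINT, params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": query,
        })
        resp.raise_for_status()
        print("Spatial mean:", resp.text)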

  4. Hemodialysis access - self care

    MedlinePlus

    Kidney failure - chronic-hemodialysis access; Renal failure - chronic-hemodialysis access; Chronic renal insufficiency - hemodialysis access; Chronic kidney failure - hemodialysis access; Chronic renal failure - hemodialysis access; dialysis - hemodialysis access

  5. Easy Access

    ERIC Educational Resources Information Center

    Gettelman, Alan

    2009-01-01

    School and university restrooms, locker and shower rooms have specific ADA accessibility requirements that serve the needs of staff, students and campus visitors who are disabled as a result of injury, illness or age. Taking good care of them is good for the reputation of a sensitive community institution, and fosters positive public relations.…

  6. Access Denied

    ERIC Educational Resources Information Center

    Raths, David

    2012-01-01

    As faculty members add online and multimedia elements to their courses, colleges and universities across the country are realizing that there is a lot of work to be done to ensure that disabled students (and employees) have equal access to course material and university websites. Unfortunately, far too few schools consider the task a top priority.…

  7. Expanding Access

    ERIC Educational Resources Information Center

    Roach, Ronald

    2007-01-01

    There is no question that the United States lags behind most industrialized nations in consumer access to broadband Internet service. For many policy makers and activists, this shortfall marks the latest phase in the struggle to overcome the digital divide. To remedy this lack of broadband affordability and availability, one start-up firm--with…

  8. Web-Accessible Scientific Workflow System for Performance Monitoring

    SciTech Connect

    Roelof Versteeg; Roelof Versteeg; Trevor Rowe

    2006-03-01

    We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back end using web services. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.

  9. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A. M.; Lowry, R. K.

    2012-12-01

    The Natural Environment Research Council (NERC) Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to the marine environmental sciences since 2006 (version 0), with version 1 being introduced in 2007. It has been used for metadata mark-up with verifiable content, populating dynamic drop-down lists, semantic cross-walk between metadata schemata, so-called smart search, and the semantic enablement of Open Geospatial Consortium (OGC) Web Processing Services in the NERC Data Grid and the European Commission SeaDataNet, Geo-Seas, and European Marine Observation and Data Network (EMODnet) projects. The NVS is based on the Simple Knowledge Organization System (SKOS) model. SKOS is based on the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". Following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes. This version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri, and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included: the removal of the potential for multiple identifiers for the same concept, to ensure consistent addressing of concepts; the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content; the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS; the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base; a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier; and support for multiple human languages, to increase the user base of the NVS.
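
    Because NVS collections are addressable as RDF over plain URLs, their SKOS content can be read with a generic RDF library. The sketch below fetches one collection and prints its preferred labels; the collection URL is a placeholder following the pattern of addressable collections described above, and rdflib is simply one convenient choice of library.

        from rdflib import Graph, Namespace

        SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

        # Placeholder collection URL; substitute a real NVS collection URI.
        url = "http://vocab.example.org/collection/P01/current/"

        g = Graph()
        g.parse(url)  # the server is expected to return an RDF serialization

        for concept, label in g.subject_objects(SKOS.prefLabel):
            print(concept, "->", label)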

  10. Introducing djatoka: a reuse friendly, open source JPEG image server

    SciTech Connect

    Chute, Ryan M; Van De Sompel, Herbert

    2008-01-01

    The ISO-standardized JPEG 2000 image format has started to attract significant attention. Support for the format is emerging in major consumer applications, and the cultural heritage community seriously considers it a viable format for digital preservation. So far, only commercial image servers with JPEG 2000 support have been available. They come with significant license fees and typically provide the customers with limited extensibility capabilities. Here, we introduce djatoka, an open source JPEG 2000 image server with an attractive basic feature set, and extensibility under control of the community of implementers. We describe djatoka, and point at demonstrations that feature digitized images of marvelous historical manuscripts from the collections of the British Library and the University of Ghent. We also call upon the community to engage in further development of djatoka.

  11. Peptiderive server: derive peptide inhibitors from protein-protein interactions.

    PubMed

    Sedan, Yuval; Marcu, Orly; Lyskov, Sergey; Schueler-Furman, Ora

    2016-07-01

    The Rosetta Peptiderive protocol identifies, in a given structure of a protein-protein interaction, the linear polypeptide segment suggested to contribute most to binding energy. Interactions that feature a 'hot segment', a linear peptide with significant binding energy compared to that of the complex, may be amenable for inhibition and the peptide sequence and structure derived from the interaction provide a starting point for rational drug design. Here we present a web server for Peptiderive, which is incorporated within the ROSIE web interface for Rosetta protocols. A new feature of the protocol also evaluates whether derived peptides are good candidates for cyclization. Fast computation times and clear visualization allow users to quickly assess the interaction of interest. The Peptiderive server is available for free use at http://rosie.rosettacommons.org/peptiderive. PMID:27141963

  12. User Evaluation of the NASA Technical Report Server Recommendation Service

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Bollen, Johan; Calhoun, JoAnne R.; Mackey, Calvin E.

    2004-01-01

    We present the user evaluation of two recommendation server methodologies implemented for the NASA Technical Report Server (NTRS). One methodology for generating recommendations uses log analysis to identify co-retrieval events on full-text documents. For comparison, we used the Vector Space Model (VSM) as the second methodology. We calculated cosine similarities and used the top 10 most similar documents (based on metadata) as 'recommendations'. We then ran an experiment with NASA Langley Research Center (LaRC) staff members to gather their feedback on which method produced the most 'quality' recommendations. We found that in most cases VSM outperformed log analysis of co-retrievals. However, analyzing the data revealed the evaluations may have been structurally biased in favor of the VSM generated recommendations. We explore some possible methods for combining log analysis and VSM generated recommendations and suggest areas of future work.
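
    A minimal sketch of the VSM half of this comparison, assuming the metadata records are available as plain-text strings: build TF-IDF vectors and take the ten most similar records as recommendations. scikit-learn is used purely for illustration; the record does not state which implementation NTRS used.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Toy metadata records standing in for NTRS title/abstract fields.
        records = [
            "hypersonic boundary layer transition experiments",
            "boundary layer transition on a flat plate",
            "recommendation services based on retrieval log analysis",
            "wind tunnel measurements of transition onset",
        ]

        tfidf = TfidfVectorizer(stop_words="english")
        vectors = tfidf.fit_transform(records)

        # Cosine similarity of record 0 against all records; keep the top 10,
        # excluding the record itself.
        sims = cosine_similarity(vectors[0], vectors).ravel()
        top = sims.argsort()[::-1][1:11]
        print("Recommended record indices:", top.tolist())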

  13. Experience of public procurement of Open Compute servers

    NASA Astrophysics Data System (ADS)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  14. An empirical performance analysis of commodity memories in commodity servers

    SciTech Connect

    Kerbyson, D. J.; Lang, M. K.; Patino, G.

    2004-01-01

    This work details a performance study of six different commodity memories in two commodity server nodes on a number of microbenchmarks that measure low-level performance characteristics, as well as on two applications representative of the ASCI workload. The memories vary both in terms of performance, including latency and bandwidth, and in terms of their physical properties and manufacturer. Two server nodes were used: one Itanium-II Madison based system and one Xeon based system. All the memories examined can be used within both processing nodes. This allows the performance of the memories to be directly examined while keeping all other factors within a processing node the same (processor, motherboard, operating system, etc.). The results of this study show that there can be a significant difference in application performance from the different memories, by as much as 20%. Thus, by choosing the most appropriate memory for a processing node at a minimal cost differential, significantly improved performance may be achievable.

  15. DSP: a protein shape string and its profile prediction server.

    PubMed

    Sun, Jiangming; Tang, Shengnan; Xiong, Wenwei; Cong, Peisheng; Li, Tonghua

    2012-07-01

    Many studies have demonstrated that the shape string is an extremely important structure representation, since it is more complete than the classical secondary structure. The shape string also provides detailed information in the regions denoted random coil. However, few services are available for systematic analysis of protein shape strings. To fill this gap, we have developed an accurate shape string predictor based on two innovative technologies: a knowledge-driven sequence alignment and a sequence shape string profile method. The performance on blind test data demonstrates that the proposed method can be used for accurate prediction of protein shape strings. The DSP server provides both the predicted shape string and the sequence shape string profile for each query sequence. Using this information, users can compare protein structures or display protein evolution in shape string space. The DSP server is available at both http://cheminfo.tongji.edu.cn/dsp/ and its main mirror http://chemcenter.tongji.edu.cn/dsp/.

  16. User Evaluation of the NASA Technical Report Server Recommendation Service

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Bollen, Johan; Calhoun, JoAnne R.; Mackey, Calvin E.

    2004-01-01

    We present the user evaluation of two recommendation server methodologies implemented for the NASA Technical Report Server (NTRS). One methodology for generating recommendations uses log analysis to identify co-retrieval events on full-text documents. For comparison, we used the Vector Space Model (VSM) as the second methodology. We calculated cosine similarities and used the top 10 most similar documents (based on metadata) as 'recommendations'. We then ran an experiment with NASA Langley Research Center (LaRC) staff members to gather their feedback on which method produced the most 'quality' recommendations. We found that in most cases VSM outperformed log analysis of co-retrievals. However, analyzing the data revealed the evaluations may have been structurally biased in favor of the VSM generated recommendations. We explore some possible methods for combining log analysis and VSM generated recommendations and suggest areas of future work.

  17. Peptiderive server: derive peptide inhibitors from protein–protein interactions

    PubMed Central

    Sedan, Yuval; Marcu, Orly; Lyskov, Sergey; Schueler-Furman, Ora

    2016-01-01

    The Rosetta Peptiderive protocol identifies, in a given structure of a protein–protein interaction, the linear polypeptide segment suggested to contribute most to binding energy. Interactions that feature a ‘hot segment’, a linear peptide with significant binding energy compared to that of the complex, may be amenable for inhibition and the peptide sequence and structure derived from the interaction provide a starting point for rational drug design. Here we present a web server for Peptiderive, which is incorporated within the ROSIE web interface for Rosetta protocols. A new feature of the protocol also evaluates whether derived peptides are good candidates for cyclization. Fast computation times and clear visualization allow users to quickly assess the interaction of interest. The Peptiderive server is available for free use at http://rosie.rosettacommons.org/peptiderive. PMID:27141963

  18. Berkeley Phylogenomics Group web servers: resources for structural phylogenomic analysis.

    PubMed

    Glanville, Jake Gunn; Kirshner, Dan; Krishnamurthy, Nandini; Sjölander, Kimmen

    2007-07-01

    Phylogenomic analysis addresses the limitations of function prediction based on annotation transfer, and has been shown to enable the highest accuracy in prediction of protein molecular function. The Berkeley Phylogenomics Group provides a series of web servers for phylogenomic analysis: classification of sequences to pre-computed families and subfamilies using the PhyloFacts Phylogenomic Encyclopedia, FlowerPower clustering of proteins sharing the same domain architecture, MUSCLE multiple sequence alignment, SATCHMO simultaneous alignment and tree construction and SCI-PHY subfamily identification. The PhyloBuilder web server provides an integrated phylogenomic pipeline starting with a user-supplied protein sequence, proceeding to homolog identification, multiple alignment, phylogenetic tree construction, subfamily identification and structure prediction. The Berkeley Phylogenomics Group resources are available at http://phylogenomics.berkeley.edu.

  19. Architecture: client/server moves into managed healthcare.

    PubMed

    Worthington, R

    1997-01-01

    The healthcare industry is in transition from indemnity-based products to managed care during a period marked by consolidation, competitiveness and increasingly demanding consumers. This powerful combination of industry change and customer interest requires more efficient operations and flexible information systems. Host-based managed care systems are running into limitations meeting business needs, creating a demand for client/server architectures. PMID:10164671

  20. Sharing limited Ethernet resources with a client-server model

    NASA Astrophysics Data System (ADS)

    Brownless, D. M.; Burton, P. D.

    1994-12-01

    The new control system proposed for the ISIS facility at Rutherford uses an Ethernet spine to provide mutual communications between disparate equipment, including the control computers. This paper describes the limitations imposed on the use of Ethernet in Local/Wide Area Networks and how a client-server based system can be used to circumvent them. The actual system we developed is discussed with particular reference to the problems we have faced, the implementation of data standards, and the performance statistics attained.

  1. GPCR & company: databases and servers for GPCRs and interacting partners.

    PubMed

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

    G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling usage of a user-supplied template for GPCR structural modeling. PMID:24158806

  2. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
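
    A compact sketch of the bag-of-words backend described above, assuming a small set of database images on disk: local ORB descriptors are quantized against a visual vocabulary and images are ranked by histogram similarity. OpenCV and scikit-learn stand in for the unspecified implementation, and the file names are placeholders.

        import cv2
        import numpy as np
        from sklearn.cluster import MiniBatchKMeans

        K = 200  # visual vocabulary size (illustrative)
        orb = cv2.ORB_create(nfeatures=500)

        def descriptors(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, des = orb.detectAndCompute(img, None)
            if des is None:
                return np.empty((0, 32), np.float32)
            return des.astype(np.float32)

        db_paths = ["db0.jpg", "db1.jpg", "db2.jpg"]  # placeholder database images
        db_des = [descriptors(p) for p in db_paths]

        # Train the visual vocabulary on all database descriptors.
        kmeans = MiniBatchKMeans(n_clusters=K, random_state=0).fit(np.vstack(db_des))

        def bow_histogram(des):
            if len(des) == 0:
                return np.zeros(K, np.float32)
            hist = np.bincount(kmeans.predict(des), minlength=K).astype(np.float32)
            return hist / (np.linalg.norm(hist) + 1e-9)

        db_hist = np.stack([bow_histogram(d) for d in db_des])

        # Rank database images against a query image by cosine similarity.
        query = bow_histogram(descriptors("query.jpg"))
        ranking = np.argsort(db_hist @ query)[::-1]
        print("Most similar database images:", [db_paths[i] for i in ranking])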

  3. DINAMelt web server for nucleic acid melting prediction

    PubMed Central

    Markham, Nicholas R.; Zuker, Michael

    2005-01-01

    The DINAMelt web server simulates the melting of one or two single-stranded nucleic acids in solution. The goal is to predict not just a melting temperature for a hybridized pair of nucleic acids, but entire equilibrium melting profiles as a function of temperature. The two molecules are not required to be complementary, nor must the two strand concentrations be equal. Competition among different molecular species is automatically taken into account. Calculations consider not only the heterodimer, but also the two possible homodimers, as well as the folding of each single-stranded molecule. For each of these five molecular species, free energies are computed by summing Boltzmann factors over every possible hybridized or folded state. For temperatures within a user-specified range, calculations predict species mole fractions together with the free energy, enthalpy, entropy and heat capacity of the ensemble. Ultraviolet (UV) absorbance at 260 nm is simulated using published extinction coefficients and computed base pair probabilities. All results are available as text files and plots are provided for species concentrations, heat capacity and UV absorbance versus temperature. This server is connected to an active research program and should evolve as new theory and software are developed. The server URL is . PMID:15980540
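
    The statistical-mechanical core described above can be summarized by the standard relations below; this is a sketch of the usual formulation implied by the abstract, not DINAMelt's exact implementation. Each species' free energy comes from a Boltzmann sum over its hybridized or folded states, and species concentrations follow from the coupled equilibria at each temperature.

        Z_i(T) = \sum_{s \in \mathrm{states}(i)} e^{-\Delta G_s / RT},
        \qquad
        \Delta G_i(T) = -RT \ln Z_i(T)

        K_{AB}(T) = \frac{[AB]}{[A]\,[B]} = e^{-\Delta G_{AB}(T) / RT}

    Mole fractions at each temperature are obtained by solving such mass-action relations together with the strand mass balances; ensemble enthalpy, entropy and heat capacity then follow from temperature derivatives.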

  4. ACFIS: a web server for fragment-based drug discovery

    PubMed Central

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-01-01

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown ‘chemical space’ to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for ‘chemical space’, which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. PMID:27150808

  5. [Communication server in the hospital--advantages, expenses and limitations].

    PubMed

    Jendrysiak, U

    1997-01-01

    The common situation in a hospital with multiple departments is a heterogeneous set of subsystems, one or more for each department. Today, we have a rising number of requests for information interchange between these independent systems. The exchange of patient data has a technical and a conceptual part. Establishing a connection between more than two subsystems requires links from one system to all the others, each of them with its own code translation, interface and message transfer. A communication server is an important tool for significantly reducing the amount of work required for the technical realisation. It reduces the number of interfaces, facilitates the definition, maintenance and documentation of the message structure and translation tables, and helps to keep control of the message pipelines. Existing interfaces can be adapted for similar purposes. However, a communication server needs a lot of configuration, and it is necessary to know about low-level internetworking on different hardware and software to take advantage of its features. The code for writing files on a remote system and for process communication via TCP/IP sockets or similar techniques has to be written specifically for each communication task. First experience has been gained at the university school of medicine in Mainz in setting up a communication server to connect different departments. We have also compiled a checklist for the selection of such a product. PMID:9381841
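
    To make the point-to-point socket programming mentioned above concrete, here is a minimal Python sketch of one subsystem sending a length-prefixed message to another and waiting for an acknowledgement; the host, port and message content are placeholders, and real systems would use an agreed message format such as HL7.

        import socket

        HOST, PORT = "127.0.0.1", 6000  # placeholder address of the receiving subsystem

        def send_message(payload: bytes) -> bytes:
            """Send one length-prefixed message and return the acknowledgement."""
            with socket.create_connection((HOST, PORT)) as sock:
                sock.sendall(len(payload).to_bytes(4, "big") + payload)
                return sock.recv(1024)

        # A toy patient-data message; assumes a peer process is listening on PORT.
        ack = send_message(b"PID|12345|DOE^JOHN")
        print("acknowledgement:", ack)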

  6. (PS)2: protein structure prediction server version 3.0.

    PubMed

    Huang, Tsun-Tsao; Hwang, Jenn-Kang; Chen, Chu-Huang; Chu, Chih-Sheng; Lee, Chi-Wen; Chen, Chih-Chieh

    2015-07-01

    Protein complexes are involved in many biological processes. Examining coupling between subunits of a complex would be useful to understand the molecular basis of protein function. Here, our updated (PS)(2) web server predicts the three-dimensional structures of protein complexes based on comparative modeling; furthermore, this server examines the coupling between subunits of the predicted complex by combining structural and evolutionary considerations. The predicted complex structure could be indicated and visualized by Java-based 3D graphics viewers and the structural and evolutionary profiles are shown and compared chain-by-chain. For each subunit, considerations with or without the packing contribution of other subunits cause the differences in similarities between structural and evolutionary profiles, and these differences imply which form, complex or monomeric, is preferred in the biological condition for the subunit. We believe that the (PS)(2) server would be a useful tool for biologists who are interested not only in the structures of protein complexes but also in the coupling between subunits of the complexes. The (PS)(2) is freely available at http://ps2v3.life.nctu.edu.tw/. PMID:25943546

  7. ACFIS: a web server for fragment-based drug discovery.

    PubMed

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-07-01

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown 'chemical space' to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for 'chemical space', which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. PMID:27150808

  8. (PS)2: protein structure prediction server version 3.0.

    PubMed

    Huang, Tsun-Tsao; Hwang, Jenn-Kang; Chen, Chu-Huang; Chu, Chih-Sheng; Lee, Chi-Wen; Chen, Chih-Chieh

    2015-07-01

    Protein complexes are involved in many biological processes. Examining coupling between subunits of a complex would be useful to understand the molecular basis of protein function. Here, our updated (PS)(2) web server predicts the three-dimensional structures of protein complexes based on comparative modeling; furthermore, this server examines the coupling between subunits of the predicted complex by combining structural and evolutionary considerations. The predicted complex structure could be indicated and visualized by Java-based 3D graphics viewers and the structural and evolutionary profiles are shown and compared chain-by-chain. For each subunit, considerations with or without the packing contribution of other subunits cause the differences in similarities between structural and evolutionary profiles, and these differences imply which form, complex or monomeric, is preferred in the biological condition for the subunit. We believe that the (PS)(2) server would be a useful tool for biologists who are interested not only in the structures of protein complexes but also in the coupling between subunits of the complexes. The (PS)(2) is freely available at http://ps2v3.life.nctu.edu.tw/.

  9. GPCR & company: databases and servers for GPCRs and interacting partners.

    PubMed

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

    G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling usage of a user-supplied template for GPCR structural modeling.

  10. A rapid bootstrap algorithm for the RAxML Web servers.

    PubMed

    Stamatakis, Alexandros; Hoover, Paul; Rougemont, Jacques

    2008-10-01

    Despite recent advances achieved by the application of high-performance computing methods and novel algorithmic techniques to maximum likelihood (ML)-based inference programs, the major computational bottleneck still consists in the computation of bootstrap support values. Conducting a probably insufficient number of 100 bootstrap (BS) analyses with current ML programs on large datasets, either with respect to the number of taxa or base pairs, can easily require a month of run time. Therefore, we have developed, implemented, and thoroughly tested rapid bootstrap heuristics in RAxML (Randomized Axelerated Maximum Likelihood) that are more than an order of magnitude faster than current algorithms. These new heuristics can contribute to resolving the computational bottleneck and improve current methodology in phylogenetic analyses. Computational experiments to assess the performance and relative accuracy of these heuristics were conducted on 22 diverse DNA and AA (amino acid), single-gene as well as multigene, real-world alignments containing 125 to 7764 sequences. The standard BS (SBS) and rapid BS (RBS) values drawn on the best-scoring ML tree are highly correlated and show almost identical average support values. The weighted RF (Robinson-Foulds) distance between SBS- and RBS-based consensus trees was smaller than 6% in all cases (average 4%). More importantly, RBS inferences are between 8 and 20 times faster (average 14.73) than SBS analyses with RAxML and between 18 and 495 times faster than BS analyses with competing programs, such as PHYML or GARLI. Moreover, this performance improvement increases with alignment size. Finally, we have set up two freely accessible Web servers for this significantly improved version of RAxML that provide access to the 200-CPU cluster of the Vital-IT unit at the Swiss Institute of Bioinformatics and the 128-CPU cluster of the CIPRES project at the San Diego Supercomputer Center. These Web servers offer the possibility to conduct

  11. Climate Data Service in the FP7 EarthServer Project

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Grazia Veratelli, Maria

    2013-04-01

    EarthServer is a European Framework Program project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In order to demonstrate the feasibility of the approach, six thematic Lighthouse Applications (Cryospheric Science, Airborne Science, Atmospheric/Climate Science, Geology, Oceanography, and Planetary Science), each with 100+ TB, are implemented. The scope of the Atmospheric/Climate lighthouse application (Climate Data Service) is to implement a system containing global to regional 2D/3D/4D datasets retrieved from satellite observations, numerical modelling and in-situ observations. The Climate Data Service contains atmospheric profiles of temperature/humidity, aerosol content, AOT, and cloud properties provided by entities such as the European Centre for Medium-Range Weather Forecasts (ECMWF), the Austrian Meteorological Service (Zentralanstalt für Meteorologie und Geodynamik - ZAMG), the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), and the Swedish Meteorological and Hydrological Institute (Sveriges Meteorologiska och Hydrologiska Institut - SMHI). Through an easy-to-use web application, the system permits users to browse the loaded data, visualize the temporal evolution at a specific point as 2D graphs of a single field, compare different fields at the same point (e.g. temperatures from different models and satellite observations), and visualize maps of specific fields superimposed on high-resolution background maps. All data access and display operations are performed by means of OGC standard services, namely WMS, WCS and WCPS. The EarthServer project has just started the second year of its three-year development plan: at present the system contains subsets of the final database, with the scope of

  12. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets

    PubMed Central

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-01-01

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. PMID:27084938
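
    Because the API is exposed over XML-RPC, it can be reached with Python's standard library alone. In the sketch below the endpoint path, the method names and the anonymous key are assumptions made for illustration; the DeepBlue manual and API documentation define the actual interface.

        import xmlrpc.client

        # Assumed XML-RPC endpoint on the server named in the record.
        url = "http://deepblue.mpi-inf.mpg.de/xmlrpc"
        deepblue = xmlrpc.client.ServerProxy(url, allow_none=True)

        user_key = "anonymous_key"  # assumed key for anonymous, read-only access

        print(deepblue.echo(user_key))          # connectivity check (assumed method)
        print(deepblue.list_genomes(user_key))  # list available genomes (assumed method)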

  13. Server Cache Synchronization Protocol (SCSP): component for directory-enabled networks

    NASA Astrophysics Data System (ADS)

    Costa Requena, Jose; Kantola, Raimo

    1999-11-01

    This paper describes and analyses a solution to the problem of data synchronization and replication for distributed entities, such as directories, in IP communication networks. We discuss the role of directories in the developing IP communications service infrastructure. The data replication solution we have implemented is based on the protocol specifications for the Internet titled 'Server Cache Synchronization Protocol' (SCSP). We review the requirements of using and maintaining data that is shared among many applications while the data resides in different physical locations. We give a brief description of the SCSP and discuss its implementation. We point out some possible applications for the protocol in a mixed IP/ISDN network. We also review some alternative approaches to directory services. In conclusion we propose the SCSP as a component for directory-enabled networks, a concept emphasizing the key role of directories in the merging communications infrastructure. New emerging services manage large amounts of data. To facilitate data management, the data is distributed over different locations following directory structures in which the information is close to the customer location. The main goal is to achieve a global service accessible from everywhere, independently of the location from which the user is accessing the service.

  14. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets.

    PubMed

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-07-01

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de.

  15. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets.

    PubMed

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-07-01

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. PMID:27084938

  16. Prototype client/server application for biomedical text/image retrieval on the Internet

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Berman, Lewis E.; Thoma, George R.

    1996-03-01

    At the Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), a prototype image database retrieval system has been built. This medical information retrieval system (MIRS) is a client/server application which provides Internet access to biomedical databases, including both text search/retrieval and retrieval/display of medical images associated with the text records. The MIRS graphical user interface (GUI) allows a user to formulate queries by simple, intuitive interactions with screen buttons, list boxes, and edit boxes; these interactions create structured query language (SQL) queries, which are submitted to a database manager running at NLM. The result of a MIRS query is a display showing both scrollable text records and scrollable images returned for all of the 'hits' of the query. MIRS is designed as an information-delivery vehicle intended to provide access to multiple collections of medical text and image data. The database used for initial MIRS evaluation consists of national survey data collected by the National Center for Health Statistics, including 17,000 spinal x-ray images. This survey, conducted on a sample of 27,801 persons, collected demographic, socioeconomic, and medical information, including both interview results and results acquired by direct examination by physician.
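
    The abstract notes that GUI interactions are translated into SQL queries that are submitted to the database manager at NLM. The fragment below is a hypothetical sketch of that translation step; the table and column names are invented and do not come from MIRS.

```python
# Hypothetical translation of GUI selections into a parameterized SQL query,
# in the spirit of the client/server flow described above. Table and column
# names are invented; the field list comes from a fixed set of UI choices.
def build_query(selected_fields, age_range, sex):
    columns = ", ".join(selected_fields)
    sql = (f"SELECT {columns} FROM survey_records "
           "WHERE age BETWEEN ? AND ? AND sex = ?")
    params = (age_range[0], age_range[1], sex)
    return sql, params

sql, params = build_query(["subject_id", "xray_image_id"], (30, 50), "F")
print(sql, params)
```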

  17. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
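
    The query feedback idea described above, regressing on observed values to refine selectivity estimates, can be illustrated with a small least-squares sketch. The data points and the choice of a quadratic fit are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of query feedback: fit a curve to observed
# (predicate constant, selectivity) pairs so that later estimates reflect
# the actual data distribution. The data points are invented.
import numpy as np

observed_values = np.array([10.0, 20.0, 30.0, 40.0, 50.0])       # constants seen in past queries
observed_selectivity = np.array([0.02, 0.05, 0.11, 0.18, 0.30])  # fraction of rows returned

coeffs = np.polyfit(observed_values, observed_selectivity, deg=2)  # least-squares fit
estimate = np.polyval(coeffs, 35.0)  # refined estimate for a new query constant
print(f"estimated selectivity at 35: {estimate:.3f}")
```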

  18. The ARAC client system: network-based access to ARAC

    SciTech Connect

    Leach, M J; Sumikawa, D; Webster, C

    1999-07-12

    The ARAC Client System allows users (such as emergency managers and first responders) with commonly available desktop and laptop computers to utilize the central ARAC system over the Internet or any other communications link using Internet protocols. Providing cost-effective fast access to the central ARAC system greatly expands the availability of the ARAC capability. The ARAC Client system consists of (1) local client applications running on the remote user's computer, and (2) 'site servers' that provide secure access to selected central ARAC system capabilities and run on a scalable number of dedicated workstations residing at the central facility. The remote client applications allow users to describe a real or potential chem-bio event, electronically send this information to the central ARAC system, which performs model calculations, and quickly receive and visualize the resulting graphical products. The site servers will support simultaneous access to ARAC capabilities by multiple users. The ARAC Client system is based on object-oriented client/server and distributed computing technologies using CORBA and Java, and consists of a large number of interacting components.

  19. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins

    PubMed Central

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-01-01

    Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651

  20. Accessing Heliophysics Timeseries Data Through a Single Interface

    NASA Astrophysics Data System (ADS)

    Vandegriff, J. D.; Brown, L. E.; Bazell, D.; Faden, J.

    2015-12-01

    We present a simple interface for digital access to tabular time series data. The intended use for this interface is to provide a standard access mechanism for existing holdings of Heliophysics data from NASA missions. Furthermore, the interface is not intended to target any particular tool, but is intended as low-level infrastructure allowing any tool to use a single interface to access the digital content of all Heliophysics timeseries data. The interface addresses only data access, not data discovery. The query structure itself is very simple, taking only a few inputs: dataset name, time range, parameter list, and output format. The result of the query is a stream of data that is independent of the storage format on the server. Currently, most data centers offer some type of computer-to-computer access mechanism, but each has unique features and usage patterns (some give files in a specific format, some stream data, etc.) so that they all require different client code to extract data. A single, simple, lowest common denominator solution is clearly still needed. We present a prototype service implementing our basic interface, and discuss similarities and differences between our interface and other similar existing data access mechanisms, including the web services at CDAWeb, OPeNDAP, the Das2Server mechanism of Autoplot, and options based on the VOTable mechanism from the astronomy community. URL: http://datashop.elasticbeanstalk.com/
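
    The four query inputs listed above map naturally onto an HTTP request. The sketch below shows one plausible way to issue such a request against the prototype endpoint; the path and parameter names are assumptions, not the actual interface specification.

```python
# Sketch of a time-series request built from the four inputs named in the
# abstract. The endpoint path and parameter names are assumed for
# illustration and are not the actual interface definition.
import urllib.parse
import urllib.request

base = "http://datashop.elasticbeanstalk.com/data"   # assumed path
query = urllib.parse.urlencode({
    "id": "ExampleDataset",                # dataset name (hypothetical)
    "time.min": "2015-01-01T00:00:00Z",    # start of time range
    "time.max": "2015-01-02T00:00:00Z",    # end of time range
    "parameters": "Bx,By,Bz",              # requested parameters (hypothetical)
    "format": "csv",                       # output format
})
with urllib.request.urlopen(f"{base}?{query}") as resp:
    print(resp.read()[:200])  # a data stream independent of server-side storage format
```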

  1. Hemodialysis access procedures

    MedlinePlus

    Kidney failure - chronic-dialysis access; Renal failure - chronic-dialysis access; Chronic renal insufficiency-dialysis access; Chronic kidney failure-dialysis access; Chronic renal failure-dialysis access

  2. TFmiR: a web server for constructing and analyzing disease-specific transcription factor and miRNA co-regulatory networks.

    PubMed

    Hamed, Mohamed; Spaniol, Christian; Nazarieh, Maryam; Helms, Volkhard

    2015-07-01

    TFmiR is a freely available web server for deep and integrative analysis of combinatorial regulatory interactions between transcription factors, microRNAs and target genes that are involved in disease pathogenesis. Since the inner workings of cells rely on the correct functioning of an enormously complex system of activating and repressing interactions that can be perturbed in many ways, TFmiR helps to better elucidate cellular mechanisms at the molecular level from a network perspective. The provided topological and functional analyses promote TFmiR as a reliable systems biology tool for researchers across the life science communities. TFmiR web server is accessible through the following URL: http://service.bioinformatik.uni-saarland.de/tfmir.

  3. TFmiR: a web server for constructing and analyzing disease-specific transcription factor and miRNA co-regulatory networks.

    PubMed

    Hamed, Mohamed; Spaniol, Christian; Nazarieh, Maryam; Helms, Volkhard

    2015-07-01

    TFmiR is a freely available web server for deep and integrative analysis of combinatorial regulatory interactions between transcription factors, microRNAs and target genes that are involved in disease pathogenesis. Since the inner workings of cells rely on the correct functioning of an enormously complex system of activating and repressing interactions that can be perturbed in many ways, TFmiR helps to better elucidate cellular mechanisms at the molecular level from a network perspective. The provided topological and functional analyses promote TFmiR as a reliable systems biology tool for researchers across the life science communities. TFmiR web server is accessible through the following URL: http://service.bioinformatik.uni-saarland.de/tfmir. PMID:25943543

  4. Computation of direct and inverse mutations with the SEGM web server (Stochastic Evolution of Genetic Motifs): an application to splice sites of human genome introns.

    PubMed

    Benard, Emmanuel; Michel, Christian J

    2009-08-01

    We present here the SEGM web server (Stochastic Evolution of Genetic Motifs) in order to study the evolution of genetic motifs both in the direct evolutionary sense (past-present) and in the inverse evolutionary sense (present-past). The genetic motifs studied can be nucleotides, dinucleotides and trinucleotides. As an example of an application of SEGM and to understand its functionalities, we give an analysis of inverse mutations of splice sites of human genome introns. SEGM is freely accessible at http://lsiit-bioinfo.u-strasbg.fr:8080/webMathematica/SEGM/SEGM.html directly or by the web site http://dpt-info.u-strasbg.fr/~michel/. To our knowledge, this SEGM web server is to date the only computational biology software in this evolutionary approach.

  5. Remote Access to Earth Science Data by Content, Space and Time

    NASA Technical Reports Server (NTRS)

    Dobinson, E.; Raskin, G.

    1998-01-01

    This demo presents the combination of an HTTP-based client/server application that facilitates Internet access to Earth science data coupled with a Java applet GUI that allows the user to graphically select data based on spatial and temporal coverage plots and scientific parameters.

  6. The Public-Access Computer Systems Forum: A Computer Conference on BITNET.

    ERIC Educational Resources Information Center

    Bailey, Charles W., Jr.

    1990-01-01

    Describes the Public Access Computer Systems Forum (PACS), a computer conference that deals with all computer systems that libraries make available to patrons. Areas discussed include how to subscribe to PACS, message distribution and retrieval capabilities, subscriber lists, documentation, generic list server capabilities, and other…

  7. Challenges in providing general access to digitized x rays over the Internet

    NASA Astrophysics Data System (ADS)

    Berman, Lewis E.; Long, L. Rodney; Thoma, George R.

    1995-01-01

    As part of a collaborative project with other government agencies, the National Library of Medicine (NLM) is engaged in the development of an electronic archive of digitized cervical and lumbar spine x-rays taken in the course of nationwide health and nutrition examination surveys. One goal of the project is to provide access to the images via a client/server system specifically designed to enable radiologists located anywhere on the Internet to read them and enter their readings into a database at the server located at NLM. Another key goal is to provide general (public) access to these images, the radiologists' readings, and other collateral data taken during the survey. The system developed for such general access is based on a public domain server, the World Wide Web (WWW), and NCSA Mosaic, a distributed hypermedia client system designed for information retrieval over the Internet. This paper describes the design of the client/server software, the storage environment for the x-ray archive, the user interface, the communications software, and the public access archive. Design issues include file format, image resolution (both spatial and contrast), compression alternatives, linking collateral data with images, and the role of staging and prefetching.

  8. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success. PMID:18037727

  9. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
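
    The forager-recruitment analogy can be made concrete with a toy rule in which services that recently earned more revenue per server attract a larger share of the ensemble. The sketch below is only a conceptual illustration and is not the algorithm derived in the paper.

```python
# Toy proportional re-allocation loosely inspired by forager recruitment:
# services with higher recent revenue per server attract more servers.
# A real scheme would also repair rounding so the counts sum exactly to the
# ensemble size and would account for reallocation cost.
def reallocate(total_servers, revenue_per_server):
    total = sum(revenue_per_server.values())
    if total == 0:
        share = {s: 1.0 / len(revenue_per_server) for s in revenue_per_server}
    else:
        share = {s: r / total for s, r in revenue_per_server.items()}
    return {s: max(1, round(total_servers * p)) for s, p in share.items()}

print(reallocate(20, {"web-shop": 5.0, "video": 2.0, "mail": 1.0}))
```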

  10. Tcoffee@igs: A web server for computing, evaluating and combining multiple sequence alignments.

    PubMed

    Poirot, Olivier; O'Toole, Eamonn; Notredame, Cedric

    2003-07-01

    This paper presents Tcoffee@igs, a new server provided to the community by Hewlett-Packard computers and the Centre National de la Recherche Scientifique. This server is a web-based tool dedicated to the computation, the evaluation and the combination of multiple sequence alignments. It uses the latest version of the T-Coffee package. Given a set of unaligned sequences, the server returns an evaluated multiple sequence alignment and the associated phylogenetic tree. This server also makes it possible to evaluate the local reliability of an existing alignment and to combine several alternative multiple alignments into a single new one. Tcoffee@igs can be used for aligning protein, RNA or DNA sequences. Datasets of up to 100 sequences (2000 residues long) can be processed. The server and its documentation are available from: http://igs-server.cnrs-mrs.fr/Tcoffee/.

  11. Software Architectures Expressly Designed to Promote Open Source Development: Using the Hyrax Data Server as a Case Study

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; West, P.; Potter, N.; Johnson, M.

    2009-12-01

    Data providers are continually looking for new, faster, and more functional ways of providing data to researchers in varying scientific communities. To help achieve this, OPeNDAP has developed a modular framework that provides the ability to pick and choose existing module plug-ins, as well as develop new module plug-ins, to construct customizable data servers. The data server framework uses the Data Access Protocol as the basis of its network interface, so any client application that can read that protocol can read data from one of these servers. In this poster/presentation we explore three new capabilities recently developed using new plug-in modules and how the framework's architecture enables considerable economy of design and implementation for those plug-in modules. The three capabilities are to return data packaged in a specific file format, regardless of the original format in which the data were stored; combining an existing data set with new metadata information without modifying the original data; and building and returning an RDF representation for data. In all cases these new features are independent of the data's native storage format, meaning that they will work both with all of the existing format modules as well as modules as yet undeveloped. In addition, we discuss how this architecture has characteristics that are very desirable for a highly distributed open source project where individual developers have minimal (or no) person-to-person contact. Such a design enables a project to make the most of open source development's strengths.
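
    The modular design described above, in which new capabilities work regardless of the data's native storage format, can be sketched as a small registry of storage-format handlers behind a common in-memory representation that output modules consume. All class and function names below are invented for illustration and are unrelated to the actual Hyrax/OPeNDAP code.

```python
# Sketch of a plug-in registry: storage-format handlers produce a common
# in-memory representation (rows), and output modules consume that
# representation without knowing the native storage format. Names invented.
import json

HANDLERS = {}

def register(extension):
    def wrap(cls):
        HANDLERS[extension] = cls
        return cls
    return wrap

@register(".csv")
class CsvHandler:
    def read(self, path):
        with open(path) as fh:
            return [line.rstrip("\n").split(",") for line in fh]

@register(".tsv")
class TsvHandler:
    def read(self, path):
        with open(path) as fh:
            return [line.rstrip("\n").split("\t") for line in fh]

def as_json(rows):
    return json.dumps(rows)            # one possible output module

def serve(path, output_module=as_json):
    ext = path[path.rfind("."):]
    rows = HANDLERS[ext]().read(path)  # storage-format plug-in
    return output_module(rows)         # format-independent output plug-in
```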

  12. ArchPRED: a template based loop structure prediction server.

    PubMed

    Fernandez-Fuentes, Narcis; Zhai, Jun; Fiser, András

    2006-07-01

    ArchPRED server (http://www.fiserlab.org/servers/archpred) implements a novel fragment-search based method for predicting loop conformations. The inputs to the server are the atomic coordinates of the query protein and the position of the loop. The algorithm selects candidate loop fragments from a regularly updated loop library (Search Space) by matching the length, the types of bracing secondary structures of the query and by satisfying the geometrical restraints imposed by the stem residues. Subsequently, candidate loops are inserted in the query protein framework where their side chains are rebuilt and their fit is assessed by the root mean square deviation (r.m.s.d.) of stem regions and by the number of rigid body clashes with the environment. In the final step remaining candidate loops are ranked by a Z-score that combines information on sequence similarity and fit of predicted and observed [phi/psi] main chain dihedral angle propensities. The final loop conformation is built in the protein structure and annealed in the environment using conjugate gradient minimization. The prediction method was benchmarked on artificially prepared search datasets where all trivial sequence similarities on the SCOP superfamily level were removed. Under these conditions it was possible to predict loops of length 4, 8 and 12 with coverage of 98, 78 and 28% with at least 0.22, 1.38 and 2.47 Å r.m.s.d. accuracy, respectively. In a head to head comparison on loops extracted from freshly deposited new protein folds the current method outperformed an earlier developed database search method by an approximately 5:1 ratio. PMID:16844985

  13. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  14. MESSA: MEta-Server for protein Sequence Analysis

    PubMed Central

    2012-01-01

    Background Computational sequence analysis, that is, prediction of local sequence properties, homologs, spatial structure and function from the sequence of a protein, offers an efficient way to obtain needed information about proteins under study. Since reliable prediction is usually based on the consensus of many computer programs, meta-servers have been developed to fit such needs. Most meta-servers focus on one aspect of sequence analysis, while others incorporate more information, such as PredictProtein for local sequence feature predictions, SMART for domain architecture and sequence motif annotation, and GeneSilico for secondary and spatial structure prediction. However, as predictions of local sequence properties, three-dimensional structure and function are usually intertwined, it is beneficial to address them together. Results We developed a MEta-Server for protein Sequence Analysis (MESSA) to facilitate comprehensive protein sequence analysis and gather structural and functional predictions for a protein of interest. For an input sequence, the server exploits a number of select tools to predict local sequence properties, such as secondary structure, structurally disordered regions, coiled coils, signal peptides and transmembrane helices; detect homologous proteins and assign the query to a protein family; identify three-dimensional structure templates and generate structure models; and provide predictive statements about the protein's function, including functional annotations, Gene Ontology terms, enzyme classification and possible functionally associated proteins. We tested MESSA on the proteome of Candidatus Liberibacter asiaticus. Manual curation shows that three-dimensional structure models generated by MESSA covered around 75% of all the residues in this proteome and the function of 80% of all proteins could be predicted. Availability MESSA is free for non-commercial use at http://prodata.swmed.edu/MESSA/ PMID:23031578

  15. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  16. Asynchronous data change notification between database server and accelerator controls system

    SciTech Connect

    Fu, W.; Morris, J.; Nemesure, S.

    2011-10-10

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
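
    The trigger-plus-reflection-server pattern described above can be sketched with SQLite from Python: a trigger records each change in a notification table, and a small polling loop forwards new entries to clients in place of the EPICS/CDEV/ADO reflection server. This is an illustrative sketch, not the RHIC-AGS implementation.

```python
# Sketch of the trigger-based change capture half of an ADCN setup, using
# SQLite from Python. The publish() function is a placeholder for the
# EPICS/CDEV/ADO-style reflection server that pushes data to clients.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         name TEXT, value REAL);
CREATE TRIGGER settings_changed AFTER UPDATE ON settings
BEGIN
    INSERT INTO change_log (name, value) VALUES (NEW.name, NEW.value);
END;
""")

db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
db.execute("UPDATE settings SET value = 1.5 WHERE name = 'magnet_current'")

def publish(name, value):
    print(f"notify clients: {name} -> {value}")   # placeholder for the reflection server

last_seen = 0
for row_id, name, value in db.execute(
        "SELECT id, name, value FROM change_log WHERE id > ?", (last_seen,)):
    publish(name, value)
    last_seen = row_id
```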

  17. Adventures in the evolution of a high-bandwidth network for central servers

    SciTech Connect

    Swartz, K.L.; Cottrell, L.; Dart, M.

    1994-08-01

    In a small network, clients and servers may all be connected to a single Ethernet without significant performance concerns. As the number of clients on a network grows, the necessity of splitting the network into multiple sub-networks, each with a manageable number of clients, becomes clear. Less obvious is what to do with the servers. Group file servers on subnets and multihomed servers offer only partial solutions -- many other types of servers do not lend themselves to a decentralized model, and tend to collect on another, well-connected but overloaded Ethernet. The higher speed of FDDI seems to offer an easy solution, but in practice both expense and interoperability problems render FDDI a poor choice. Ethernet switches appear to permit cheaper and more reliable networking to the servers while providing an aggregate network bandwidth greater than a simple Ethernet. This paper studies the evolution of the server networks at SLAC. Difficulties encountered in the deployment of FDDI are described, as are the tools and techniques used to characterize the traffic patterns on the server network. Performance of Ethernet, FDDI, and switched Ethernet networks is analyzed, as are reliability and maintainability issues for these alternatives. The motivations for re-designing the SLAC general server network to use a switched Ethernet instead of FDDI are described, as are the reasons for choosing FDDI for the farm and firewall networks at SLAC. Guidelines are developed which may help in making this choice for other networks.

  18. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
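
    The decompose/dispatch/combine model described above is essentially a scatter-gather pattern. A minimal sketch, with the per-server call left as a stub, is shown below.

```python
# Minimal scatter-gather sketch of the group-oriented coordination model:
# decompose a request, dispatch sub-requests to independent servers in
# parallel, and combine the partial results into a single response.
from concurrent.futures import ThreadPoolExecutor

def query_server(server, request):
    # Stub standing in for a real networked server call.
    return [f"{server}: row matching {request!r}"]

def coordinated_query(servers, request):
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        partials = pool.map(lambda s: query_server(s, request), servers)
    combined = []
    for part in partials:
        combined.extend(part)          # combine step; de-duplication could go here
    return combined

print(coordinated_query(["db-east", "db-west"], "status = 'active'"))
```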

  19. Client/Server data serving for high performance computing

    NASA Technical Reports Server (NTRS)

    Wood, Chris

    1994-01-01

    This paper will attempt to examine the industry requirements for shared network data storage and sustained high speed (10's to 100's to thousands of megabytes per second) network data serving via the NFS and FTP protocol suite. It will discuss the current structural and architectural impediments to achieving these sorts of data rates cost effectively today on many general purpose servers and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown as well as a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  20. Deploying Server-side File System Monitoring at NERSC

    SciTech Connect

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  1. Increased coverage of protein families with the blocks database servers.

    PubMed

    Henikoff, J G; Greene, E A; Pietrokovski, S; Henikoff, S

    2000-01-01

    The Blocks Database WWW (http://blocks.fhcrc.org) and Email (blocks@blocks.fhcrc.org) servers provide tools to search DNA and protein queries against the Blocks+ Database of multiple alignments, which represent conserved protein regions. Blocks+ nearly doubles the number of protein families included in the database by adding families from the Pfam-A, ProDom and Domo databases to those from PROSITE and PRINTS. Other new features include improved Block Searcher statistics, searching with NCBI's IMPALA program and 3D display of blocks on PDB structures.

  2. SHOT: a web server for the construction of genome phylogenies.

    PubMed

    Korbel, Jan O; Snel, Berend; Huynen, Martijn A; Bork, Peer

    2002-03-01

    With the increasing availability of genome sequences, new methods are being proposed that exploit information from complete genomes to classify species in a phylogeny. Here we present SHOT, a web server for the classification of genomes on the basis of shared gene content or the conservation of gene order that reflects the dominant, phylogenetic signal in these genomic properties. In general, the genome trees are consistent with classical gene-based phylogenies, although some interesting exceptions indicate massive horizontal gene transfer. SHOT is a useful tool for analysing the tree of life from a genomic point of view. It is available at http://www.Bork.EMBL-Heidelberg.de/SHOT.

  3. NESDIS OSPO Data Access Policy and CRM

    NASA Astrophysics Data System (ADS)

    Seybold, M. G.; Donoho, N. A.; McNamara, D.; Paquette, J.; Renkevens, T.

    2012-12-01

    The Office of Satellite and Product Operations (OSPO) is the NESDIS office responsible for satellite operations, product generation, and product distribution. Access to and distribution of OSPO data was formally established in a Data Access Policy dated February, 2011. An extension of the data access policy is the OSPO Customer Relationship Management (CRM) Database, which has been in development since 2008 and is reaching a critical level of maturity. This presentation will provide a summary of the data access policy and standard operating procedure (SOP) for handling data access requests. The tangential CRM database will be highlighted including the incident tracking system, reporting and notification capabilities, and the first comprehensive portfolio of NESDIS satellites, instruments, servers, applications, products, user organizations, and user contacts. Select examples of CRM data exploitation will show how OSPO is utilizing the CRM database to more closely satisfy the user community's satellite data needs with new product promotions, as well as new data and imagery distribution methods in OSPO's Environmental Satellite Processing Center (ESPC). In addition, user services and outreach initiatives from the Satellite Products and Services Division will be highlighted.

  4. Managing attribute--value clinical trials data using the ACT/DB client-server database system.

    PubMed

    Nadkarni, P M; Brandt, C; Frawley, S; Sayward, F G; Einbinder, R; Zelterman, D; Schacter, L; Miller, P L

    1998-01-01

    ACT/DB is a client-server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity-attribute-value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment.
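
    Entity-attribute-value storage segregated by data type, so that values can be indexed as the abstract describes, can be sketched as follows. The table and column names are invented and are not the ACT/DB schema.

```python
# Sketch of entity-attribute-value storage segregated by data type so that
# each value column can be indexed. Not the ACT/DB schema; names invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE eav_numeric (entity_id INTEGER, attribute_id INTEGER, value REAL);
CREATE TABLE eav_text    (entity_id INTEGER, attribute_id INTEGER, value TEXT);
CREATE INDEX idx_num_value  ON eav_numeric (attribute_id, value);
CREATE INDEX idx_text_value ON eav_text (attribute_id, value);
""")

# Patient 17, numeric attribute 3 (e.g. a lab measurement) = 128.0
db.execute("INSERT INTO eav_numeric VALUES (17, 3, 128.0)")

# A range query over one attribute uses the (attribute_id, value) index.
rows = db.execute(
    "SELECT entity_id FROM eav_numeric "
    "WHERE attribute_id = 3 AND value BETWEEN 120 AND 140").fetchall()
print(rows)
```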

  5. CID-miRNA: A web server for prediction of novel miRNA precursors in human genome

    SciTech Connect

    Tyagi, Sonika; Vaz, Candida; Gupta, Vipin; Bhatia, Rohit; Maheshwari, Sachin; Srinivasan, Ashwin; Bhattacharya, Alok

    2008-08-08

    microRNAs (miRNA) are a class of non-protein coding functional RNAs that are thought to regulate expression of target genes by direct interaction with mRNAs. miRNAs have been identified through both experimental and computational methods in a variety of eukaryotic organisms. Though these approaches have been partially successful, there is a need to develop more tools for detection of these RNAs as they are also thought to be present in abundance in many genomes. In this report we describe a tool and a web server, named CID-miRNA, for identification of miRNA precursors in a given DNA sequence, utilising secondary structure-based filtering systems and an algorithm based on stochastic context free grammar trained on human miRNAs. CID-miRNA analyses a given sequence using a web interface, for presence of putative miRNA precursors and the generated output lists all the potential regions that can form miRNA-like structures. It can also scan large genomic sequences for the presence of potential miRNA precursors in its stand-alone form. The web server can be accessed at (http://mirna.jnu.ac.in/cidmirna/)

  6. RNAMethPre: A Web Server for the Prediction and Query of mRNA m6A Sites

    PubMed Central

    Zhang, Yaou; Sun, Zhirong

    2016-01-01

    N6-Methyladenosine (m6A) is the most common mRNA modification; it occurs in a wide range of taxa and is associated with many key biological processes. High-throughput experiments have identified m6A-peaks and sites across the transcriptome, but studies of m6A sites at the transcriptome-wide scale are limited to a few species and tissue types. Therefore, the computational prediction of mRNA m6A sites has become an important strategy. In this study, we integrated multiple features of mRNA (flanking sequences, local secondary structure information, and relative position information) and trained an SVM classifier to predict m6A sites in mammalian mRNA sequences. Our method achieves ideal performance in both cross-validation tests and rigorous independent dataset tests. The server also provides a comprehensive database of predicted transcriptome-wide m6A sites and curated m6A-seq peaks from the literature for both human and mouse, and these can be queried and visualized in a genome browser. The RNAMethPre web server provides a user-friendly tool for the prediction and query of mRNA m6A sites, which is freely accessible for public use at http://bioinfo.tsinghua.edu.cn/RNAMethPre/index.html. PMID:27723837
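
    As a toy illustration of the classification step described above, the sketch below trains an SVM on one-hot encoded flanking sequences; the sequences and labels are invented, and real m6A-site features additionally include secondary structure and positional information.

```python
# Toy SVM over one-hot encoded flanking sequences. Sequences and labels are
# invented; a real model would add secondary-structure and positional features.
from sklearn.svm import SVC

def one_hot(seq):
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "U": [0, 0, 0, 1]}
    return [bit for base in seq for bit in table[base]]

train_seqs = ["GGACU", "AGACU", "GGAUU", "CCUGG"]   # invented 5-mers around a site
labels = [1, 1, 0, 0]                               # 1 = methylated, 0 = not

clf = SVC(kernel="rbf")
clf.fit([one_hot(s) for s in train_seqs], labels)
print(clf.predict([one_hot("GGACA")]))
```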

  7. SurvNet: a web server for identifying network-based biomarkers that most correlate with patient survival data.

    PubMed

    Li, Jun; Roebuck, Paul; Grünewald, Stefan; Liang, Han

    2012-07-01

    An important task in biomedical research is identifying biomarkers that correlate with patient clinical data, and these biomarkers then provide a critical foundation for the diagnosis and treatment of disease. Conventionally, such an analysis is based on individual genes, but the results are often noisy and difficult to interpret. Using a biological network as the searching platform, network-based biomarkers are expected to be more robust and provide deep insights into the molecular mechanisms of disease. We have developed a novel bioinformatics web server for identifying network-based biomarkers that most correlate with patient survival data, SurvNet. The web server takes three input files: one biological network file, representing a gene regulatory or protein interaction network; one molecular profiling file, containing any type of gene- or protein-centred high-throughput biological data (e.g. microarray expression data or DNA methylation data); and one patient survival data file (e.g. patients' progression-free survival data). Given user-defined parameters, SurvNet will automatically search for subnetworks that most correlate with the observed patient survival data. As the output, SurvNet will generate a list of network biomarkers and display them through a user-friendly interface. SurvNet can be accessed at http://bioinformatics.mdanderson.org/main/SurvNet.

  8. WAMI: a web server for the analysis of minisatellite maps

    PubMed Central

    2010-01-01

    Background Minisatellites are genomic loci composed of tandem arrays of short repetitive DNA segments. A minisatellite map is a sequence of symbols that represents the tandem repeat array such that the set of symbols is in one-to-one correspondence with the set of distinct repeats. Due to variations in repeat type and organization as well as copy number, the minisatellite maps have been widely used in forensic and population studies. In either domain, researchers need to compare the set of maps to each other, to build phylogenetic trees, to spot structural variations, and to study duplication dynamics. Efficient algorithms for these tasks are required to carry them out reliably and in reasonable time. Results In this paper we present WAMI, a web-server for the analysis of minisatellite maps. It performs the above mentioned computational tasks using efficient algorithms that take the model of map evolution into account. The WAMI interface is easy to use and the results of each analysis task are visualized. Conclusions To the best of our knowledge, WAMI is the first server providing all these computational facilities to the minisatellite community. The WAMI web-interface and the source code of the underlying programs are available at http://www.nubios.nileu.edu.eg/tools/wami. PMID:20525398

  9. Seq2Ref: a web server to facilitate functional interpretation

    PubMed Central

    2013-01-01

    Background The size of the protein sequence database has been exponentially increasing due to advances in genome sequencing. However, experimentally characterized proteins only constitute a small portion of the database, such that the majority of sequences have been annotated by computational approaches. Current automatic annotation pipelines inevitably introduce errors, making the annotations unreliable. Instead of such error-prone automatic annotations, functional interpretation should rely on annotations of ‘reference proteins’ that have been experimentally characterized or manually curated. Results The Seq2Ref server uses BLAST to detect proteins homologous to a query sequence and identifies the reference proteins among them. Seq2Ref then reports publications with experimental characterizations of the identified reference proteins that might be relevant to the query. Furthermore, a plurality-based rating system is developed to evaluate the homologous relationships and rank the reference proteins by their relevance to the query. Conclusions The reference proteins detected by our server will lend insight into proteins of unknown function and provide extensive information to develop in-depth understanding of uncharacterized proteins. Seq2Ref is available at: http://prodata.swmed.edu/seq2ref. PMID:23356573

  10. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes like fetal versus adult SCs. WCTDS is developed as a top layer application of our recent publication regarding cell type discriminations, which employs DNA-methylation as biomarkers and machine learning models to discriminate cell types. Implemented by Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicated through MySQL, WCTDS provides a friendly framework to efficiently receive the user input and to run mathematical models for analyzing data and then to present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types such as cancer cells. PMID:24578634

  11. AISMIG--an interactive server-side molecule image generator.

    PubMed

    Bohne-Lang, Andreas; Groch, Wolf-Dieter; Ranzinger, René

    2005-07-01

    Using a web browser without additional software and generating interactive high quality and high resolution images of bio-molecules is no longer a problem. Interactive visualization of 3D molecule structures by Internet browsers normally is not possible without additional software and the disadvantage of browser-based structure images (e.g. by a Java applet) is their low resolution. Scientists who want to generate 3D molecular images with high quality and high resolution (e.g. for publications or to render a molecule for a poster) therefore require separately installed software that is often not easy to use. The alternative concept is an interactive server-side rendering application that can be interfaced with any web browser. Thus it combines the advantage of the web application with the high-end rendering of a raytracer. This article addresses users who want to generate high quality images from molecular structures and do not have software installed locally for structure visualization. Often people do not have a structure viewer, such as RasMol or Chime (or even Java) installed locally but want to visualize a molecule structure interactively. AISMIG (An Interactive Server-side Molecule Image Generator) is a web service that provides a visualization of molecule structures in such cases. AISMIG-URL: http://www.dkfz-heidelberg.de/spec/aismig/. PMID:15980568

  12. CTLPScanner: a web server for chromothripsis-like pattern detection

    PubMed Central

    Yang, Jian; Liu, Jixiang; Ouyang, Liang; Chen, Yi; Liu, Bo; Cai, Haoyang

    2016-01-01

    Chromothripsis is a recently observed phenomenon in cancer cells in which one or several chromosomes shatter into pieces with subsequent inaccurate reassembly and clonal propagation. This type of event generates a potentially vast number of mutations within a relatively short-time period, and has been considered as a new paradigm in cancer development. Despite recent advances, much work is still required to better understand the molecular mechanisms of this phenomenon, and thus an easy-to-use tool is in urgent need for automatically detecting and annotating chromothripsis. Here we present CTLPScanner, a web server for detection of chromothripsis-like pattern (CTLP) in genomic array data. The output interface presents intuitive graphical representations of detected chromosome pulverization region, as well as detailed results in table format. CTLPScanner also provides additional information for associated genes in chromothripsis region to help identify the potential candidates involved in tumorigenesis. To assist in performing meta-data analysis, we integrated over 50 000 pre-processed genomic arrays from The Cancer Genome Atlas and Gene Expression Omnibus into CTLPScanner. The server allows users to explore the presence of chromothripsis signatures from public data resources, without carrying out any local data processing. CTLPScanner is freely available at http://cgma.scu.edu.cn/CTLPScanner/. PMID:27185889

  13. Berkeley PHOG: PhyloFacts orthology group prediction web server.

    PubMed

    Datta, Ruchira S; Meacham, Christopher; Samad, Bushra; Neyer, Christoph; Sjölander, Kimmen

    2009-07-01

    Ortholog detection is essential in functional annotation of genomes, with applications to phylogenetic tree construction, prediction of protein-protein interaction and other bioinformatics tasks. We present here the PHOG web server employing a novel algorithm to identify orthologs based on phylogenetic analysis. Results on a benchmark dataset from the TreeFam-A manually curated orthology database show that PHOG provides a combination of high recall and precision competitive with both InParanoid and OrthoMCL, and allows users to target different taxonomic distances and precision levels through the use of tree-distance thresholds. For instance, OrthoMCL-DB achieved 76% recall and 66% precision on this dataset; at a slightly higher precision (68%) PHOG achieves 10% higher recall (86%). InParanoid achieved 87% recall at 24% precision on this dataset, while a PHOG variant designed for high recall achieves 88% recall at 61% precision, increasing precision by 37% over InParanoid. PHOG is based on pre-computed trees in the PhyloFacts resource, and contains over 366 K orthology groups with a minimum of three species. Predicted orthologs are linked to GO annotations, pathway information and biological literature. The PHOG web server is available at http://phylofacts.berkeley.edu/orthologs/.

  14. A web-server of cell type discrimination system.

    PubMed

    Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes like fetal versus adult SCs. WCTDS is developed as a top layer application of our recent publication regarding cell type discriminations, which employs DNA-methylation as biomarkers and machine learning models to discriminate cell types. Implemented by Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicated through MySQL, WCTDS provides a friendly framework to efficiently receive the user input and to run mathematical models for analyzing data and then to present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types such as cancer cells.

  15. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, data for a particular area can be selected from a single coverage, or data with a particular range of pixel values can be extracted. Queries on multiple surfaces can be
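
    A WCPS request of the kind described, for example extracting a spatial subset of a single coverage, can be submitted as a declarative query over HTTP. In the sketch below the service path, coverage name and axis labels are assumptions for illustration; the service's capabilities document defines the real values.

```python
# Sketch of submitting a WCPS query over HTTP. The service path, coverage
# name and axis labels are assumed for illustration.
import urllib.parse
import urllib.request

wcps = ('for c in (dtm_coverage) '
        'return encode(c[Lat(55.0:56.0), Long(-4.5:-3.5)], "image/png")')

url = ("http://earthserver.bgs.ac.uk/rasdaman/ows?"   # assumed endpoint path
       + urllib.parse.urlencode({"service": "WCS", "version": "2.0.1",
                                 "request": "ProcessCoverages", "query": wcps}))
with urllib.request.urlopen(url) as resp:
    with open("subset.png", "wb") as out:
        out.write(resp.read())
```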

  16. PhyreStorm: A Web Server for Fast Structural Searches Against the PDB.

    PubMed

    Mezulis, Stefans; Sternberg, Michael J E; Kelley, Lawrence A

    2016-02-22

    The identification of structurally similar proteins can provide a range of biological insights, and accordingly, the alignment of a query protein to a database of experimentally determined protein structures is a technique commonly used in the fields of structural and evolutionary biology. The PhyreStorm Web server has been designed to provide comprehensive, up-to-date and rapid structural comparisons against the Protein Data Bank (PDB) combined with a rich and intuitive user interface. It is intended that this facility will enable biologists inexpert in bioinformatics access to a powerful tool for exploring protein structure relationships beyond what can be achieved by sequence analysis alone. By partitioning the PDB into similar structures, PhyreStorm is able to quickly discard the majority of structures that cannot possibly align well to a query protein, reducing the number of alignments required by an order of magnitude. PhyreStorm is capable of finding 93±2% of all highly similar (TM-score>0.7) structures in the PDB for each query structure, usually in less than 60s. PhyreStorm is available at http://www.sbg.bio.ic.ac.uk/phyrestorm/. PMID:26517951

  19. The HADDOCK2.2 Web Server: User-Friendly Integrative Modeling of Biomolecular Complexes.

    PubMed

    van Zundert, G C P; Rodrigues, J P G L M; Trellet, M; Schmitz, C; Kastritis, P L; Karaca, E; Melquiond, A S J; van Dijk, M; de Vries, S J; Bonvin, A M J J

    2016-02-22

    The prediction of the quaternary structure of biomolecular macromolecules is of paramount importance for fundamental understanding of cellular processes and drug design. In the era of integrative structural biology, one way of increasing the accuracy of modeling methods used to predict the structure of biomolecular complexes is to include as much experimental or predictive information as possible in the process. This has been at the core of our information-driven docking approach HADDOCK. We present here the updated version 2.2 of the HADDOCK portal, which offers new features such as support for mixed molecule types, additional experimental restraints and improved protocols, all of this in a user-friendly interface. With well over 6000 registered users and 108,000 jobs served, an increasing fraction of which on grid resources, we hope that this timely upgrade will help the community to solve important biological questions and further advance the field. The HADDOCK2.2 Web server is freely accessible to non-profit users at http://haddock.science.uu.nl/services/HADDOCK2.2. PMID:26410586

  20. g:Profiler-a web server for functional interpretation of gene lists (2016 update).

    PubMed

    Reimand, Jüri; Arak, Tambet; Adler, Priit; Kolberg, Liis; Reisberg, Sulev; Peterson, Hedi; Vilo, Jaak

    2016-07-01

    Functional enrichment analysis is a key step in interpreting gene lists discovered in diverse high-throughput experiments. g:Profiler studies flat and ranked gene lists and finds statistically significant Gene Ontology terms, pathways and other gene function related terms. Translation of hundreds of gene identifiers is another core feature of g:Profiler. Since its first publication in 2007, our web server has become a popular tool of choice among basic and translational researchers. Timeliness is a major advantage of g:Profiler as genome and pathway information is synchronized with the Ensembl database in quarterly updates. g:Profiler supports 213 species including mammals and other vertebrates, plants, insects and fungi. The 2016 update of g:Profiler introduces several novel features. We have added further functional datasets to interpret gene lists, including transcription factor binding site predictions, Mendelian disease annotations, information about protein expression and complexes and gene mappings of human genetic polymorphisms. Besides the interactive web interface, g:Profiler can be accessed in computational pipelines using our R package, Python interface and BioJS component. g:Profiler is freely available at http://biit.cs.ut.ee/gprofiler/. PMID:27098042

  1. g:Profiler—a web server for functional interpretation of gene lists (2016 update)

    PubMed Central

    Reimand, Jüri; Arak, Tambet; Adler, Priit; Kolberg, Liis; Reisberg, Sulev; Peterson, Hedi; Vilo, Jaak

    2016-01-01

    Functional enrichment analysis is a key step in interpreting gene lists discovered in diverse high-throughput experiments. g:Profiler studies flat and ranked gene lists and finds statistically significant Gene Ontology terms, pathways and other gene function related terms. Translation of hundreds of gene identifiers is another core feature of g:Profiler. Since its first publication in 2007, our web server has become a popular tool of choice among basic and translational researchers. Timeliness is a major advantage of g:Profiler as genome and pathway information is synchronized with the Ensembl database in quarterly updates. g:Profiler supports 213 species including mammals and other vertebrates, plants, insects and fungi. The 2016 update of g:Profiler introduces several novel features. We have added further functional datasets to interpret gene lists, including transcription factor binding site predictions, Mendelian disease annotations, information about protein expression and complexes and gene mappings of human genetic polymorphisms. Besides the interactive web interface, g:Profiler can be accessed in computational pipelines using our R package, Python interface and BioJS component. g:Profiler is freely available at http://biit.cs.ut.ee/gprofiler/. PMID:27098042

  2. Point Cloud Server (pcs) : Point Clouds In-Base Management and Processing

    NASA Astrophysics Data System (ADS)

    Cura, R.; Perret, J.; Paparoditis, N.

    2015-08-01

    In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data has become more available. It is appreciated for its precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. This system is specifically designed to meet the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and exporting, and integrated processing. Moreover, the system fully integrates metadata (like sensor position) and can conjointly use point clouds with images, vectors, and other point clouds. The system also offers in-base processing for easy prototyping and parallel processing and can scale well. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the system with several billion points from aerial and terrestrial Lidar and stereo-vision point clouds. We demonstrate a loading speed of ~400 million pts/h, user-transparent compression ratios of roughly 2:1 to 4:1, filtering in the approximately 50 ms range, and output of about a million pts/s, along with classical processing, such as object detection.
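
    The patch-based, in-database design described in this record can be illustrated only schematically here: the sketch below queries a hypothetical patch table in PostgreSQL/PostGIS, filtering groups of points by bounding box before any per-point work is done. The connection string, table, columns and SRID are all assumptions, not the actual PCS schema.

        # Schematic sketch only: filter patches (groups of points) server-side by
        # bounding box. Table, columns and SRID are invented for illustration.
        import psycopg2

        conn = psycopg2.connect("dbname=lidar user=reader")  # assumed credentials
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                SELECT patch_id, num_points
                FROM point_patches
                WHERE bbox && ST_MakeEnvelope(%s, %s, %s, %s, 2154)
                """,
                (649000, 6860000, 650000, 6861000),
            )
            for patch_id, num_points in cur.fetchall():
                print(patch_id, num_points)
        conn.close()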

  3. The Pan-STARRS data server and integrated data query tool

    NASA Astrophysics Data System (ADS)

    Guo, Jhen-Kuei; Chen, Wen-Ping; Lin, Chien-Cheng; Chen, Ying-Tung; Lin, Hsing-Wen

    2013-06-01

    The Pan-STARRS project is operated by an international consortium. Located on Haleakala, Hawaii, the Pan-STARRS telescope system patrols the entire visible sky several times a month, with the aim of identifying and characterizing celestial objects or phenomena that vary in brightness (supernovae, novae, variable stars, etc.) or in position (comets, asteroids, near-earth objects, Planet X, etc.). The PS1 science mission officially started in May 2010 and is expected to end at the end of 2013. As of early 2012, every patch of sky observable from Hawaii has been observed in at least 5 bands (g', r', i', z', y') for 5 to 40 epochs. We have set up a data depository at NCU to serve users in Taiwan. The massive amounts of Pan-STARRS data are downloaded via the Internet from the Institute for Astronomy, University of Hawaii, whenever new observations are obtained and processed. So far we have stored a total of 200 TB of data. In addition to star/galaxy catalogs, a postage stamp server provides access to FITS images. The Pan-STARRS Published Science Products Subsystem (PSPS) has recently passed its operational readiness review and allows users to query individual PS1 measurements. Here we present the data query tool that interfaces with the PS1 catalogs and postage stamp images, together with other complementary databases such as 2MASS and other data at IRSA (NASA/IPAC Infrared Science Archive).

  4. PhyleasProg: a user-oriented web server for wide evolutionary analyses.

    PubMed

    Busset, Joël; Cabau, Cédric; Meslin, Camille; Pascal, Géraldine

    2011-07-01

    Evolutionary analyses of biological data are becoming a prerequisite in many fields of biology. At a time of high-throughput data analysis, phylogenetics is often a necessary complementary tool for biologists to understand, compare and identify the functions of sequences. But available bioinformatics tools are frequently not easy for non-specialists to use. We developed PhyleasProg (http://phyleasprog.inra.fr), a user-friendly web server as a turnkey tool dedicated to evolutionary analyses. PhyleasProg can help biologists with little experience in evolutionary methodologies by analysing their data in a simple and robust way, using methods corresponding to robust standards. Via a very intuitive web interface, users only need to enter a list of Ensembl protein IDs and a list of species as inputs. After dynamic computations, users have access to phylogenetic trees, positive/purifying selection data (on site and branch-site models), with a display of these results on the protein sequence and on a 3D structure model, and the synteny environment of related genes. This connection between different domains of phylogenetics opens the way to new biological analyses for the discovery of the function and structure of proteins.

  5. The HHpred interactive server for protein homology detection and structure prediction.

    PubMed

    Söding, Johannes; Biegert, Andreas; Lupas, Andrei N

    2005-07-01

    HHpred is a fast server for remote protein homology detection and structure prediction and is the first to implement pairwise comparison of profile hidden Markov models (HMMs). It allows searching a wide choice of databases, such as the PDB, SCOP, Pfam, SMART, COGs and CDD. It accepts a single query sequence or a multiple alignment as input. Within only a few minutes it returns the search results in a user-friendly format similar to that of PSI-BLAST. Search options include local or global alignment and scoring secondary structure similarity. HHpred can produce pairwise query-template alignments, multiple alignments of the query with a set of templates selected from the search results, as well as 3D structural models that are calculated by the MODELLER software from these alignments. A detailed help facility is available. As a demonstration, we analyze the sequence of SpoVT, a transcriptional regulator from Bacillus subtilis. HHpred can be accessed at http://protevo.eb.tuebingen.mpg.de/hhpred.

  6. The Land Analysis System (LAS) for multispectral image processing

    USGS Publications Warehouse

    Wharton, S. W.; Lu, Y. C.; Quirk, Bruce K.; Oleson, Lyndon R.; Newcomer, J. A.; Irani, Frederick M.

    1988-01-01

    The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting, conversion utilities, and high-level device independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

  7. US Astronomers Access to SIMBAD in Strasbourg

    NASA Technical Reports Server (NTRS)

    Oliversen, Ronald (Technical Monitor); Eichhorn, Guenther

    2004-01-01

    During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. Currently there are over 4500 US users registered. We also provided user support by answering questions from users and handling requests for lost passwords when still necessary. Even though almost all users now access SIMBAD without a password, based on hostnames/IP addresses, there are still some users that need individual passwords. We continued to maintain the mirror copy of the SIMBAD database on a server at SAO. This allows much faster access for the US users. During the past year we again moved this mirror to a faster server to improve access for the US users. We again supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We provided support for the demonstration activities at the SIMBAD booth. We paid part of the fee for the SIMBAD demonstration. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. During the last year we also installed a mirror copy of the Vizier system from the CDS, in addition to the SIMBAD mirror.

  8. 75 FR 8400 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ... COMMISSION In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld Devices... importation, and the sale within the United States after importation of certain wireless communications system server software, wireless handheld devices and battery packs by reason of infringement of certain...

  9. Think They're Drunk? Alcohol Servers and the Identification of Intoxication.

    ERIC Educational Resources Information Center

    Burns, Edward D.; Nusbaumer, Michael R.; Reiling, Denise M.

    2003-01-01

    Examines practices used by servers to assess intoxication. The analysis was based upon questionnaires mailed to a random probability sample of licensed servers from one state (N = 822). Indicators found to be most important were examined in relation to a variety of occupational characteristics. Implications for training curricula, policy…

  10. Design and implementation of web server soft load balancing in small and medium-sized enterprise

    NASA Astrophysics Data System (ADS)

    Yan, Liu

    2011-12-01

    With the expansion of business scale, small and medium-sized enterprises have begun to use information platforms to improve their management and competitiveness, and the web server has become the core factor restricting an enterprise's informatization. This paper puts forward a suitable design scheme for web server soft load balancing in small and medium-sized enterprises and proves it effective through experiment.
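
    The record gives no implementation details, but the general idea of software ("soft") load balancing can be sketched as a round-robin dispatcher with a trivial health check; the back-end addresses and the /health path below are invented for illustration and are not taken from the paper.

        # Toy round-robin dispatcher with a naive health check -- an illustration
        # of soft load balancing in general, not the scheme proposed in the paper.
        import itertools
        import requests

        BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]  # hypothetical
        _cycle = itertools.cycle(BACKENDS)

        def healthy(base_url):
            try:
                return requests.get(base_url + "/health", timeout=1).ok  # assumed path
            except requests.RequestException:
                return False

        def forward(path):
            """Send the request to the next healthy back end in round-robin order."""
            for _ in range(len(BACKENDS)):
                backend = next(_cycle)
                if healthy(backend):
                    return requests.get(backend + path, timeout=5)
            raise RuntimeError("no healthy backend available")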

  11. Developing Server-Side Infrastructure for Large-Scale E-Learning of Web Technology

    ERIC Educational Resources Information Center

    Simpkins, Neil

    2010-01-01

    The growth of E-business has made experience in server-side technology an increasingly important area for educators. Server-side skills are in increasing demand and recognised to be of relatively greater value than comparable client-side aspects (Ehie, 2002). In response to this, many educational organisations have developed E-business courses,…

  12. A Disk-Based Storage Architecture for Movie on Demand Servers.

    ERIC Educational Resources Information Center

    Ozden, Banu; And Others

    1995-01-01

    Discusses movie on demand (MOD) servers, which are computer systems that store movies in compressed digital form for broadcast cable television systems. Highlights include network bandwidths, a disk-based storage architecture for a MOD server, implementing VCR (video cassette recorder) functions to movie viewing, and buffers. (LRW)

  13. Minimizing Thermal Stress for Data Center Servers through Thermal-Aware Relocation

    PubMed Central

    Ling, T. C.; Hussain, S. A.

    2014-01-01

    A rise in inlet air temperature may lower the rate of heat dissipation from air-cooled computing servers, introducing thermal stress to these servers. As a result, poorly cooled active servers will start conducting heat to neighboring servers, giving rise to hotspot regions of thermal stress inside the data center. The physical hardware of these servers may then fail, causing performance loss, monetary loss, and higher energy consumption for the cooling mechanism. In order to minimize these situations, this paper profiles inlet temperature sensitivity (ITS) and defines the optimum location for each server to minimize the chances of creating a thermal hotspot and thermal stress. Based upon this novel ITS analysis, a thermal state monitoring and server relocation algorithm for data centers is proposed. The contribution of this paper is bringing the peak outlet temperatures of the relocated servers closer to the average outlet temperature by over 5 times, lowering the average peak outlet temperature by 3.5% and minimizing the thermal stress. PMID:24987743
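
    A drastically simplified illustration of the relocation idea (pair the servers that dissipate the most heat with the slots least sensitive to inlet temperature) is sketched below; the greedy assignment and all numbers are invented for the example and do not reproduce the paper's ITS-based algorithm.

        # Greedy illustration of thermal-aware placement. The sensitivity values,
        # heat figures and the one-shot greedy pairing are assumptions for the
        # sketch, not the ITS profiling or relocation algorithm of the paper.
        def relocate(server_heat_w, slot_sensitivity):
            """Map each server to a slot: hottest servers go to the slots with the
            lowest inlet-temperature sensitivity (i.e. the best-cooled positions)."""
            hot_first = sorted(server_heat_w, key=server_heat_w.get, reverse=True)
            cool_first = sorted(slot_sensitivity, key=slot_sensitivity.get)
            return dict(zip(hot_first, cool_first))

        placement = relocate(
            {"srv-a": 320.0, "srv-b": 150.0, "srv-c": 290.0},   # outlet heat in watts
            {"slot-1": 0.8, "slot-2": 0.3, "slot-3": 0.5},      # relative ITS values
        )
        print(placement)   # hottest server -> least temperature-sensitive slot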

  14. Usage of Thin-Client/Server Architecture in Computer Aided Education

    ERIC Educational Resources Information Center

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  15. 75 FR 43206 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ... COMMISSION In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld Devices... communications system server software, wireless handheld devices and battery packs by reason of infringement of..., 2010, based on a complaint filed by Motorola, Inc. (``Motorola'') of Schaumburg, Illinois. 75 FR...

  16. Design and Delivery of Multiple Server-Side Computer Languages Course

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2011-01-01

    Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…

  17. Selection of Server-Side Technologies for an E-Business Curriculum

    ERIC Educational Resources Information Center

    Sandvig, J. Christopher

    2007-01-01

    The rapid growth of e-business and e-commerce has made server-side programming an increasingly important topic in information systems (IS) and computer science (CS) curricula. This article presents an overview of the major features of several popular server-side programming technologies and discusses the factors that influence the selection of…

  18. Using Web Server Logs to Track Users through the Electronic Forest

    ERIC Educational Resources Information Center

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, providing helpful information for making decisions about Web-based services. The author indicates that, as a result of analyzing server logs, several interesting things about users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  19. The data model of a PACS-based DICOM radiation therapy server

    NASA Astrophysics Data System (ADS)

    Law, Maria Y. Y.; Huang, H. K.; Zhang, Xiaoyan; Zhang, Jianguo

    2003-05-01

    Radiotherapy (RT) requires information and images from both diagnostic and treatment equipment. Standards for radiotherapy information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions). However, the contents of these objects require the incorporation of the RT workflow in a logical sequence. The first step is to trace the RT workflow. The second step is to direct all images and related information in their corresponding DICOM-RT objects into a DICOM RT Server and then ultimately to an RT application server. Methods: In our design, the DICOM RT Server was based on a PACS data model. The data model can be translated to a web-based server, with an RT application server built on top of the Web server. In the process, the contents of each of the DICOM-RT objects were customized for the RT display windows. Results: Six display windows were designed and the data model in the RT application server was developed. The images and related information were grouped into the seven DICOM-RT objects in the sequence of their procedures and customized for the seven display windows. This is an important step in organizing the data model in the application server for radiation therapy. Conclusion: A radiation therapy workflow study is a prerequisite for data model design that can enhance image-based healthcare delivery.

  20. aLeaves facilitates on-demand exploration of metazoan gene family trees on MAFFT sequence alignment server with enhanced interactivity

    PubMed Central

    Kuraku, Shigehiro; Zmasek, Christian M.; Nishimura, Osamu; Katoh, Kazutaka

    2013-01-01

    We report a new web server, aLeaves (http://aleaves.cdb.riken.jp/), for homologue collection from diverse animal genomes. In molecular comparative studies involving multiple species, orthology identification is the basis on which most subsequent biological analyses rely. It can be achieved most accurately by explicit phylogenetic inference. More and more species are subjected to large-scale sequencing, but the resultant resources are scattered in independent project-based, and multi-species, but separate, web sites. This complicates data access and is becoming a serious barrier to the comprehensiveness of molecular phylogenetic analysis. aLeaves, launched to overcome this difficulty, collects sequences similar to an input query sequence from various data sources. The collected sequences can be passed on to the MAFFT sequence alignment server (http://mafft.cbrc.jp/alignment/server/), which has been significantly improved in interactivity. This update enables switching between (i) sequence selection using the Archaeopteryx tree viewer, (ii) multiple sequence alignment and (iii) tree inference. This can be performed as a loop until one reaches a sensible data set, which minimizes redundancy for better visibility and handling in phylogenetic inference while covering relevant taxa. The work flow achieved by the seamless link between aLeaves and MAFFT provides a convenient online platform to address various questions in zoology and evolutionary biology. PMID:23677614

  1. aLeaves facilitates on-demand exploration of metazoan gene family trees on MAFFT sequence alignment server with enhanced interactivity.

    PubMed

    Kuraku, Shigehiro; Zmasek, Christian M; Nishimura, Osamu; Katoh, Kazutaka

    2013-07-01

    We report a new web server, aLeaves (http://aleaves.cdb.riken.jp/), for homologue collection from diverse animal genomes. In molecular comparative studies involving multiple species, orthology identification is the basis on which most subsequent biological analyses rely. It can be achieved most accurately by explicit phylogenetic inference. More and more species are subjected to large-scale sequencing, but the resultant resources are scattered in independent project-based, and multi-species, but separate, web sites. This complicates data access and is becoming a serious barrier to the comprehensiveness of molecular phylogenetic analysis. aLeaves, launched to overcome this difficulty, collects sequences similar to an input query sequence from various data sources. The collected sequences can be passed on to the MAFFT sequence alignment server (http://mafft.cbrc.jp/alignment/server/), which has been significantly improved in interactivity. This update enables switching between (i) sequence selection using the Archaeopteryx tree viewer, (ii) multiple sequence alignment and (iii) tree inference. This can be performed as a loop until one reaches a sensible data set, which minimizes redundancy for better visibility and handling in phylogenetic inference while covering relevant taxa. The work flow achieved by the seamless link between aLeaves and MAFFT provides a convenient online platform to address various questions in zoology and evolutionary biology. PMID:23677614

  2. DR_bind: a web server for predicting DNA-binding residues from the protein structure based on electrostatics, evolution and geometry.

    PubMed

    Chen, Yao Chi; Wright, Jon D; Lim, Carmay

    2012-07-01

    DR_bind is a web server that automatically predicts DNA-binding residues, given the respective protein structure based on (i) electrostatics, (ii) evolution and (iii) geometry. In contrast to machine-learning methods, DR_bind does not require a training data set or any parameters. It predicts DNA-binding residues by detecting a cluster of conserved, solvent-accessible residues that are electrostatically stabilized upon mutation to Asp(-)/Glu(-). The server requires as input the DNA-binding protein structure in PDB format and outputs a downloadable text file of the predicted DNA-binding residues, a 3D visualization of the predicted residues highlighted in the given protein structure, and a downloadable PyMol script for visualization of the results. Calibration on 83 and 55 non-redundant DNA-bound and DNA-free protein structures yielded a DNA-binding residue prediction accuracy/precision of 90/47% and 88/42%, respectively. Since DR_bind does not require any training using protein-DNA complex structures, it may predict DNA-binding residues in novel structures of DNA-binding proteins resulting from structural genomics projects with no conservation data. The DR_bind server is freely available with no login requirement at http://dnasite.limlab.ibms.sinica.edu.tw.

  4. Justifying the need for forensically ready protocols: A case study of identifying malicious web servers using client honeypots

    SciTech Connect

    Seifert, Christian; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.; Komisarczuk, Peter; Muschevici, Radu; Welch, Ian D.

    2008-01-03

    Client honeypot technology can find malicious web servers that attack web browsers and push malware, so-called drive-by-downloads, to the client machine. Merely recording the network traffic is insufficient to perform an efficient forensic analysis of the attack. Custom tools need to be developed to access and examine the embedded data of the network protocols. Once the information is extracted from the network data, it cannot be used to perform a behavioral analysis on the attack, therefore limiting the ability to answer what exactly happened on the attacked system. Implementation of a record/replay mechanism is proposed that allows the forensic examiner to easily extract application data from recorded network streams and allows applications to interact with such data for behavioral analysis purposes. A concrete implementation of such a setup for the HTTP and DNS protocols using the HTTP proxy Squid and the DNS proxy pdnsd is presented and its effect on digital forensic analysis demonstrated.

  5. BioDataServer: a SQL-based service for the online integration of life science data.

    PubMed

    Freier, Andreas; Hofestädt, Ralf; Lange, Matthias; Scholz, Uwe; Stephanik, Andreas

    2002-01-01

    Regarding molecular biology, we see an exponential growth of data and knowledge. Among others, this fact is reflected in more than 300 molecular databases which are readily available on the Internet. The usage of these data requires integration tools capable of complex information fusion processes. This paper presents a novel concept for user-specific integration of life science data. Our approach is based on a mediator architecture in conjunction with freely adjustable data schemes. The implemented prototype is called BioDataServer and can be accessed on the Internet: http://integration.genophen.de. To enable comfortable usage of the resulting data sets of the integration process, a SQL-based query language and an XML data format were developed and implemented.

  6. Secure Dynamic access control scheme of PHR in cloud computing.

    PubMed

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical information has developed from traditional paper records into electronic medical records, which are now widely applied. A new style of medical information exchange system, the "personal health record (PHR)", is gradually being developed. A PHR is a kind of health record maintained and recorded by individuals. An ideal personal health record could integrate personal medical information from different sources and provide a complete and correct personal health and medical summary through the Internet or portable media under the requirements of security and privacy. A lot of personal health records are being utilized. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records. Such management is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud. Besides, a secure protection scheme is required to encrypt the medical records of each patient when storing PHRs on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while maintaining flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using Lagrange interpolation polynomials to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for large numbers of users. Moreover, this scheme dynamically supports multiple users in Cloud computing environments with personal privacy and allows legally authorized parties to access PHRs. From security and effectiveness analyses, the proposed PHR access
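
    For background only: the record names Lagrange interpolation polynomials as the mathematical building block of the access scheme. The sketch below shows the standard threshold (Shamir-style) use of Lagrange interpolation over a prime field, i.e. reconstructing a shared secret from enough shares; it is not the paper's PHR protocol, whose details are not given here.

        # Background sketch: Lagrange interpolation over a prime field, the usual
        # building block for threshold secret sharing. Not the paper's protocol.
        PRIME = 2**127 - 1  # a Mersenne prime, chosen here only for illustration

        def lagrange_at_zero(points, p=PRIME):
            """Reconstruct f(0) from (x, f(x)) pairs via Lagrange interpolation mod p."""
            secret = 0
            for i, (xi, yi) in enumerate(points):
                num, den = 1, 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = (num * -xj) % p
                        den = (den * (xi - xj)) % p
                secret = (secret + yi * num * pow(den, -1, p)) % p
            return secret

        # Example: secret 42 shared with f(x) = 42 + 7x (threshold 2), shares at x = 1, 2.
        assert lagrange_at_zero([(1, 49), (2, 56)]) == 42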

  7. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    NASA Astrophysics Data System (ADS)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, where the CTI server communicates with an IP-PBX using SIP (the Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to outside-line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smart-phones in mobile environments.

  8. Access control mechanisms for distributed healthcare environments.

    PubMed

    Sergl-Pommerening, Marita

    2004-01-01

    Today's IT-infrastructure provides more and more possibilities to share electronic patient data across several healthcare organizations and hospital departments. A strong requirement is sufficient data protection and security measures complying with the medical confidentiality and the data protection laws of each state or country like the European directive on data protection or the U.S. HIPAA privacy rule. In essence, the access control mechanisms and authorization structures of information systems must be able to realize the Need-To-Access principle. This principle can be understood as a set of context-sensitive access rules, regarding the patient's path across the organizations. The access control mechanisms of today's health information systems do not sufficiently satisfy this requirement, because information about participation of persons or organizations is not available within each system in a distributed environment. This problem could be solved by appropriate security services. The CORBA healthcare domain standard contains such a service for obtaining authorization decisions and administrating access decision policies (RAD). At the university hospital of Mainz we have developed an access control system (MACS), which includes the main functionality of the RAD specification and the access control logic that is needed for such a service. The basic design principles of our approach are role-based authorization, user rights with static and dynamic authorization data, context rules and the separation of three cooperating servers that provide up-to-date knowledge about users, roles and responsibilities. This paper introduces the design principles and the system design and critically evaluates the concepts based on practical experience.
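
    As a purely hypothetical illustration of the context-sensitive "Need-To-Access" idea described above (not the MACS or CORBA RAD interface itself), a decision function might combine a role's static rights with a dynamic check of whether the requesting unit participates in the patient's treatment path:

        # Hypothetical sketch: a role grant only applies while the requesting unit
        # is involved in the patient's care. Roles, actions and units are invented.
        ROLE_PERMISSIONS = {
            "physician": {"read_record", "write_note"},
            "clerk": {"read_demographics"},
        }

        def access_decision(role, action, user_unit, treating_units):
            """Allow only if the role permits the action AND the unit is in context."""
            in_context = user_unit in treating_units
            return in_context and action in ROLE_PERMISSIONS.get(role, set())

        print(access_decision("physician", "read_record", "cardiology",
                              {"cardiology", "radiology"}))   # True
        print(access_decision("physician", "read_record", "dermatology",
                              {"cardiology", "radiology"}))   # False: not involved in care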

  9. Use of World Wide Web server and browser software to support a first-year medical physiology course.

    PubMed

    Davis, M J; Wythe, J; Rozum, J S; Gore, R W

    1997-06-01

    We describe the use of a World Wide Web (Web) server to support a team-taught physiology course for first-year medical students. Our objectives were to reduce the number of formal lecture hours and enhance student enthusiasm by using more multimedia materials and creating opportunities for interactive learning. On-line course materials, consisting of administrative documents, lecture notes, animations, digital movies, practice tests, and grade reports, were placed on a departmental computer with an Internet connection. Students used Web browsers to access on-line materials from a variety of computing platforms on campus, at home, and at remote sites. To assess use of the materials and their effectiveness, we analyzed 1) log files from the server, and 2) the results of a written course evaluation completed by all students. Lecture notes and practice tests were the most-used documents. The students' evaluations indicated that computer use in class made the lecture material more interesting, while the on-line documents helped reinforce lecture materials and the textbook. We conclude that the effectiveness of on-line materials depends on several different factors, including 1) the number of instructors that provide materials; 2) the quantity of other materials handed out; 3) the degree to which computer use is demonstrated in class and integrated into lectures; and 4) the ease with which students can access the materials. Finally, we propose that additional implementation of Internet-based resources beyond what we have described would further enhance a physiology course for first-year medical students.

  10. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap in access times between memory and disk. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors based on neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve upon the misprediction rate and effective-success-rate-per-reference using a standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions on high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers with the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.
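
    As an illustration of the general approach only (not the authors' feature encoding, traces or Levenberg-Marquardt training, which scikit-learn does not provide), file-access prediction can be framed as classification over a short history window:

        # Illustrative sketch: predict the next file ID from the two previous
        # accesses with a small multilayer perceptron. The synthetic trace, window
        # size and solver are assumptions; the paper's setup is not reproduced.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        trace = [0, 1, 2, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
        X = np.array([trace[i:i + 2] for i in range(len(trace) - 2)])
        y = np.array([trace[i + 2] for i in range(len(trace) - 2)])

        model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X, y)
        print(model.predict([[1, 2]]))   # most likely file after accessing 1 then 2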

  11. BioMart Central Portal--unified access to biological data.

    PubMed

    Haider, Syed; Ballester, Benoit; Smedley, Damian; Zhang, Junjun; Rice, Peter; Kasprzyk, Arek

    2009-07-01

    BioMart Central Portal (www.biomart.org) offers a one-stop-shop solution for accessing a wide array of biological databases. These include major biomolecular sequence, pathway and annotation databases such as Ensembl, Uniprot, Reactome, HGNC, Wormbase and PRIDE; for a complete list, visit http://www.biomart.org/biomart/martview. Moreover, the web server features seamless data federation, making it possible to cross-query these data sources in a user-friendly and unified way. The web server not only provides access through a web interface (MartView), it also supports programmatic access through a Perl API as well as RESTful and SOAP-oriented web services. The website is free and open to all users and there is no login requirement.
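
    Programmatic access works by sending an XML query to the portal's martservice interface; the sketch below follows the common BioMart query format, but the dataset, filter and attribute names are examples borrowed from the Ensembl mart conventions and may differ on the Central Portal (and the portal itself may no longer be online).

        # Sketch of REST-style access via the martservice interface. Dataset and
        # attribute names are examples and may not match the Central Portal exactly.
        import requests

        QUERY = """<?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE Query>
        <Query virtualSchemaName="default" formatter="TSV" header="1" uniqueRows="1">
          <Dataset name="hsapiens_gene_ensembl" interface="default">
            <Filter name="chromosome_name" value="21"/>
            <Attribute name="ensembl_gene_id"/>
            <Attribute name="hgnc_symbol"/>
          </Dataset>
        </Query>"""

        r = requests.get("http://www.biomart.org/biomart/martservice",
                         params={"query": QUERY}, timeout=60)
        r.raise_for_status()
        print(r.text[:500])   # first rows of the tab-separated result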

  12. viwish: a visualization server for protein modelling and docking.

    PubMed

    Klein, T; Ackermann, F; Posch, S

    1996-12-12

    A visualization tool, viwish, for proteins based on the Tcl command language has been developed. The system is completely menu driven and can display arbitrarily many proteins in arbitrarily many windows. It is instantly usable, even for non-computer experts, and provides possibilities to modify menus, configurations, and windows. It may be used as a stand-alone molecular graphics package or as a graphics server for external programs. Communication with these client applications is established even across different machines (through the send command of Tk, an extension of Tcl). In addition, a wide range of chemical data such as molecular surfaces and 3D gridded samplings of chemical features can be displayed. Therefore the system is especially useful for the development of algorithms that need visualization. The system is distributed freely, including the source code.

  13. Optimal routing of IP packets to multi-homed servers

    SciTech Connect

    Swartz, K.L.

    1992-08-01

    Multi-homing, or direct attachment to multiple networks, offers both performance and availability benefits for important servers on busy networks. Exploiting these benefits to their fullest requires a modicum of routing knowledge in the clients. Careful policy control must also be reflected in the routing used within the network to make best use of specialized and often scarce resources. While relatively straightforward in theory, this problem becomes much more difficult to solve in a real network containing often intractable implementations from a variety of vendors. This paper presents an analysis of the problem and proposes a useful solution for a typical campus network. Application of this solution at the Stanford Linear Accelerator Center is studied and the problems and pitfalls encountered are discussed, as are the workarounds used to make the system work in the real world.

  14. World wide web implementation of the Langley technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.

    1994-01-01

    On January 14, 1993, NASA Langley Research Center (LaRC) made approximately 130 formal, 'unclassified, unlimited' technical reports available via the anonymous FTP Langley Technical Report Server (LTRS). LaRC was the first organization to provide a significant number of aerospace technical reports for open electronic dissemination. LTRS has been successful in its first 18 months of operation, with over 11,000 reports distributed and has helped lay the foundation for electronic document distribution for NASA. The availability of World Wide Web (WWW) technology has revolutionized the Internet-based information community. This paper describes the transition of LTRS from a centralized FTP site to a distributed data model using the WWW, and suggests how the general model for LTRS can be applied to other similar systems.

  15. Utilization of Virtual Server Technology in Mission Operations

    NASA Technical Reports Server (NTRS)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  16. GlusterFS One Storage Server to Rule Them All

    SciTech Connect

    Boyer, Eric B.; Broomfield, Matthew C.; Perrotti, Terrell A.

    2012-07-30

    GlusterFS is a Linux-based distributed file system, designed to be highly scalable and to serve many clients. Some reasons to use GlusterFS are: no centralized metadata server, scalability, open source, dynamic and live service modifications, use over Infiniband or Ethernet, tunability for speed and/or resilience, and flexible administration. It is useful for enterprise environments (virtualization and high performance computing (HPC)) and it works with Mac, Linux and Windows clients. Conclusions are: (1) GlusterFS proved to have widespread capabilities as a virtual file system; (2) scalability is very dependent upon the underlying hardware; (3) it lacks a built-in encryption and security paradigm; and (4) it is best suited to a general purpose computing environment.

  17. Scripps Genome ADVISER: Annotation and Distributed Variant Interpretation SERver

    PubMed Central

    Pham, Phillip H.; Shipman, William J.; Erikson, Galina A.; Schork, Nicholas J.; Torkamani, Ali

    2015-01-01

    Interpretation of human genomes is a major challenge. We present the Scripps Genome ADVISER (SG-ADVISER) suite, which aims to fill the gap between data generation and genome interpretation by performing holistic, in-depth, annotations and functional predictions on all variant types and effects. The SG-ADVISER suite includes a de-identification tool, a variant annotation web-server, and a user interface for inheritance and annotation-based filtration. SG-ADVISER allows users with no bioinformatics expertise to manipulate large volumes of variant data with ease – without the need to download large reference databases, install software, or use a command line interface. SG-ADVISER is freely available at genomics.scripps.edu/ADVISER. PMID:25706643

  18. GWFASTA: server for FASTA search in eukaryotic and microbial genomes.

    PubMed

    Issac, Biju; Raghava, G P S

    2002-09-01

    Similarity searches are a powerful method for solving important biological problems such as database scanning, evolutionary studies, gene prediction, and protein structure prediction. FASTA is a widely used sequence comparison tool for rapid database scanning. Here we describe the GWFASTA server, which was developed to assist the FASTA user in similarity searches against partially and/or completely sequenced genomes. GWFASTA consists of more than 60 microbial genomes, eight eukaryote genomes, and the proteomes of annotated genomes. In fact, it provides the maximum number of databases for similarity searching from a single platform. GWFASTA allows the submission of more than one sequence as a single query for a FASTA search. It also provides integrated post-processing of FASTA output, including compositional analysis of proteins, multiple sequence alignment, and phylogenetic analysis. Furthermore, it summarizes the search results organism-wise for prokaryotes and chromosome-wise for eukaryotes. Thus, the integration of different tools for sequence analyses makes GWFASTA a powerful tool for biologists. PMID:12238765

  19. Optimal Routing in General Finite Multi-Server Queueing Networks

    PubMed Central

    van Woensel, Tom; Cruz, Frederico R. B.

    2014-01-01

    The design of general finite multi-server queueing networks is a challenging problem that arises in many real-life situations, including computer networks, manufacturing systems, and telecommunication networks. In this paper, we examine the optimal routing problem in arbitrary configured acyclic queueing networks. The performance of the finite queueing network is evaluated with a known approximate performance evaluation method and the optimization is done by means of a heuristics based on the Powell algorithm. The proposed methodology is then applied to determine the optimal routing probability vector that maximizes the throughput of the queueing network. We show numerical results for some networks to quantify the quality of the routing vector approximations obtained. PMID:25010660
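
    The record couples an approximate performance evaluator with a Powell-type search over routing probabilities. The toy sketch below keeps only that structure: it optimizes the traffic split across two parallel finite queues, using a closed-form M/M/1/K throughput expression as a crude stand-in for the paper's approximate evaluation method; all rates and buffer sizes are invented.

        # Toy illustration: choose the routing split p that maximizes throughput of
        # two parallel M/M/1/K queues via SciPy's Powell search. The surrogate
        # model and all parameters are assumptions, not the authors' evaluator.
        import numpy as np
        from scipy.optimize import minimize

        LAMBDA = 8.0          # total arrival rate
        MU = (5.0, 6.0)       # service rates of the two branches
        K = 10                # buffer size of each finite queue

        def mm1k_throughput(lam, mu, k):
            """Throughput of an M/M/1/K queue: lam * (1 - blocking probability)."""
            rho = lam / mu
            if abs(rho - 1.0) < 1e-9:
                p_block = 1.0 / (k + 1)
            else:
                p_block = (1 - rho) * rho**k / (1 - rho**(k + 1))
            return lam * (1 - p_block)

        def negative_throughput(v):
            p = min(max(v[0], 0.0), 1.0)   # keep the split inside [0, 1]
            return -(mm1k_throughput(p * LAMBDA, MU[0], K)
                     + mm1k_throughput((1 - p) * LAMBDA, MU[1], K))

        res = minimize(negative_throughput, x0=[0.5], method="Powell")
        p_opt = float(np.atleast_1d(res.x)[0])
        print("optimal split:", round(p_opt, 3), "throughput:", round(-res.fun, 3))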

  20. Intro and Recent Advances: Remote Data Access via OPeNDAP Web Services

    NASA Technical Reports Server (NTRS)

    Fulker, David

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.
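
    For data consumers, the simplest way to exercise a Hyrax (or any DAP2) server from a script is an OPeNDAP client library such as pydap; the sketch below assumes the pydap client API and uses OPeNDAP's long-standing public test dataset, which may or may not still be online.

        # Minimal DAP client sketch (pip install pydap). The dataset URL is
        # OPeNDAP's well-known public example and is assumed to remain available.
        from pydap.client import open_url

        ds = open_url("http://test.opendap.org/dap/data/nc/coads_climatology.nc")
        sst = ds["SST"].array              # sea-surface temperature data array
        print(sst.shape)                   # e.g. (12, 90, 180)
        block = sst[0, 30:35, 60:65].data  # only this hyperslab crosses the network
        print(block)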

  1. Distributed Digital Survey Logbook Built on GeoServer and PostGIS

    NASA Astrophysics Data System (ADS)

    Jovicic, Aleksandar; Castelli, Ana; Kljajic, Zoran

    2013-04-01

    display of ship position. If the vessel is equipped with an Internet link, the real-time situation can be distributed to an expert on land, who can monitor progress and advise the chief scientist on how to overcome issues. Each scientist can set up their own pre-defined events and trigger them with one click, or use the free-text button and write down a note. The timestamp of each event is recorded, and in case triggering was delayed (e.g. the person was occupied with equipment preparation), a time-delay modifier is available. The position of an event is marked based on the recorded timestamp, so all events that happen at a single station can be shown on the chart. Events can be filtered by contributor, so each team can get a view of its own stations only. The ETA at the next station and the activities planned there are also shown, so the crew can better estimate when they need to start preparing equipment. The presented solution shows the benefits that free software (e.g. GeoServer, PostGIS, OpenLayers, GeoTools) produced according to OGC standards brings to the oceanographic community, especially in decreasing development time and providing multi-platform access. The applicability of such solutions is not limited to on-board operations but can be easily extended to any task involving geospatial data.

  2. Land Analysis System (LAS)

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1989-01-01

    Version 4.1 of LAS provides a flexible framework for algorithm development and for the processing and analysis of image data. Over 500,000 lines of code enable image repair, clustering, classification, film processing, geometric registration, radiometric correction, and manipulation of image statistics.

  3. Developing and Marketing a Client/Server-Based Data Warehouse.

    ERIC Educational Resources Information Center

    Singleton, Michele; And Others

    1993-01-01

    To provide better access to information, the University of Arizona information technology center has designed a data warehouse accessible from the desktop computer. A team approach has proved successful in introducing and demonstrating a prototype to the campus community. (Author/MSE)

  4. Performance comparison of GES DISC data as a service between server-based system and cloud system

    NASA Astrophysics Data System (ADS)

    Pham, L.; Chen, A.; Winter, E.; Lynnes, C.

    2012-12-01

    The NASA Goddard Earth Science Data and Information Service Center (GES DISC), in cooperation with the Goddard Information Technology & Communications Directorate, demonstrates and evaluates provision of "Data-as-a-Service" in a cloud environment using the OPeNDAP (Open-source Project for a Network Data Access Protocol) protocols. The demonstration requires porting the OPeNDAP software to the cloud platform along with a representative set of data and then exercising the server using several clients. The evaluation examines two aspects of using open source software in the cloud to serve large volumes of satellite data for public access and simple subsetting: a) ease of porting and operating OPeNDAP in the Goddard Cloud and Amazon Elastic Compute Cloud (EC2) and Simple Storage Service (S3) environments, including evaluation of the time needed to set up one instance; b) access performance, e.g. data access stability and speed of the cloud environments as compared to existing GES DISC capabilities. Four kinds of satellite data products with different data formats (HDF4, HDF5) were selected as the test data: the Atmospheric Infrared Sounder (AIRS) on the Aqua satellite, the Ozone Monitoring Instrument (OMI) on the Aura satellite, the Tropical Rainfall Measuring Mission (TRMM), and the Modern-Era Retrospective Analysis for Research and Applications (MERRA). For each product, 25 granules were used to test access stability and speed. The Giovanni (GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure) and GrADS (Grid Analysis and Display System) data services were also deployed to the cloud platforms to compare the data analysis performance between existing systems and cloud systems. We also evaluated the challenges of migrating these services to the cloud architectures examined.

  5. Prototype of Multifunctional Full-text Library in the Architecture Web-browser / Web-server / SQL-server

    NASA Astrophysics Data System (ADS)

    Lyapin, Sergey; Kukovyakin, Alexey

    Within the framework of the research program "Textaurus", an operational prototype of the multifunctional library T-Libra v.4.1 has been created which makes it possible to carry out flexible, parametrizable search within a full-text database. The information system is realized in the architecture Web-browser / Web-server / SQL-server. This allows an optimal combination of universality and efficiency of text processing, on the one hand, and convenience and minimization of expenses for the end user (due to the use of a standard Web-browser as the client application), on the other. The following principles underlie the information system: a) multifunctionality, b) intelligence, c) multilingual primary texts and full-text searching, d) development of the digital library (DL) by a user ("administrative client"), e) multi-platform working. A "library of concepts", i.e. a block of functional models of semantic (concept-oriented) searching, as well as a subsystem of parametrizable queries to a full-text database, which is closely connected with the "library", serve as the conceptual basis of the multifunctionality and "intelligence" of the DL T-Libra v.4.1. An author's paragraph is the unit of full-text searching in the suggested technology. The "logic" of an educational or scientific topic or problem can be built into a multilevel flexible structure of a query and the "library of concepts", replenishable by developers and experts. About 10 queries of various levels of complexity and conceptuality are realized in the suggested version of the information system: from simple terminological searching (taking into account the lexical and grammatical paradigms of Russian) to several kinds of explication of terminological fields and adjustable two-parameter thematic searching (a [set of terms] and a [distance between terms] within the limits of an author's paragraph being those parameters, respectively).

  6. Evolution of the Data Access Protocol in Response to Community Needs

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Caron, J. L.; Davis, E.; Fulker, D.; Heimbigner, D.; Holloway, D.; Howe, B.; Moe, S.; Potter, N.

    2012-12-01

    Under the aegis of the OPULS (OPeNDAP-Unidata Linked Servers) Project, funded by NOAA, version 2 of OPeNDAP's Data Access Protocol (DAP2) is being updated to version 4. DAP4 is the first major upgrade in almost two decades and will embody three main areas of advancement. First, the data-model extensions developed by the OPULS team focus on three areas: better support for coverages, access to HDF5 files, and access to relational databases. DAP2 support for coverages (defined as sampled functions) was limited to simple rectangular coverages that work well for (some) model outputs and processed satellite data but that cannot represent trajectories or satellite swath data, for example. We have extended the coverage concept in DAP4 to remove these limitations. These changes are informed by work at Unidata on the Common Data Model and by the OGC's abstract coverages specification. In a similar vein, we have extended DAP2's support for relations by adding the concept of foreign keys, so that tables can be explicitly related to one another. Second, the web interfaces (web services) that provide access to data via DAP will be more clearly defined and will use other, orthogonal standards where they are appropriate. An important case is the XML interface, which provides a cleaner way to build other response media types such as JSON and RDF (for metadata) and to build support for Atom, thus simplifying the integration of DAP servers with tools that support OpenSearch. Input from the ESIP federation and work performed with IOOS have informed our choices here. Last, DAP4-compliant servers will support richer data-processing capabilities than DAP2, enabling a wider array of server functions that manipulate data before returning values. Two projects are currently exploring just what can be done even with DAP2's server-function model: the MIIC project at LaRC and OPULS itself (with work performed at the University of Washington). Both projects have demonstrated that
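
    To make the coverage discussion concrete: a satellite swath is a sampled function whose geolocation arrays share the instrument's (scan, pixel) dimensions with the measured field, so its domain is curvilinear rather than a rectangular latitude/longitude grid. The numpy sketch below only illustrates that data shape; it is not DAP4 syntax, and the dimension sizes are invented.

      # Conceptual shape of a swath coverage (invented sizes).  The domain
      # (lat, lon) varies with both scan line and pixel, so it cannot be
      # expressed as the 1-D axes of a rectangular coverage without regridding.
      import numpy as np

      n_scan, n_pixel = 120, 90
      lat = np.zeros((n_scan, n_pixel))        # latitude of each footprint
      lon = np.zeros((n_scan, n_pixel))        # longitude of each footprint
      radiance = np.zeros((n_scan, n_pixel))   # measured value at each footprint

      # A rectangular coverage would instead need lat(nlat), lon(nlon) and
      # radiance(nlat, nlon); DAP4's extended coverages accommodate the
      # swath form above directly.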

  7. Application-Defined Decentralized Access Control.

    PubMed

    Xu, Yuanzhong; Dunn, Alan M; Hofmann, Owen S; Lee, Michael Z; Mehdi, Syed Akbar; Witchel, Emmett

    2014-01-01

    DCAC is a practical OS-level access control system that supports application-defined principals. It allows normal users to perform administrative operations within their privilege, enabling isolation and privilege separation for applications. It does not require centralized policy specification or management, giving applications freedom to manage their principals while the policies are still enforced by the OS. DCAC uses hierarchically-named attributes as a generic framework for user-defined policies such as groups defined by normal users. For both local and networked file systems, its execution time overhead is between 0% and 9% on file system microbenchmarks and under 1% on applications. This paper presents the design and implementation of DCAC, as well as several real-world use cases, including sandboxing applications, enforcing server applications' security policies, supporting NFS, and authenticating user-defined sub-principals in SSH, all with minimal code changes.
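
    The hierarchical-attribute idea can be pictured with the toy check below: holding an attribute name roughly confers control over names beneath it in the hierarchy. This is only a conceptual Python sketch with invented attribute names, not DCAC's actual in-kernel mechanism or policy semantics.

      # Toy illustration of hierarchically-named attributes: a principal that
      # holds ".u.alice" also covers the descendant attribute ".u.alice.photos",
      # but not the other way around.  Names are invented for the example.
      def covers(held, required):
          return required == held or required.startswith(held + ".")

      print(covers(".u.alice", ".u.alice.photos"))   # True: ancestor covers descendant
      print(covers(".u.alice.photos", ".u.alice"))   # False: descendant does not cover ancestor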

  8. Application-Defined Decentralized Access Control

    PubMed Central

    Xu, Yuanzhong; Dunn, Alan M.; Hofmann, Owen S.; Lee, Michael Z.; Mehdi, Syed Akbar; Witchel, Emmett

    2014-01-01

    DCAC is a practical OS-level access control system that supports application-defined principals. It allows normal users to perform administrative operations within their privilege, enabling isolation and privilege separation for applications. It does not require centralized policy specification or management, giving applications freedom to manage their principals while the policies are still enforced by the OS. DCAC uses hierarchically-named attributes as a generic framework for user-defined policies such as groups defined by normal users. For both local and networked file systems, its execution time overhead is between 0%–9% on file system microbenchmarks, and under 1% on applications. This paper shows the design and implementation of DCAC, as well as several real-world use cases, including sandboxing applications, enforcing server applications’ security policies, supporting NFS, and authenticating user-defined sub-principals in SSH, all with minimal code changes. PMID:25426493

  9. Performance analysis for queueing systems with close down periods and server under maintenance

    NASA Astrophysics Data System (ADS)

    Krishna Kumar, B.; Anbarasu, S.; Lakshmi, S. R. Anantha

    2015-01-01

    A single-server queue subject to server maintenance and a close-down period is considered. We obtain explicit expressions for the transient probabilities of the system size, of the server being under maintenance, and of the system being in the close-down period. The time-dependent performance measures of the system and the probability density function of the first-passage time to the maintenance state are discussed. The corresponding steady-state analysis and key performance measures of the system are also presented. Finally, the effect of various parameters on the system performance measures is demonstrated with a numerical example.
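
    Because the abstract does not spell out the model's dynamics or rates, the sketch below is only a toy discrete-event simulation of one plausible reading (exponential interarrival, service, close-down, and maintenance times, with a close-down followed by maintenance whenever the system empties). It can serve as a rough numerical sanity check for transient or steady-state formulas, but it is not the authors' model.

      # Toy discrete-event simulation of a single-server queue in which the
      # server, once the system empties, runs a close-down period and then a
      # maintenance period before serving again.  All distributions are
      # exponential; rates and model details are illustrative assumptions.
      import random

      def simulate(lam=0.5, mu=1.0, theta=2.0, eta=0.8, horizon=200_000.0, seed=1):
          rng = random.Random(seed)
          exp = lambda rate: rng.expovariate(rate)
          t, n, state = 0.0, 0, "idle"          # n = customers in system
          next_arrival = exp(lam)
          next_event = float("inf")             # end of service / close-down / maintenance
          time_in = {"idle": 0.0, "busy": 0.0, "closedown": 0.0, "maintenance": 0.0}
          area_n = 0.0                          # integral of n(t) dt

          while t < horizon:
              t_next = min(next_arrival, next_event)
              time_in[state] += t_next - t
              area_n += n * (t_next - t)
              t = t_next
              if t == next_arrival:                      # arrival
                  n += 1
                  next_arrival = t + exp(lam)
                  if state == "idle":
                      state, next_event = "busy", t + exp(mu)
              elif state == "busy":                      # service completion
                  n -= 1
                  if n > 0:
                      next_event = t + exp(mu)
                  else:                                  # system empty: close down
                      state, next_event = "closedown", t + exp(theta)
              elif state == "closedown":                 # close-down finished
                  state, next_event = "maintenance", t + exp(eta)
              else:                                      # maintenance finished
                  if n > 0:
                      state, next_event = "busy", t + exp(mu)
                  else:
                      state, next_event = "idle", float("inf")

          total = sum(time_in.values())
          return {k: v / total for k, v in time_in.items()}, area_n / total

      state_fractions, mean_system_size = simulate()
      print(state_fractions, mean_system_size)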

  10. Implementing a Physician's Workstation using client/server technology and the distributed computing environment.

    PubMed Central

    Pham, T. Q.; Young, C. Y.; Tang, P. C.; Suermondt, H. J.; Annevelink, J.

    1994-01-01

    PWS is a physician's workstation research prototype developed to explore the use of information management tools by physicians in the context of patient care. The original prototype was implemented in a client/server architecture using a broadcast message server. As we expanded the scope of the prototyping activities, we identified the limitations of the broadcast message server in the areas of scalability, security, and interoperability. To address these issues, we reimplemented PWS using the Open Software Foundation's Distributed Computing Environment (DCE). We describe the rationale for using DCE, the migration process, and the benefits achieved. Future work and recommendations are discussed. PMID:7950003

  11. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, providing performance guarantees under extreme overload has become a challenge for the proliferating electronic commerce services. This paper describes a real-time optimization modeling and scheduling approach for guaranteeing the performance of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. The robust adaptive server model is subject to unknown additive load disturbances and model-matching uncertainty. The overload control technique is based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.
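
    The admission-control idea can be sketched as a feedback loop that nudges the admission probability whenever the measured response time drifts from its target. The proportional controller below is a generic illustration with invented gain and bounds, not the paper's robust adaptive scheme or multi-tank model.

      # Generic proportional admission controller: each control interval the
      # admission probability is adjusted toward the response-time target.
      # Gain, bounds, and sample values are illustrative assumptions.
      def update_admission_probability(admit_prob, measured_rt, target_rt, gain=0.1):
          error = (target_rt - measured_rt) / target_rt   # > 0: headroom, < 0: overload
          return min(1.0, max(0.05, admit_prob + gain * error))

      p = 1.0
      for rt in [0.8, 1.5, 2.4, 1.1, 0.9]:                # measured response times (s)
          p = update_admission_probability(p, rt, target_rt=1.0)
          print(f"measured={rt:.1f}s  admit_prob={p:.2f}")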

  12. INTREPID: a web server for prediction of functionally important residues by evolutionary analysis.

    PubMed

    Sankararaman, Sriram; Kolaczkowski, Bryan; Sjölander, Kimmen

    2009-07-01

    We present the INTREPID web server for predicting functionally important residues in proteins. INTREPID has been shown to boost the recall and precision of catalytic residue prediction over other sequence-based methods and can be used to identify other types of functional residues. The web server takes an input protein sequence, gathers homologs, constructs a multiple sequence alignment and phylogenetic tree and finally runs the INTREPID method to assign a score to each position. Residues predicted to be functionally important are displayed on homologous 3D structures (where available), highlighting spatial patterns of conservation at various significance thresholds. The INTREPID web server is available at http://phylogenomics.berkeley.edu/intrepid.
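
    For intuition about what "a score for each position" means, the sketch below computes a naive per-column conservation score (one minus normalized Shannon entropy) over a toy alignment. INTREPID's actual scoring exploits the phylogenetic tree rather than treating columns independently, so this is only a column-wise baseline with invented sequences.

      # Naive per-column conservation score over a toy multiple sequence
      # alignment: 1 - normalized Shannon entropy of each column.  INTREPID's
      # real score is tree-aware; this baseline is only for intuition.
      import math
      from collections import Counter

      alignment = ["MKVLH", "MKVIH", "MRVLH", "MKALH"]   # invented homologs

      def column_scores(seqs):
          max_entropy = math.log2(len(seqs))              # entropy upper bound per column
          scores = []
          for j in range(len(seqs[0])):
              counts = Counter(s[j] for s in seqs)
              total = sum(counts.values())
              entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
              scores.append(1.0 - entropy / max_entropy)
          return scores

      print([round(s, 2) for s in column_scores(alignment)])   # higher = more conserved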

  13. United States Access Board

    MedlinePlus

    Access to information and communication technology (ICT) is addressed by Board standards and guidelines; recent items include the Board's approval of rules on the ICT Refresh and Medical Diagnostic Equipment.

  14. Access Control for Mobile Assessment Systems Using ID.

    PubMed

    Nakayama, Masaharu; Ishii, Tadashi; Morino, Kazuma

    2015-01-01

    The assessment of shelters during a disaster is critical to ensuring the health of evacuees and preventing disease outbreaks. In the Ishinomaki area, one of the areas most damaged by the Great East Japan Earthquake, highly organized assessment helped to successfully manage 328 shelters housing a total of 46,480 evacuees. The input and analysis of vast amounts of data was tedious work for staff members, so a web-based assessment system using mobile devices was expected to decrease the workload and standardize the evaluation form. Access to the information must be controlled in order to maintain individuals' privacy. We successfully developed an access control system based on IDs: a unique numerical ID gives a user access to either the input form or the assessment table. This avoids unnecessary queries to the server, resulting in quick responses and easy availability even over a poor internet connection. PMID:26262204
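
    A minimal sketch of the ID-based gating described above, assuming an invented mapping: the numeric ID itself resolves, in a single lookup, to either the data-entry form or the read-only assessment table. The IDs, roles, and views are hypothetical; the paper does not publish its actual tables.

      # Toy ID-based access control: one lookup on the numeric ID decides which
      # view the client may reach, avoiding further queries to the server.
      # All IDs, roles, and view names are invented for the example.
      ACCESS_TABLE = {
          1001: {"role": "shelter_staff", "shelter": "A",  "view": "input_form"},
          1002: {"role": "shelter_staff", "shelter": "B",  "view": "input_form"},
          9001: {"role": "coordinator",   "shelter": None, "view": "assessment_table"},
      }

      def resolve_view(numeric_id):
          entry = ACCESS_TABLE.get(numeric_id)
          return entry["view"] if entry else "access_denied"

      print(resolve_view(1001))   # input_form
      print(resolve_view(9001))   # assessment_table
      print(resolve_view(5555))   # access_denied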

  15. NASA Access Mechanism - Graphical user interface information retrieval system

    NASA Technical Reports Server (NTRS)

    Hunter, Judy F.; Generous, Curtis; Duncan, Denise

    1993-01-01

    Access to online information sources of aerospace, scientific, and engineering data, a mission focus for NASA's Scientific and Technical Information Program, has always been limited by factors such as telecommunications, query language syntax, lack of standardization in the information, and the lack of adequate tools to assist in searching. Today, the NASA STI Program's NASA Access Mechanism (NAM) prototype offers a solution to these problems by providing the user with a set of tools that provide a graphical interface to remote, heterogeneous, and distributed information in a manner adaptable to both casual and expert users. Additionally, the NAM provides access to many Internet-based services such as Electronic Mail, the Wide Area Information Servers system, Peer Locating tools, and electronic bulletin boards.

  16. NASA access mechanism: Graphical user interface information retrieval system

    NASA Technical Reports Server (NTRS)

    Hunter, Judy; Generous, Curtis; Duncan, Denise

    1993-01-01

    Access to online information sources of aerospace, scientific, and engineering data, a mission focus for NASA's Scientific and Technical Information Program, has always been limited by factors such as telecommunications, query language syntax, lack of standardization in the information, and the lack of adequate tools to assist in searching. Today, the NASA STI Program's NASA Access Mechanism (NAM) prototype offers a solution to these problems by providing the user with a set of tools that provide a graphical interface to remote, heterogeneous, and distributed information in a manner adaptable to both casual and expert users. Additionally, the NAM provides access to many Internet-based services such as Electronic Mail, the Wide Area Information Servers system, Peer Locating tools, and electronic bulletin boards.

  17. Accessing Wind Tunnels From NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The NASA Ames wind tunnel customers are among the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data or for side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100BaseT network connected to the IPG storage server.

  18. Access Control of Web and Java Based Applications

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices, and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spread of viruses become ever more prevalent and serious. Controlling access to application-layer resources is a critical component of a layered security solution that includes encryption, firewalls, virtual private networks, antivirus software, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, that protects both Web-based and Java-based client and server applications.
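
    As a generic illustration of application-layer access control (not the specific open-source access manager or custom components used in the paper), the sketch below gates handler functions on a role-to-permission policy table.

      # Generic application-layer access check: a decorator consults a policy
      # table before letting a handler run.  Roles, permissions, and handlers
      # are illustrative only.
      from functools import wraps

      POLICY = {
          "admin":  {"report:read", "report:write"},
          "viewer": {"report:read"},
      }

      class AccessDenied(Exception):
          pass

      def require(permission):
          def decorator(handler):
              @wraps(handler)
              def wrapper(user_role, *args, **kwargs):
                  if permission not in POLICY.get(user_role, set()):
                      raise AccessDenied(f"{user_role!r} lacks {permission!r}")
                  return handler(user_role, *args, **kwargs)
              return wrapper
          return decorator

      @require("report:write")
      def update_report(user_role, report_id, text):
          return f"report {report_id} updated"

      print(update_report("admin", 42, "new text"))    # allowed
      # update_report("viewer", 42, "new text")        # would raise AccessDenied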

  19. Accessing NASA Technology with the World Wide Web

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  20. Rail Access to Yucca Mountain: Critical Issues

    SciTech Connect

    Halstead, R. J.; Dilger, F.; Moore, R. C.

    2003-02-25

    The proposed Yucca Mountain repository site currently lacks rail access. The nearest mainline railroad is almost 100 miles away. Absence of rail access could result in many thousands of truck shipments of spent nuclear fuel and high-level radioactive waste. Direct rail access to the repository could significantly reduce the number of truck shipments and total shipments. The U.S. Department of Energy (DOE) identified five potential rail access corridors, ranging in length from 98 miles to 323 miles, in the Final Environmental Impact Statement (FEIS) for Yucca Mountain. The FEIS also considers an alternative to rail spur construction, heavy-haul truck (HHT) delivery of rail casks from one of three potential intermodal transfer stations. The authors examine the feasibility and cost of the five rail corridors, and DOE's alternative proposal for HHT transport. The authors also address the potential for rail shipments through the Las Vegas metropolitan area.