Sample records for server electronic resource

  1. The Primary Care Electronic Library: RSS feeds using SNOMED-CT indexing for dynamic content delivery.

    PubMed

    Robinson, Judas; de Lusignan, Simon; Kostkova, Patty; Madge, Bruce; Marsh, A; Biniaris, C

    2006-01-01

    Rich Site Summary (RSS) feeds are a method for disseminating and syndicating the contents of a website using Extensible Markup Language (XML). The Primary Care Electronic Library (PCEL) distributes recent additions to the site in the form of an RSS feed. When new resources are added to PCEL, they are manually assigned medical subject headings (MeSH terms), which are then automatically mapped to SNOMED-CT terms using the Unified Medical Language System (UMLS) Metathesaurus. The library is thus searchable using MeSH or SNOMED-CT. Our syndicate partner wished to have remote access to PCEL coronary heart disease (CHD) information resources based on SNOMED-CT search terms. The objective was to pilot the supply of relevant information resources in response to clinically coded requests, using RSS syndication for transmission between web servers. Our syndicate partner provided a list of CHD SNOMED-CT terms to its end-users, a list which was coded according to UMLS specifications. When the end-user requested relevant information resources, this request was relayed from our syndicate partner's web server to the PCEL web server. The relevant resources were retrieved from the PCEL MySQL database. This database is accessed using a server-side scripting language (PHP), which enables the production of dynamic RSS feeds on the basis of Source Asserted Identifiers (CODEs) contained in UMLS. Retrieving resources by SNOMED-CT term via syndication can be used to build a functioning application. The process from request to display of syndicated resources took less than one second. The results of the pilot illustrate that it is possible to exchange data between servers using RSS syndication. This method could be utilised dynamically to supply digital library resources to a clinical system with SNOMED-CT data used as the standard of reference.
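
    The pilot described above amounts to a parameterised database query rendered as an RSS document. The original implementation used PHP over MySQL; the sketch below is a minimal Python illustration of the same idea, with a hypothetical resources table and column names and any DB-API connection standing in for the MySQL database, not the PCEL code.

```python
# Minimal sketch of a dynamic feed keyed on a UMLS/SNOMED-CT code (not the PCEL code).
# `conn` is any DB-API connection (e.g. sqlite3 or a MySQL driver); the `resources`
# table and its columns are hypothetical. Placeholder style ("?") follows sqlite3.
from xml.etree import ElementTree as ET

def build_rss(conn, umls_code):
    """Return an RSS 2.0 document listing resources indexed under one code."""
    rows = conn.execute(
        "SELECT title, url, summary FROM resources WHERE umls_code = ?",
        (umls_code,),
    ).fetchall()

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Resources for code {umls_code}"
    for title, url, summary in rows:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
        ET.SubElement(item, "description").text = summary
    return ET.tostring(rss, encoding="unicode")
```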

  2. An ontology-based telemedicine tasks management system architecture.

    PubMed

    Nageba, Ebrahim; Fayn, Jocelyne; Rubel, Paul

    2008-01-01

    The recent developments in ambient intelligence and ubiquitous computing offer new opportunities for the design of advanced Telemedicine systems providing high quality services, anywhere, anytime. In this paper we present an approach for building an ontology-based, task-driven telemedicine system. The architecture is composed of a task management server, a communication server and a knowledge base for enabling decision making that takes into account different telemedical concepts such as actors, resources, services and the Electronic Health Record. The final objective is to provide an intelligent management of the different types of available human, material and communication resources.

  3. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
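
    The telephone-system blocking model referred to above is, in its classical form, the Erlang-B formula. The sketch below computes blocking probability from offered load and stream capacity and contrasts a monolithic server with an equally sized partitioned one; the specific model and the numbers are illustrative assumptions, not figures from the paper.

```python
# Erlang-B blocking probability: the classical telephone-system blocking model the
# abstract alludes to (assumed here; the paper's exact model is not reproduced).
def erlang_b(offered_load_erlangs: float, capacity_streams: int) -> float:
    """Probability that a new stream request is rejected."""
    b = 1.0
    for n in range(1, capacity_streams + 1):
        b = (offered_load_erlangs * b) / (n + offered_load_erlangs * b)
    return b

# Illustrative comparison: one 1000-stream server vs. four independent 250-stream
# partitions carrying the same total offered load (hypothetical numbers).
load = 900.0
print(erlang_b(load, 1000))      # monolithic server
print(erlang_b(load / 4, 250))   # one partition of a 4-way partitioned server
```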

  4. Internet Connections to Mathematics Education.

    ERIC Educational Resources Information Center

    Day, Roger

    1995-01-01

    Presents Internet connections appropriate for mathematics education; provides descriptions of ways that mathematics educators can access electronic resources such as e-mail, discussion groups, gopher servers, and transfer of files; and provides hints and examples from classroom connections and professional applications. (18 references) (Author/MKR)

  5. Canute Rules the Waves?: Hope for E-Library Tools Facing the Challenge of the "Google Generation"

    ERIC Educational Resources Information Center

    Myhill, Martin

    2007-01-01

    Purpose: To consider the findings of a recent e-resources survey at the University of Exeter Library in the context of the dominance of web search engines in academia, balanced by the development of e-library tools such as the library OPAC, OpenURL resolvers, metasearch engines, LDAP and proxy servers, and electronic resource management modules.…

  6. Electronic document distribution: Design of the anonymous FTP Langley Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.

    1994-01-01

    An experimental electronic dissemination project, the Langley Technical Report Server (LTRS), has been undertaken to determine the feasibility of delivering Langley technical reports directly to the desktops of researchers worldwide. During the first six months, over 4700 accesses occurred and over 2400 technical reports were distributed. This usage indicates the high level of interest that researchers have in performing literature searches and retrieving technical reports at their desktops. The initial system was developed with existing resources and technology. The reports are stored as files on an inexpensive UNIX workstation and are accessible over the Internet. This project will serve as a foundation for ongoing projects at other NASA centers that will allow for greater access to NASA technical reports.

  7. 32 CFR 701.6 - Reading rooms.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., online documents, and Navy electronic reading rooms maintained by SECNAV/CNO, CMC, OGC, JAG and Echelon 2... servers, the Navy FOIA website provides a common gateway for all Navy online resources. To this end, DON... clearly unwarranted invasions of privacy, or competitive harm to business submitters. In appropriate cases...

  8. General Framework for Animal Food Safety Traceability Using GS1 and RFID

    NASA Astrophysics Data System (ADS)

    Cao, Weizhu; Zheng, Limin; Zhu, Hong; Wu, Ping

    GS1 is a global traceability standard composed of an encoding system (EAN/UCC, EPC), automatic-identification data carriers (bar codes, RFID), and electronic data interchange standards (EDI, XML). RFID is a non-contact, multi-object automatic identification technique. Tracing food back to its source, standardizing RFID tags, and sharing dynamic data are urgent problems for current traceability systems. This paper designs a general framework for animal food safety traceability using GS1 and RFID. The framework uses RFID tags encoded with the EPCglobal tag data standards. Each information server has an access tier, a business tier and a resource tier. These servers are heterogeneous and distributed, providing user access interfaces via SOAP or HTTP protocols. For sharing dynamic data, a discovery service and an object name service are used to locate the dynamic, distributed information servers.

  9. Development of mobile platform integrated with existing electronic medical records.

    PubMed

    Kim, YoungAh; Kim, Sung Soo; Kang, Simon; Kim, Kyungduk; Kim, Jun

    2014-07-01

    This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions.

  10. Development of Mobile Platform Integrated with Existing Electronic Medical Records

    PubMed Central

    Kim, YoungAh; Kang, Simon; Kim, Kyungduk; Kim, Jun

    2014-01-01

    Objectives This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. Methods We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Results Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. Conclusions The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions. PMID:25152837

  11. A Study on Partnering Mechanism in B to B EC Server for Global Supply Chain Management

    NASA Astrophysics Data System (ADS)

    Kaihara, Toshiya

    B to B Electronic Commerce (EC) technology is advancing rapidly and is regarded as an information infrastructure for global business. As the number and diversity of EC participants grows in an agile environment, the complexity of purchasing from a vast and dynamic array of goods and services needs to be hidden from the end user. Putting the complexity into the EC system instead means providing a flexible auction server for enabling commerce between different business units. A market mechanism could solve the product distribution problem in the auction server by allocating the scheduled resources according to market prices. In this paper, we propose a partnering mechanism for B to B EC with market-oriented programming that mediates among unspecified companies in the trade, and demonstrate the applicability of economic analysis to this framework after constructing a primitive EC server. The proposed mechanism facilitates sophisticated B to B EC, which yields a Pareto optimal solution for all the participating business units in the coming agile era.

  12. A Fast Healthcare Interoperability Resources (FHIR) layer implemented over i2b2.

    PubMed

    Boussadi, Abdelali; Zapletal, Eric

    2017-08-14

    Standards and technical specifications have been developed to define how the information contained in Electronic Health Records (EHRs) should be structured, semantically described, and communicated. Current trends rely on differentiating the representation of data instances from the definition of clinical information models. The dual model approach, which combines a reference model (RM) and a clinical information model (CIM), sets in practice this software design pattern. The most recent initiative, proposed by HL7, is called Fast Healthcare Interoperability Resources (FHIR). The aim of our study was to investigate the feasibility of applying the FHIR standard to modeling and exposing EHR data of the Georges Pompidou European Hospital (HEGP) Integrating Biology and the Bedside (i2b2) clinical data warehouse (CDW). We implemented a FHIR server over i2b2 to expose EHR data in relation with five FHIR resources: DiagnosticReport, MedicationOrder, Patient, Encounter, and Medication. The architecture of the server combines a Data Access Object design pattern and FHIR resource providers, implemented using the Java HAPI FHIR API. Two types of queries were tested: Query type #1 requests the server to display DiagnosticReport resources for which the diagnosis code is equal to a given ICD-10 code. A total of 80 DiagnosticReport resources, corresponding to 36 patients, were displayed. Query type #2 requests the server to display MedicationOrder resources for which the FHIR Medication identification code is equal to a given code expressed in a French coding system. A total of 503 MedicationOrder resources, corresponding to 290 patients, were displayed. Results were validated by manually comparing the results of each request to the results displayed by an ad hoc SQL query. We showed the feasibility of implementing a Java layer over the i2b2 database model to expose data of the CDW as a set of FHIR resources. An important part of this work was the structural and semantic mapping between the i2b2 model and the FHIR RM. To accomplish this, developers must manually browse the specifications of the FHIR standard. Our source code is freely available and can be adapted for use in other i2b2 sites.
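
    The two query types described above correspond to ordinary FHIR REST searches. The sketch below issues them with Python's requests library against a placeholder FHIR base URL; the endpoint, the codes, and the chained medication.code parameter are assumptions for illustration, not the HEGP server or its exact search parameters.

```python
# Sketch of the two searches described above against a generic FHIR REST endpoint.
# BASE and the codes are placeholders; "medication.code" is a chained search parameter
# assumed to be supported, not a detail taken from the paper.
import requests

BASE = "https://example.org/fhir"  # hypothetical FHIR server base URL

# Query type #1: DiagnosticReport resources whose diagnosis code equals an ICD-10 code.
r1 = requests.get(f"{BASE}/DiagnosticReport",
                  params={"code": "http://hl7.org/fhir/sid/icd-10|I50.9"}, timeout=30)

# Query type #2: MedicationOrder resources whose referenced Medication carries a given code.
r2 = requests.get(f"{BASE}/MedicationOrder",
                  params={"medication.code": "http://example.org/french-drug-codes|1234"}, timeout=30)

for bundle in (r1.json(), r2.json()):
    print(bundle.get("resourceType"), bundle.get("total"))
```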

  13. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    ERIC Educational Resources Information Center

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  14. A strategy for providing electronic library services to members of the AGATE Consortium

    NASA Technical Reports Server (NTRS)

    Thompson, J. Garth

    1995-01-01

    In November 1992, NASA Administrator Daniel Goldin established a Task Force to evaluate the conditions which have led to the precipitous decline of the US General Aviation System and to recommend actions needed to re-establish US leadership in General Aviation. The Task Force Report and a report by Dr. Bruce J. Holmes, Manager of the General Aviation/Commuter Office at NASA Langley Research Center, provided the directions for the formation of the Advanced General Aviation Transport Experiments (AGATE), a consortium of government, industry and universities committed to the revitalization of the US General Aviation Industry. One of the recommendations of the Task Force Report was that 'a central repository of information should be created to disseminate NASA research as well as other domestic and foreign aeronautical research that has been accomplished, is ongoing or is planned... A user friendly environment should be created.' This paper describes technical and logistic issues and recommends a plan for providing technical information to members of the AGATE Consortium. It is recommended that the General Aviation office establish and maintain an electronic literature page on the AGATE server. This page should provide a user-friendly interface to the existing technical report and index servers identified in the report and listed in the Recommendations section. A page should also be provided which gives links to Web resources; a list of specific resources is provided in the Recommendations section. Links should also be provided to a page with tips on searching and to a form for feedback and suggestions from users for other resources. Finally, a page should be maintained which provides pointers to other resources like the LaRCsim workstation simulation software, which is available from LaRC at no cost. The development of the Web is very dynamic. These developments should be monitored regularly by the GA staff, and links to additional resources should be provided on the server as they become available. A recommendation should be made to NASA Headquarters to establish logically central access to all of the NASA Technical Libraries, to make these resources available both to all NASA employees and to the AGATE Consortium.

  15. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  16. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use; it ensures compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require an explicit policy definition, such as a server-load threshold, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resource and thereby do not result in a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs → the Hadoop/Big Data case; B. transactional applications → any application that processes continuous transactions (requests/responses). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue, and a metric on such a queue, to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to examine the level of load (CPU/data) at the server level to characterize the size requirement? How do we learn that based on job type?
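
    One way to read the quasi-stationary sizing question posed above is to re-estimate an M/M/c-style capacity from the predicted arrival rate over each short window. The sketch below illustrates that reading only; it is not the authors' model, and the arrival and service figures are hypothetical.

```python
# Illustrative quasi-stationary sizing: treat each prediction window as M/M/c and pick
# the smallest server count that keeps the expected backlog under a target. This is a
# reading of the question posed above, not the authors' model; numbers are hypothetical.
import math

def mean_queue_length(lam, mu, c):
    """Expected number of waiting requests (Lq) in an M/M/c queue."""
    rho = lam / (c * mu)
    if rho >= 1:
        return float("inf")
    a = lam / mu
    tail = a**c / (math.factorial(c) * (1 - rho))
    p_wait = tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)  # Erlang C
    return p_wait * rho / (1 - rho)

def servers_needed(lam, mu, max_backlog):
    """Smallest c with predicted Lq below the target for this window."""
    c = max(1, math.ceil(lam / mu))
    while mean_queue_length(lam, mu, c) > max_backlog:
        c += 1
    return c

# Predicted 120 req/s for the next window, 10 req/s per server, backlog target of 5.
print(servers_needed(120.0, 10.0, 5.0))
```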

  17. Ecoupling server: A tool to compute and analyze electronic couplings.

    PubMed

    Cabeza de Vaca, Israel; Acebes, Sandra; Guallar, Victor

    2016-07-05

    Electron transfer processes are often studied through the evaluation and analysis of the electronic coupling (EC). Since most standard QM codes do not readily provide such a measure, additional user-friendly tools to compute and analyze electronic coupling from external wave functions are of high value. The first server to provide a friendly interface for the evaluation and analysis of electronic couplings under two different approximations (FDC and GMH) is presented in this communication. The Ecoupling server accepts inputs from common QM and QM/MM software and provides useful plots to understand and analyze the results easily. The web server has been implemented in CGI-python using Apache and is accessible at http://ecouplingserver.bsc.es. The Ecoupling server is free and open to all users without login. © 2016 Wiley Periodicals, Inc.

  18. The anatomy of a World Wide Web library service: the BONES demonstration project. Biomedically Oriented Navigator of Electronic Services.

    PubMed Central

    Schnell, E H

    1995-01-01

    In 1994, the John A. Prior Health Sciences Library at Ohio State University began to develop a World Wide Web demonstration project, the Biomedically Oriented Navigator of Electronic Services (BONES). The initial intent of BONES was to facilitate the health professional's access to Internet resources by organizing them in a systematic manner. The project not only met this goal but also helped identify the resources needed to launch a full-scale Web library service. This paper discusses the tasks performed and resources used in the development of BONES and describes the creation and organization of documents on the BONES Web server. The paper also discusses the outcomes of the project and the impact on the library's staff and services. PMID:8547903

  19. Innovation Incubator: LiquidCool Solutions Technical Evaluation. Laboratory Study and Demonstration Results of a Directed-Flow, Liquid Submerged Server for High-Efficiency Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozubal, Eric J

    LiquidCool Solutions (LCS) has developed liquid submerged server (LSS) technology that changes the way computer electronics are cooled. The technology provides an option to cool electronics by the direct contact flow of dielectric fluid (coolant) into a sealed enclosure housing all the electronics of a single server. The intimate dielectric fluid contact with electronics improves the effectiveness of heat removal from the electronics.

  20. Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems

    NASA Astrophysics Data System (ADS)

    Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki

    Recent developments in network technology have made multi-tier server systems practical, where several tiers perform functionally different processing requested by clients. It is an important issue to allocate the resources of such systems to clients dynamically based on their current requests. On the other hand, Q-RAM has been proposed for resource allocation in real-time systems. In server systems, it is important that the execution results of all applications requested by clients attain the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness of the QoS levels of clients' requests. We also consider the assignment problem of which physical machines to put to sleep in each tier so that the energy consumption is minimized.
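
    The fairness requirement described above, all clients served at the same QoS level, can be illustrated with a simple search for the highest level whose total resource cost fits the available capacity. The sketch below is an illustration of that idea only, not the authors' Q-RAM extension; the cost table is hypothetical.

```python
# Illustration of the fairness idea above: choose the highest QoS level at which every
# client's request can be served within the total resource budget. Not the authors'
# Q-RAM extension; the cost table is hypothetical.
def highest_common_qos(cost_per_level, total_resource):
    """cost_per_level[client][level] = resource needed to serve that client at that level."""
    num_levels = min(len(levels) for levels in cost_per_level)
    for level in reversed(range(num_levels)):            # try the best level first
        if sum(levels[level] for levels in cost_per_level) <= total_resource:
            return level
    return None                                          # not even the lowest level fits

costs = [[1, 2, 4], [1, 3, 6], [2, 2, 5]]   # three clients, QoS levels 0..2
print(highest_common_qos(costs, 12))         # -> 1
```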

  1. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for the proliferation of electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantee of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  2. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    PubMed

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  3. Resource conservation approached with an appropriate collection and upgrade-remanufacturing for used electronic products.

    PubMed

    Zlamparet, Gabriel I; Tan, Quanyin; Stevels, A B; Li, Jinhui

    2018-03-01

    This comparative research provides an example of better conservation of resources by reducing the amount of waste (kg) and giving it more value under the umbrella of remanufacturing. The three cases discussed expose three issues already addressed separately in the literature. The generation of waste electrical and electronic equipment (WEEE) contributes to environmental depletion. In this article, we give examples of these issues addressed under the concept of remanufacturing: an online collection opportunity that eliminates classical collection, and a business-to-business (B2B) implementation for remanufactured servers and medical devices. Material reuse (recycling), component sustainability and reuse (part harvesting), and product reuse (after repair/remanufacturing) indicate the recovery potential of remanufacturing as a tool for better conservation of resources, adding more value to the products. Our findings provide an overview of a new system organization for general collection, the market potential, and the technological advantages of using remanufacturing instead of recycling of WEEE or used electrical and electronic equipment. Copyright © 2017. Published by Elsevier Ltd.

  4. Get the Word Out with List Servers

    ERIC Educational Resources Information Center

    Goldberg, Laurence

    2006-01-01

    In this article, the author details the use of electronic mail list servers in their school district. In this district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…

  5. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is to perform the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed and acceptable correct-recognition metrics.
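
    The closing condition above, that offloading wins only if the OCR speed-up outweighs the added delays, reduces to a single inequality; the sketch below states it with hypothetical timing numbers, not the study's measurements.

```python
# The trade-off stated above as a single inequality: offload only if the server-side
# OCR speed-up outweighs the added network delay. All numbers are hypothetical.
def offload_is_faster(local_ocr_s, server_ocr_s, image_bytes, uplink_bytes_per_s, fixed_delay_s):
    network_s = image_bytes / uplink_bytes_per_s + fixed_delay_s
    return server_ocr_s + network_s < local_ocr_s

print(offload_is_faster(local_ocr_s=9.0, server_ocr_s=1.5,
                        image_bytes=400_000,        # compressed/downscaled capture
                        uplink_bytes_per_s=250_000,
                        fixed_delay_s=0.8))
```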

  6. Server-Based and Server-Less Byod Solutions to Support Electronic Learning

    DTIC Science & Technology

    2016-06-01

    …mobile devices, institute mobile device policies and standards, and promote the development and use of DOD mobile and web-enabled applications” (DOD…with an isolated BYOD web server, properly educated system administrators must carry out and execute the necessary, pre-defined network security…

  7. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available in EGI via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.

  8. Information resources assessment of a healthcare integrated delivery system.

    PubMed Central

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  9. Worldwide telemedicine services based on distributed multimedia electronic patient records by using the second generation Web server hyperwave.

    PubMed

    Quade, G; Novotny, J; Burde, B; May, F; Beck, L E; Goldschmidt, A

    1999-01-01

    A distributed multimedia electronic patient record (EPR) is a central component of a medicine-telematics application that supports physicians working in rural areas of South America, and offers medical services to scientists in Antarctica. A Hyperwave server is used to maintain the patient record. As opposed to common web servers--and as a second generation web server--Hyperwave provides the capability of holding documents in a distributed web space without the problem of broken links. This enables physicians to browse through a patient's record by using a standard browser even if the patient's record is distributed over several servers. The patient record is basically implemented on the "Good European Health Record" (GEHR) architecture.

  10. Implementation of a World Wide Web server for the oil and gas industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, R.E.; Martin, F.D.; Emery, R.

    1995-12-31

    The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for the purpose of exchanging ideas, data, and technology. The personal computer-based system fosters communication and discussion by linking oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers are provided access to the GO-TECH World Wide Web home page via modem links, as well as Internet. The future GO-TECH applications will include the establishment of "virtual corporations" consisting of consortiums of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.

  11. Implementation of a World Wide Web server for the oil and gas industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, R.E.; Martin, F.D.; Emery, R.

    1996-10-01

    The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for exchanging ideas, data, and technology. The PC-based system fosters communication and discussion by linking the oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers can access the GO-TECH World Wide Web (WWW) home page through modem links, as well as through the Internet. Future GO-TECH applications will include the establishment of virtual corporations consisting of consortia of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.

  12. The EBI SRS server-new features.

    PubMed

    Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure

    2002-08-01

    Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at EBI as well as European public access point to the MEDLINE database provided by US National Library of Medicine (NLM). It is a reference server for latest developments in data and application integration. The new additions include: concept of virtual databases, integration of XML databases like the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, Metabolic pathways, etc., user friendly data representation in 'Nice views', SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.

  13. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
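
    Using the report's definition of efficiency as average compute rate divided by average power draw, server comparisons reduce to a simple ratio; the sketch below shows the calculation with hypothetical benchmark scores and wattages, not the report's measurements.

```python
# Efficiency as defined above: average compute rate divided by average power draw.
# Benchmark scores and wattages are hypothetical, not the report's measurements.
def efficiency(ops_per_second: float, watts: float) -> float:
    return ops_per_second / watts            # operations per joule

servers = {"brand_a": (1.80e6, 310.0), "brand_b": (1.75e6, 295.0)}
for name, (rate, power) in servers.items():
    print(name, round(efficiency(rate, power)), "ops/J")
```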

  14. GRAMM-X public web server for protein–protein docking

    PubMed Central

    Tovchigrechko, Andrey; Vakser, Ilya A.

    2006-01-01

    Protein docking software GRAMM-X and its web interface () extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, refinement stage, and knowledge-based scoring. The web server frees users from complex installation of database-dependent parallel software and maintaining large hardware resources needed for protein docking simulations. Docking problems submitted to GRAMM-X server are processed by a 320 processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016

  15. The World-Wide Web and Mosaic: An Overview for Librarians.

    ERIC Educational Resources Information Center

    Morgan, Eric Lease

    1994-01-01

    Provides an overview of the Internet's World-Wide Web (Web), a hypertext system. Highlights include the client/server model; Uniform Resource Locator; examples of software; Web servers versus Gopher servers; HyperText Markup Language (HTML); converting files; Common Gateway Interface; organizing Web information; and the role of librarians in…

  16. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H.

    2000-12-01

    As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, we propose an algorithm to find an initial condition that successfully places videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.

  17. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, we propose an algorithm to find an initial condition that successfully places videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
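
    A sorting-based greedy assignment is one way to read the O(M log M) bound quoted above. The sketch below is an illustration under that assumption, not the authors' algorithm: it orders movies by expected demand and places each on the disk with the most spare stream bandwidth, raising an error when resources run out (the point at which more servers/disks would be added). The capacities are hypothetical.

```python
# Illustrative greedy placement consistent with a sort-dominated O(M log M) cost when
# the disk count is small; this is not the authors' algorithm, and all numbers are
# hypothetical.
def place_videos(movies, disks):
    """movies: (name, demand_streams, size_gb) tuples; disks: dicts with spare
    'bandwidth' (streams) and 'space' (GB). Returns {movie name: disk index}."""
    placement = {}
    for name, demand, size in sorted(movies, key=lambda m: -m[1]):   # hottest first
        best = max(range(len(disks)), key=lambda i: disks[i]["bandwidth"])
        if disks[best]["bandwidth"] < demand or disks[best]["space"] < size:
            raise RuntimeError(f"cannot place {name}: add more servers/disks")
        disks[best]["bandwidth"] -= demand
        disks[best]["space"] -= size
        placement[name] = best
    return placement

disks = [{"bandwidth": 100, "space": 500} for _ in range(3)]
movies = [("a", 60, 40), ("b", 45, 40), ("c", 30, 40), ("d", 20, 40)]
print(place_videos(movies, disks))
```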

  18. Resource Management Scheme Based on Ubiquitous Data Analysis

    PubMed Central

    Lee, Heung Ki; Jung, Jaehee

    2014-01-01

    Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
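
    A minimal sketch of the control idea described above: predict the next window's request count from recent log counts, then size the pool of pre-generated worker processes to match. The smoothing constant and pool bounds are hypothetical, not the paper's scheme.

```python
# Minimal sketch of a log-driven process-pool controller in the spirit of the scheme
# above: exponentially smooth recent request counts and resize the pool of pre-forked
# worker processes to match. The constants are hypothetical, not the paper's values.
class ProcessPoolController:
    def __init__(self, requests_per_worker=50, alpha=0.3, min_workers=2, max_workers=64):
        self.requests_per_worker = requests_per_worker
        self.alpha = alpha                    # smoothing factor for the prediction
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.predicted = 0.0

    def target_workers(self, requests_last_window: int) -> int:
        # Exponentially weighted prediction of the next window's request count.
        self.predicted = self.alpha * requests_last_window + (1 - self.alpha) * self.predicted
        workers = round(self.predicted / self.requests_per_worker)
        return max(self.min_workers, min(self.max_workers, workers))

controller = ProcessPoolController()
for count in [120, 300, 800, 650, 90]:        # requests observed per log window
    print(controller.target_workers(count))
```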

  19. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows creation of a virtual reality 360-degree video player using standard OpenGL ES rendering methods. It provides network methods for connecting to a web server acting as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process of rendering additional content based on the video timestamp and the virtual reality head point of view.

  20. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essiari, Abdelilah; Mudumbai, Srilehka; Thompson, Mary

    Akenti is an authorization service for distributed resources. The authorization policy is kept in distributed certificates signed by one or more stakeholders for the resources. The package consists of the following components: Java GUI tools to create and sign the policy certificates; C++ libraries to make access decisions based on the policy certificates; a standalone authorization server that makes access decisions; and C interfaces to the libraries and the server.

  2. Triple-server blind quantum computation using entanglement swapping

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since it requires no quantum computational power, quantum memory, or ability to prepare quantum states, and only needs access to quantum channels.

  3. Realizing the Potential of Information Resources: Information, Technology, and Services. Track 3: Serving Clients with Client/Server.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Eight papers are presented from the 1995 CAUSE conference track on client/server issues faced by managers of information technology at colleges and universities. The papers include: (1) "The Realities of Client/Server Development and Implementation" (Mary Ann Carr and Alan Hartwig), which examines Carnegie Mellon University's transition…

  4. Mobile technology in radiology resident education.

    PubMed

    Korbage, Aiham C; Bedi, Harprit S

    2012-06-01

    The authors hypothesized that ownership of a mobile electronic device would result in more time spent learning radiology. Current trends in radiology residents' studying habits, their use of electronic and printed radiology learning resources, and how much of the funds allotted to them are being used toward printed vs electronic education tools were assessed in this study. A survey study was conducted among radiology residents across the United States from June 13 to July 5, 2011. Program directors listed in the Association of Program Directors in Radiology e-mail list server received an e-mail asking for residents to participate in an online survey. The questionnaire consisted of 12 questions and assessed the type of institution, the levels of training of the respondents, and book funds allocated to residents. It also assessed the residents' study habits, access to portable devices, and use of printed and electronic radiology resources. Radiology residents are adopters of new technologies, with 74% owning smart phones and 37% owning tablet devices. Respondents spend nearly an equal amount of time learning radiology from printed textbooks as they do from electronic resources. Eighty-one percent of respondents believe that they would spend more time learning radiology if provided with tablet devices. There is considerable use of online and electronic resources and mobile devices among the current generation of radiology residents. Benefits, such as more study time, may be obtained by radiology programs that incorporate tablet devices into the education of their residents. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. Biomedical information @ the speed of light: implementing desktop access to publishers' resources at the Paterson Institute for Cancer Research.

    PubMed

    Glover, S W

    2001-06-01

    Shortly after midnight every Thursday morning, a list server in Massachusetts delivers an electronic table of contents message to the Kostoris Medical Library at the Paterson Institute for Cancer Research in Manchester, UK. The message contains details of the latest edition of the New England Journal of Medicine, complete with hyperlinks to the full text of the content online. Publishers' electronic current awareness services have been integrated into the dissemination process of the Library service to enhance the speed of communication and access to full-text content. As a means of promoting electronic journal use, a system of e-mail delivery coupled with fast Internet access has allowed a migration from paper-based current awareness alerting to a seamless online product.

  6. EnviroAtlas - Metrics for Austin, TX

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web service depict ecosystem services at the census block group level for the community of Austin, Texas. These layers illustrate the ecosystems and natural resources that are associated with clean air (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanAir/MapServer); clean and plentiful water (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanPlentifulWater/MapServer); natural hazard mitigation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_NaturalHazardMitigation/MapServer); climate stabilization (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_ClimateStabilization/MapServer); food, fuel, and materials (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_FoodFuelMaterials/MapServer); recreation, culture, and aesthetics (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_RecreationCultureAesthetics/MapServer); and biodiversity conservation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_BiodiversityConservation/MapServer), and factors that place stress on those resources. EnviroAtlas allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the conterminous United States as well as de
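
    The layers listed above are exposed as standard ArcGIS REST MapServer endpoints, so their service metadata and layer lists can be fetched programmatically. The sketch below assumes the usual f=pjson query convention of the ArcGIS REST API; it is an illustration, not part of the EnviroAtlas documentation.

```python
# Sketch of querying one of the EnviroAtlas MapServer endpoints above for its service
# metadata. The f=pjson query form is the standard ArcGIS REST convention (assumed here).
import requests

SERVICE = ("https://enviroatlas.epa.gov/arcgis/rest/services/"
           "Communities/ESC_ATX_CleanAir/MapServer")

info = requests.get(SERVICE, params={"f": "pjson"}, timeout=30).json()
print(info.get("serviceDescription", "")[:200])
for layer in info.get("layers", []):
    print(layer["id"], layer["name"])
```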

  7. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  8. Development of a Web-based Glaucoma Registry at King Khaled Eye Specialist Hospital, Saudi Arabia: A Cost-Effective Methodology

    PubMed Central

    Zaman, Babar; Khandekar, Rajiv; Al Shahwan, Sami; Song, Jonathan; Al Jadaan, Ibrahim; Al Jiasim, Leyla; Owaydha, Ohood; Asghar, Nasira; Hijazi, Amar; Edward, Deepak P.

    2014-01-01

    In this brief communication, we present the steps used to establish a web-based congenital glaucoma registry at our institution. The contents of a case report form (CRF) were developed by a group of glaucoma subspecialists. Information Technology (IT) specialists used Lime Survey™ software to create an electronic CRF. A My Structured Query Language (MySQL) server was used as a database with a virtual machine operating system. Two ophthalmologists and 2 IT specialists worked for 7 hours, and a biostatistician and a data registrar worked for 24 hours each to establish the electronic CRF. Using the CRF, which was transferred to the Lime Survey tool, and the MySQL server application, data could be directly stored in spreadsheet programs that included Microsoft Excel, SPSS, and R-Language and queried in real-time. In a pilot test, clinical data from 80 patients with congenital glaucoma were entered into the registry and successful descriptive analysis and data entry validation were performed. A web-based disease registry was established in a short period of time in a cost-efficient manner using available resources and a team-based approach. PMID:24791112

  9. Development of a web-based glaucoma registry at King Khaled Eye Specialist Hospital, Saudi Arabia: a cost-effective methodology.

    PubMed

    Zaman, Babar; Khandekar, Rajiv; Al Shahwan, Sami; Song, Jonathan; Al Jadaan, Ibrahim; Al Jiasim, Leyla; Owaydha, Ohood; Asghar, Nasira; Hijazi, Amar; Edward, Deepak P

    2014-01-01

    In this brief communication, we present the steps used to establish a web-based congenital glaucoma registry at our institution. The contents of a case report form (CRF) were developed by a group of glaucoma subspecialists. Information Technology (IT) specialists used Lime Survey softwareTM to create an electronic CRF. A MY Structured Query Language (MySQL) server was used as a database with a virtual machine operating system. Two ophthalmologists and 2 IT specialists worked for 7 hours, and a biostatistician and a data registrar worked for 24 hours each to establish the electronic CRF. Using the CRF which was transferred to the Lime survey tool, and the MYSQL server application, data could be directly stored in spreadsheet programs that included Microsoft Excel, SPSS, and R-Language and queried in real-time. In a pilot test, clinical data from 80 patients with congenital glaucoma were entered into the registry and successful descriptive analysis and data entry validation was performed. A web-based disease registry was established in a short period of time in a cost-efficient manner using available resources and a team-based approach.

  10. An approach to medical knowledge sharing in a hospital information system using MCLink.

    PubMed

    Shibuya, Akiko; Inoue, Ryusuke; Nakayama, Masaharu; Kasahara, Shin; Maeda, Yukihiro; Umesato, Yoshimasa; Kondo, Yoshiaki

    2013-08-01

    Clinicians often need access to electronic information resources that answer questions that occur in daily clinical practice. This information generally comes from publicly available resources. However, clinicians also need knowledge on institution-specific information (e.g., institution-specific guidelines, choice of drug, choice of laboratory test, information on adverse events, and advice from professional colleagues). This information needs to be available in real time. This study characterizes these needs in order to build a prototype hospital information system (HIS) that can help clinicians get timely answers to questions. We previously designed medical knowledge units called Medical Cells (MCs). We developed a portal server of MCs that can create and store medical information such as institution-specific information. We then developed a prototype HIS that embeds MCs as links (MCLink); these links are based on specific terms (e.g., drug, laboratory test, and disease). This prototype HIS presents clinicians with institution-specific information. The HIS clients (e.g., clinicians, nurses, pharmacists, and laboratory technicians) can also create an MCLink in the HIS using the portal server in the hospital. The prototype HIS allowed efficient sharing and use of institution-specific information to clinicians at the point of care. This study included institution-specific information resources and advice from professional colleagues, both of which might have an important role in supporting good clinical decision making.

  11. National Medical Terminology Server in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary-to-tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  12. Using Web Server Logs to Track Users through the Electronic Forest

    ERIC Educational Resources Information Center

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, which provide helpful information for making decisions about Web-based services. The author indicates that, as a result of analyzing server logs, several interesting things about users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  13. 21 CFR 1300.03 - Definitions relating to electronic orders for controlled substances and electronic prescriptions...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... standard for biometric data specifications for personal identity verification. Operating point means a... records on its servers. Audit trail means a record showing who has accessed an information technology... information on a local server or hard drive. Certificate policy means a named set of rules that sets forth the...

  14. 21 CFR 1300.03 - Definitions relating to electronic orders for controlled substances and electronic prescriptions...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... standard for biometric data specifications for personal identity verification. Operating point means a... records on its servers. Audit trail means a record showing who has accessed an information technology... information on a local server or hard drive. Certificate policy means a named set of rules that sets forth the...

  15. 21 CFR 1300.03 - Definitions relating to electronic orders for controlled substances and electronic prescriptions...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... standard for biometric data specifications for personal identity verification. Operating point means a... records on its servers. Audit trail means a record showing who has accessed an information technology... information on a local server or hard drive. Certificate policy means a named set of rules that sets forth the...

  16. CentiServer: A Comprehensive Resource, Web-Based Application and R Package for Centrality Analysis.

    PubMed

    Jalili, Mahdi; Salehzadeh-Yazdi, Ali; Asgari, Yazdan; Arab, Seyed Shahriar; Yaghmaie, Marjan; Ghavamzadeh, Ardeshir; Alimoghaddam, Kamran

    2015-01-01

    Various disciplines are trying to solve one of the most noteworthy queries and broadly used concepts in biology, essentiality. Centrality is a primary index and a promising method for identifying essential nodes, particularly in biological networks. The newly created CentiServer is a comprehensive online resource that provides over 110 definitions of different centrality indices, their computational methods, and algorithms in the form of an encyclopedia. In addition, CentiServer allows users to calculate 55 centralities with the help of an interactive web-based application tool and provides a numerical result as a comma separated value (csv) file format or a mapped graphical format as a graph modeling language (GML) file. The standalone version of this application has been developed in the form of an R package. The web-based application (CentiServer) and R package (centiserve) are freely available at http://www.centiserver.org/.
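
    As a rough illustration of the centrality-analysis workflow described above (compute several centrality indices, then export a CSV table and a GML graph), here is a minimal Python sketch using networkx on a synthetic graph; it is not the centiserve R package or the CentiServer web tool.

```python
# A minimal sketch of the centrality-analysis workflow: compute several
# centralities, write a CSV table and a GML graph.  Uses networkx on a
# synthetic graph, not the centiserve R package.
import csv
import networkx as nx

G = nx.karate_club_graph()  # stand-in for a biological network

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
}

# Write a node-by-centrality table as CSV.
with open("centralities.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["node"] + list(centralities))
    for node in G.nodes():
        writer.writerow([node] + [round(centralities[c][node], 4) for c in centralities])

# Attach the scores as node attributes and export a GML graph.
for name, scores in centralities.items():
    nx.set_node_attributes(G, scores, name)
nx.write_gml(G, "centralities.gml")
```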

  17. CentiServer: A Comprehensive Resource, Web-Based Application and R Package for Centrality Analysis

    PubMed Central

    Jalili, Mahdi; Salehzadeh-Yazdi, Ali; Asgari, Yazdan; Arab, Seyed Shahriar; Yaghmaie, Marjan; Ghavamzadeh, Ardeshir; Alimoghaddam, Kamran

    2015-01-01

    Various disciplines are trying to solve one of the most noteworthy queries and broadly used concepts in biology, essentiality. Centrality is a primary index and a promising method for identifying essential nodes, particularly in biological networks. The newly created CentiServer is a comprehensive online resource that provides over 110 definitions of different centrality indices, their computational methods, and algorithms in the form of an encyclopedia. In addition, CentiServer allows users to calculate 55 centralities with the help of an interactive web-based application tool and provides a numerical result as a comma separated value (csv) file format or a mapped graphical format as a graph modeling language (GML) file. The standalone version of this application has been developed in the form of an R package. The web-based application (CentiServer) and R package (centiserve) are freely available at http://www.centiserver.org/ PMID:26571275

  18. LDAP: a web server for lncRNA-disease association prediction.

    PubMed

    Lan, Wei; Li, Min; Zhao, Kaijie; Liu, Jin; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin

    2017-02-01

    Increasing evidence has demonstrated that long noncoding RNAs (lncRNAs) play important roles in many human diseases. Therefore, predicting novel lncRNA-disease associations would help dissect the complex mechanisms of disease pathogenesis. Some computational methods have been developed to infer lncRNA-disease associations. However, most of these methods infer lncRNA-disease associations based on only a single data resource. In this paper, we propose a new computational method to predict lncRNA-disease associations by integrating multiple biological data resources. Then, we implement this method as a web server for lncRNA-disease association prediction (LDAP). The input of the LDAP server is the lncRNA sequence. LDAP predicts potential lncRNA-disease associations using a bagging SVM classifier based on lncRNA similarity and disease similarity. The web server is available at http://bioinformatics.csu.edu.cn/ldap. Contact: jxwang@mail.csu.edu.cn. Supplementary data are available at Bioinformatics online.
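
    To make the classifier type named above concrete, here is a generic bagging-SVM sketch in scikit-learn; the feature matrix is synthetic stand-in data, not real lncRNA/disease similarity features, and this is not the LDAP server's implementation.

```python
# Generic bagging-SVM sketch (scikit-learn); the features are synthetic,
# standing in for concatenated lncRNA- and disease-similarity vectors.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # pretend similarity-derived features per pair
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bag several SVMs, each trained on a bootstrap sample of the pairs.
model = BaggingClassifier(SVC(kernel="rbf", probability=True),
                          n_estimators=10, random_state=0)
model.fit(X_train, y_train)

print("association scores:", model.predict_proba(X_test[:3])[:, 1])
print("test accuracy:", model.score(X_test, y_test))
```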

  19. GeoSciML and EarthResourceML Update, 2012

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI)

    2012-12-01

    CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS rdf or html representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.

  20. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries, significantly decreasing user access time by an average factor of 2.3, access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URL's), access to non-Wide Area Information Server (WAIS) databases, and compatibility with the Z39-50.3 protocol.
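
    The parallel-query idea above (fan one search out to several report databases at once and merge whatever returns) can be sketched generically in Python as follows; the endpoint URLs are placeholders, not the actual NTRS back ends.

```python
# Generic sketch of parallel database queries; endpoint URLs are placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.parse import quote
from urllib.request import urlopen

ENDPOINTS = [
    "https://reports.example.org/centerA/search?q=",
    "https://reports.example.org/centerB/search?q=",
    "https://reports.example.org/centerC/search?q=",
]

def query(endpoint, term, timeout=10):
    with urlopen(endpoint + quote(term), timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def parallel_search(term):
    results = {}
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        futures = {pool.submit(query, ep, term): ep for ep in ENDPOINTS}
        for fut in as_completed(futures):
            ep = futures[fut]
            try:
                results[ep] = fut.result()
            except OSError as exc:      # a slow or unreachable database
                results[ep] = f"error: {exc}"
    return results

if __name__ == "__main__":
    for endpoint, body in parallel_search("wind tunnel").items():
        print(endpoint, len(body), "bytes")
```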

  1. An Assessment of Information Literacy Instruction in Physics Curricula

    NASA Astrophysics Data System (ADS)

    Fosmire, Michael

    1999-10-01

    Although the information landscape in scientific and technical fields in general and physics in particular is becoming increasingly complicated, a survey of physics librarians found that information literacy instruction for undergraduate and graduate physics students is almost nonexistent. The rise of electronic (and often unrefereed) communication, through websites, electronic discussion lists, and eprint servers, has made the system of information dissemination even more complex. Despite this increased complexity, in physics curricula formal instruction on navigating and intelligently consuming information resources is minimal. The limited instruction that is done appears to be very pragmatic training on how to use specific resources and does not address issues of information literacy. Research literature indicates that students with immediate and concrete information needs (e.g., course assignments) are most receptive to information literacy instruction. Thus, faculty and librarians need to work together to co-ordinate instruction efforts in a way that is not currently being done, so students with information needs have the appropriate skills to fill those needs.

  2. 75 FR 70048 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... the Exchange in order that they may locate their electronic servers in close physical proximity to the... execution systems through the same order gateway regardless of whether the sender is co-located in the... scheduled at least 1 day in advance. Rack and Stack Installation of one $200 per server. server in User's...

  3. User-Level QoS-Adaptive Resource Management in Server End-Systems

    DTIC Science & Technology

    2003-05-01

    an experimental evaluation on a microkernel indicate that it achieves end-system overload protection and traffic prioritization, improves insulation... microkernel operating system. The rest of this paper is organized as follows: Section 2 elaborates on the notion of QoS used in this paper. It proposes a...enforcing QoS contracts and their resource allocation. The communication server has been implemented on the Open Group microkernel Mk7.2. It exports a socket

  4. 16 CFR 803.10 - Running of time.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... effected to the server maintained by the FTC for the purpose of receiving electronic filings. (iii) For... server or the date on which delivery of the attachments is effected to the designated offices as provided...

  5. Remote gaming on resource-constrained devices

    NASA Astrophysics Data System (ADS)

    Reza, Waazim; Kalva, Hari; Kaufman, Richard

    2010-08-01

    Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.

  6. Construction and application of Red5 cluster based on OpenStack

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqing; Song, Jianxin

    2017-08-01

    With the application and development of cloud computing technology in various fields, the resource utilization of data centers has improved markedly, and systems built on cloud computing platforms have gained in scalability and stability. Deployed in the traditional way, a Red5 cluster has low resource utilization and poor stability. This paper exploits the efficient resource allocation of cloud computing to build a Red5 server cluster based on OpenStack, to which multimedia applications can be published. The system achieves flexible provisioning of computing resources while greatly improving cluster stability and service efficiency.

  7. Digital contract approach for consistent and predictable multimedia information delivery in electronic commerce

    NASA Astrophysics Data System (ADS)

    Konana, Prabhudev; Gupta, Alok; Whinston, Andrew B.

    1997-01-01

    A pure 'technological' solution to network quality problems is incomplete since any benefits from new technologies are offset by the demand from exponentially growing electronic commerce and data-intensive applications. Since an economic paradigm is implicit in electronic commerce, we propose a 'market-system' approach to improve quality of service. Quality of service for digital products takes on a different meaning since users view quality of service differently and value information differently. We propose a framework for electronic commerce that is based on an economic paradigm and mass-customization, and works as a wide-area distributed management system. In our framework, surrogate servers act as intermediaries between information providers and end-users, and arrange for consistent and predictable information delivery through 'digital contracts.' These contracts are negotiated and priced based on economic principles. Surrogate servers pre-fetch, through replication, information from many different servers and consolidate it based on demand expectations. In order to recognize users' requirements and process requests accordingly, real-time databases are central to our framework. We also propose that multimedia information be separated into slowly changing and rapidly changing data streams to improve response time requirements. Surrogate servers perform the task of integrating these data streams, which is transparent to end-users.

  8. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite involves four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the TDI-CCD electronics plus a re-sampling process, and 4) data integration. Processes 1) to 3) use data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require substantial CPU power. Even on an Intel Xeon X5550 processor, the conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 * 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF [1], which uses a client/server (C/S) architecture and harnesses free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to that free computing capacity, delivering HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%, and adding more nodes to the computing network decreased the time further. In conclusion, this framework can provide effectively unlimited computation capacity, provided that the network and the task management server can accommodate it, and offers a new HPC solution for TDI-CCD imaging simulation and similar applications.

  9. The Journal of Earth System Science Education: Peer Review for Digital Earth and Digital Library Content

    NASA Astrophysics Data System (ADS)

    Johnson, D.; Ruzek, M.; Weatherley, J.

    2001-05-01

    The Journal of Earth System Science Education is a new interdisciplinary electronic journal aiming to foster the study of the Earth as a system and promote the development and exchange of interdisciplinary learning resources for formal and informal education. JESSE will serve educators and students by publishing and providing ready electronic access to Earth system and global change science learning resources for the classroom and will provide authors and creators with professional recognition through publication in a peer reviewed journal. JESSE resources foster a world perspective by emphasizing interdisciplinary studies and bridging disciplines in the context of the Earth system. The Journal will publish a wide ranging variety of electronic content, with minimal constraints on format, targeting undergraduate educators and students as the principal readership, expanding to a middle and high school audience as the journal matures. JESSE aims for rapid review and turn-around of resources to be published, with a goal of 12 weeks from submission to publication for resources requiring few changes. Initial publication will be on a quarterly basis until a flow of resource submissions is established to warrant continuous electronic publication. JESSE employs an open peer review process in which authors and reviewers discuss directly the acceptability of a resource for publication using a software tool called the Digital Document Discourse Environment. Reviewer comments and attribution will be available with the resource upon acceptance for publication. JESSE will also implement a moderated peer commentary capability where readers can comment on the use of a resource or make suggestions. In the development phase, JESSE will also conduct a parallel anonymous review of content to validate and ensure credibility of the open review approach. Copyright of materials submitted remains with the author, granting JESSE the non-exclusive right to maintain a copy of the resource published on the JESSE web server, ensuring long term access to the resource as reviewed. JESSE is collaborating with the Digital Library for Earth System Education (DLESE) as a federated partner. Initial release is planned for Summer, 2001.

  10. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between receivers depending on their network access, service provider, and nationality; the QoS also varies over time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams 'to order' and 'just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed such that commonly used components for media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to a resource-optimal presentation that is especially useful for presenting complex multimedia documents on a variety of output devices.

  11. Library Web Proxy Use Survey Results.

    ERIC Educational Resources Information Center

    Murray, Peter E.

    2001-01-01

    Outlines the use of proxy Web servers by libraries and reports on a survey on their use in libraries. Highlights include proxy use for remote resource access, for filtering, for bandwidth conservation, and for gathering statistics; privacy policies regarding the use of proxy server log files; and a copy of the survey. (LRW)

  12. Plotting a New Course for Metasearch

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2005-01-01

    Today's world demands an expansive search environment. The universe of information resources is immense and is growing rapidly. The content needed for research and scholarship is dispersed among publishers, aggregators, repositories, library catalogs, e-print servers, and servers throughout the Web. Users do not want to jump from one interface to…

  13. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39-50.3 protocol; and a streamlined user interface.

  14. Automatic Thermal Infrared Panoramic Imaging Sensor

    DTIC Science & Technology

    2006-11-01

    hibernation, in which power supply to the server computer , the wireless network hardware, the GPS receiver, and the electronic compass / tilt sensor...prototype. At the operator’s command on the client laptop, the receiver wakeup device on the server side will switch on the ATX power supply at the...server, to resume the power supply to all the APTIS components. The embedded computer will resume all of the functions it was performing when put

  15. Glossary of Internet Terms.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management, 1995

    1995-01-01

    Provides definitions for 71 terms related to the Internet, including Archie, bulletin board system, cyberspace, e-mail (electronic mail), file transfer protocol, gopher, hypertext, integrated services digital network, local area network, listserv, modem, packet switching, server, telnet, UNIX, WAIS (wide area information servers), and World Wide…

  16. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures such as raising cooling set points and better airflow management, to more involved but cost-effective measures including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirement, and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation, and the implementation of energy efficiency measures in small server rooms.
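
    PUE is the ratio of total facility energy to IT equipment energy, so the survey's range of 1.5 to 2.1 translates directly into avoidable overhead. The quick estimate below is purely illustrative: the assumed IT load and electricity price are not from the survey.

```python
# PUE = total facility energy / IT equipment energy.
# The IT load and electricity price below are illustrative assumptions.
def annual_facility_kwh(it_load_kw, pue, hours=8760):
    return it_load_kw * pue * hours

it_load_kw = 10.0        # assumed average IT load of a small server room
price_per_kwh = 0.12     # assumed electricity price, $/kWh

before = annual_facility_kwh(it_load_kw, pue=2.1)
after = annual_facility_kwh(it_load_kw, pue=1.5)
print(f"annual savings: {before - after:,.0f} kWh "
      f"(~${(before - after) * price_per_kwh:,.0f})")
```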

  17. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.

  18. ELM server: a new resource for investigating short functional sites in modular eukaryotic proteins

    PubMed Central

    Puntervoll, Pål; Linding, Rune; Gemünd, Christine; Chabanis-Davidson, Sophie; Mattingsdal, Morten; Cameron, Scott; Martin, David M. A.; Ausiello, Gabriele; Brannetti, Barbara; Costantini, Anna; Ferrè, Fabrizio; Maselli, Vincenza; Via, Allegra; Cesareni, Gianni; Diella, Francesca; Superti-Furga, Giulio; Wyrwicz, Lucjan; Ramu, Chenna; McGuigan, Caroline; Gudavalli, Rambabu; Letunic, Ivica; Bork, Peer; Rychlewski, Leszek; Küster, Bernhard; Helmer-Citterich, Manuela; Hunter, William N.; Aasland, Rein; Gibson, Toby J.

    2003-01-01

    Multidomain proteins predominate in eukaryotic proteomes. Individual functions assigned to different sequence segments combine to create a complex function for the whole protein. While on-line resources are available for revealing globular domains in sequences, there has hitherto been no comprehensive collection of small functional sites/motifs comparable to the globular domain resources, yet these are as important for the function of multidomain proteins. Short linear peptide motifs are used for cell compartment targeting, protein–protein interaction, regulation by phosphorylation, acetylation, glycosylation and a host of other post-translational modifications. ELM, the Eukaryotic Linear Motif server at http://elm.eu.org/, is a new bioinformatics resource for investigating candidate short non-globular functional motifs in eukaryotic proteins, aiming to fill the void in bioinformatics tools. Sequence comparisons with short motifs are difficult to evaluate because the usual significance assessments are inappropriate. Therefore the server is implemented with several logical filters to eliminate false positives. Current filters are for cell compartment, globular domain clash and taxonomic range. In favourable cases, the filters can reduce the number of retained matches by an order of magnitude or more. PMID:12824381

  19. File Server-Based CD-ROM Networking: Using SCSI Express.

    ERIC Educational Resources Information Center

    McQueen, Howard

    1992-01-01

    Provides guidelines for evaluating SCSI Express Novell 386, a new product allowing CD-ROM drives to be attached to a Netware 3.11 file server, increasing CD-ROM networking capability. Specific limitations concerning software, hardware, and human resources are outlined, as well as its unique features and potential for future networking uses. (EA)

  20. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  1. Server-Controlled Identity-Based Authenticated Key Exchange

    NASA Astrophysics Data System (ADS)

    Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun

    We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses the semantic security and demonstrates several highly-desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.

  2. Autoplot and the HAPI Server

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2016-12-01

    Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and from a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients or publish a specification document in hopes that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, removing these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use and how data servers might implement the HAPI specification to provide access to their data. This will also include instructions on how Autoplot is installed on desktop computers and used to view data from the RBSP, Juno, and other missions.
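
    The point of HAPI is that any client can fetch time series with one request pattern. The sketch below shows such a request in Python; the server base URL and dataset id are placeholders, and the parameter names (id, time.min, time.max, format) follow the HAPI 2.x convention as an assumption, so check the target server's spec version before reuse.

```python
# Sketch of fetching a time-series slice from a HAPI-compliant server.
# Base URL and dataset id are placeholders; parameter names assume HAPI 2.x.
import csv
import io
from urllib.parse import urlencode
from urllib.request import urlopen

SERVER = "https://hapi.example.org/hapi"   # placeholder HAPI endpoint
params = {
    "id": "EXAMPLE_DATASET",
    "time.min": "2016-01-01T00:00:00Z",
    "time.max": "2016-01-02T00:00:00Z",
    "format": "csv",
}

with urlopen(f"{SERVER}/data?{urlencode(params)}", timeout=30) as resp:
    rows = list(csv.reader(io.StringIO(resp.read().decode("utf-8"))))

print(f"received {len(rows)} records; first record: {rows[0] if rows else 'none'}")
```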

  3. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs) to collect and summarize key information from system-wide data sources could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of server collection failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from high-throughput data streams in real time in a dashboard viewer and compared the phases before and after MoMs deployment to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.

  4. Hybrid Rendering with Scheduling under Uncertainty

    PubMed Central

    Tamm, Georg; Krüger, Jens

    2014-01-01

    As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is put at the whim of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system. PMID:25309115
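
    One simple way to picture workload shifting of this kind is a per-frame chooser driven by running estimates of server-path and client-path times. The sketch below is an illustrative exponentially weighted moving-average scheduler, not the authors' probabilistic scheduling algorithm.

```python
# Illustrative scheduler (not the paper's algorithm): keep EWMA estimates of
# recent server-path and client-path frame times and send each new frame to
# whichever path currently looks faster.
class HybridScheduler:
    def __init__(self, alpha=0.3, initial_ms=50.0):
        self.alpha = alpha
        self.estimate = {"server": initial_ms, "client": initial_ms}

    def choose(self):
        # Pick the side with the lower estimated time-to-image.
        return min(self.estimate, key=self.estimate.get)

    def record(self, side, observed_ms):
        # Blend the new observation into the running estimate.
        old = self.estimate[side]
        self.estimate[side] = (1 - self.alpha) * old + self.alpha * observed_ms

sched = HybridScheduler()
for side, observed in [("server", 80), ("client", 45), ("server", 120), ("client", 50)]:
    sched.record(side, observed)
    print(f"next frame goes to: {sched.choose()}  estimates={sched.estimate}")
```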

  5. The designing and implementation of PE teaching information resource database based on broadband network

    NASA Astrophysics Data System (ADS)

    Wang, Jian

    2017-01-01

    In order to change traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on broadband network is designed and PE teaching information resource database is set up. The designing of PE teaching information resource database takes Windows NT 4/2000Server as operating system platform, Microsoft SQL Server 7.0 as RDBMS, and takes NAS technology for data storage and flow technology for video service. The analysis of system designing and implementation shows that the dynamic PE teaching information resource sharing platform based on Web Service can realize loose coupling collaboration, realize dynamic integration and active integration and has good integration, openness and encapsulation. The distance PE teaching platform based on Web Service and the design scheme of PE teaching information resource database can effectively solve and realize the interconnection, interworking and sharing of PE teaching resources and adapt to the informatization development demands of PE teaching.

  6. Multi-server blind quantum computation over collective-noise channels

    NASA Astrophysics Data System (ADS)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation task to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also keep clients as classical as possible and tolerate faults from a nonideal quantum channel. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost classical client, meet the correctness and blindness requirements of a BQC protocol, and are practical BQC protocols.

  7. An accurate and efficient method to predict the electronic excitation energies of BODIPY fluorescent dyes.

    PubMed

    Wang, Jia-Nan; Jin, Jun-Ling; Geng, Yun; Sun, Shi-Ling; Xu, Hong-Liang; Lu, Ying-Hua; Su, Zhong-Min

    2013-03-15

    Recently, the extreme learning machine neural network (ELMNN) has been proposed as a valid computing method and used successfully to predict nonlinear optical properties (Wang et al., J. Comput. Chem. 2012, 33, 231). In this work, we first follow this line of work to predict electronic excitation energies using the ELMNN method. Significantly, the root mean square deviation between the predicted and experimental electronic excitation energies of 90 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY) derivatives has been reduced to 0.13 eV. Second, four groups of molecular descriptors are considered when building the computing models. The results show that the quantum chemical descriptors have the closest intrinsic relation to the electronic excitation energy values. Finally, a user-friendly web server (EEEBPre: Prediction of electronic excitation energies for BODIPY dyes), which is freely accessible to the public at http://202.198.129.218, has been built for prediction. This web server returns predicted electronic excitation energy values of BODIPY dyes that are highly consistent with the experimental values. We hope that this web server will be helpful to theoretical and experimental chemists in related research. Copyright © 2012 Wiley Periodicals, Inc.
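
    For readers unfamiliar with extreme learning machines: the hidden-layer weights are set randomly and only the output weights are fitted by least squares. The NumPy sketch below shows that generic idea on synthetic data; it is not the authors' ELMNN model or their descriptor set.

```python
# Generic extreme-learning-machine regression sketch in NumPy.
# The random data stands in for molecular descriptors; not the authors' model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))                             # 90 "molecules", 8 descriptors
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=90)   # synthetic target values

n_hidden = 40
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)

# Only the output weights are fitted, by least squares on the hidden layer.
beta, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)

y_hat = hidden(X) @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"training RMSE: {rmse:.3f}")
```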

  8. An Autonomous Mobile Agent-Based Distributed Learning Architecture: A Proposal and Analytical Analysis

    ERIC Educational Resources Information Center

    Ahmed, Iftikhar; Sadeq, Muhammad Jafar

    2006-01-01

    Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…

  9. Bridging the Gap between HL7 CDA and HL7 FHIR: A JSON Based Mapping.

    PubMed

    Rinner, Christoph; Duftschmid, Georg

    2016-01-01

    The Austrian electronic health record (EHR) system ELGA went live in December 2016. It is a document oriented EHR system and is based on the HL7 Clinical Document Architecture (CDA). The HL7 Fast Healthcare Interoperability Resources (FHIR) is a relatively new standard that combines the advantages of HL7 messages and CDA Documents. In order to offer easier access to information stored in ELGA we present a method based on adapted FHIR resources to map CDA documents to FHIR resources. A proof-of-concept tool using Java, the open-source FHIR framework HAPI-FHIR and publicly available FHIR servers was created to evaluate the presented mapping. In contrast to other approaches the close resemblance of the mapping file to the FHIR specification allows existing FHIR infrastructure to be reused. In order to reduce information overload and facilitate the access to CDA documents, FHIR could offer a standardized way to query CDA data on a fine granular base in Austria.
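
    To illustrate the end point of such a mapping, the sketch below builds one minimal FHIR Observation (the kind of fine-granular item a CDA entry might map to) and POSTs it through the standard FHIR REST interface. The server base URL is a placeholder, and this is not the authors' JSON mapping file or their HAPI-FHIR (Java) tool.

```python
# Minimal sketch: turn one CDA-style value into a FHIR Observation and POST it
# via the standard FHIR REST API.  The base URL is a placeholder test server.
import json
from urllib.request import Request, urlopen

FHIR_BASE = "https://fhir.example.org/baseR4"   # placeholder FHIR endpoint

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "8867-4", "display": "Heart rate"}]},
    "valueQuantity": {"value": 72, "unit": "/min",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
}

req = Request(
    f"{FHIR_BASE}/Observation",
    data=json.dumps(observation).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
with urlopen(req, timeout=30) as resp:
    print("created:", resp.headers.get("Location"))
```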

  10. Beyond Electronic Forms: E-Mail as an Institution-Wide Information Server.

    ERIC Educational Resources Information Center

    Jacobson, Carl

    1992-01-01

    The University of Delaware developed an intelligent mail server to provide easy, inexpensive access to institutional information for faculty, staff, and students on any node, machine, or operating system on the campuswide computing network. Security concerns have been addressed. The small investment has returned immediate benefits. (MSE)

  11. Electronic Transfer of School Records.

    ERIC Educational Resources Information Center

    Yeagley, Raymond

    2001-01-01

    Describes the electronic transfer of student records, notably the use of a Web-server named CHARLOTTE sponsored by the National Forum on Education Statistics and an Electronic Data Exchange system named SPEEDE/ExPRESS. (PKP)

  12. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  13. Characteristics and evolution of the ecosystem of software tools supporting research in molecular biology.

    PubMed

    Pazos, Florencio; Chagoyen, Monica

    2018-01-16

    Daily work in molecular biology presently depends on a large number of computational tools. An in-depth, large-scale study of that 'ecosystem' of Web tools, its characteristics, interconnectivity, patterns of usage/citation, temporal evolution and rate of decay is crucial for understanding the forces that shape it and for informing initiatives aimed at its funding, long-term maintenance and improvement. In particular, the long-term maintenance of these tools is compromised because of their specific development model. Hundreds of published studies become irreproducible de facto, as the software tools used to conduct them become unavailable. In this study, we present a large-scale survey of >5400 publications describing Web servers within the two main bibliographic resources for disseminating new software developments in molecular biology. For all these servers, we studied their citation patterns, the subjects they address, their citation networks and the temporal evolution of these factors. We also analysed how these factors affect the availability of these servers (whether they are alive). Our results show that this ecosystem of tools is highly interconnected and adapts to the 'trendy' subjects in every moment. The servers present characteristic temporal patterns of citation/usage, and there is a worrying rate of server 'death', which is influenced by factors such as the server's popularity and the institution that hosts it. These results can inform initiatives aimed at the long-term maintenance of these resources. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Assessment of feasibility of running RSNA's MIRC on a Raspberry Pi: a cost-effective solution for teaching files in radiology.

    PubMed

    Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E

    2015-11-01

    The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant and cost-effective solutions are desirable. We chose Medical Imaging Resource Center (MIRC) to build our teaching file. It is offered by RSNA for free. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Pi Foundation: a small control board developed as a low-cost computer for schools, also used in alternative projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of software was smooth. The Raspberry Pi was able to handle very well the task of hosting the teaching file repository for our division. Uptime was logged at 100 %, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost control board (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.

  15. Navigating 3D electron microscopy maps with EM-SURFER.

    PubMed

    Esquivel-Rodríguez, Juan; Xiong, Yi; Han, Xusi; Guang, Shuomeng; Christoffer, Charles; Kihara, Daisuke

    2015-05-30

    The Electron Microscopy DataBank (EMDB) is growing rapidly, accumulating biological structural data obtained mainly by electron microscopy and tomography, which are emerging techniques for determining large biomolecular complex and subcellular structures. Together with the Protein Data Bank (PDB), EMDB is becoming a fundamental resource of the tertiary structures of biological macromolecules. To take full advantage of this indispensable resource, the ability to search the database by structural similarity is essential. However, unlike high-resolution structures stored in PDB, methods for comparing low-resolution electron microscopy (EM) density maps in EMDB are not well established. We developed a computational method for efficiently searching low-resolution EM maps. The method uses a compact fingerprint representation of EM maps based on the 3D Zernike descriptor, which is derived from a mathematical series expansion for EM maps that are considered as 3D functions. The method is implemented in a web server named EM-SURFER, which allows users to search against the entire EMDB in real-time. EM-SURFER compares the global shapes of EM maps. Examples of search results from different types of query structures are discussed. We developed EM-SURFER, which retrieves structurally relevant matches for query EM maps from EMDB within seconds. The unique capability of EM-SURFER to detect 3D shape similarity of low-resolution EM maps should prove invaluable in structural biology.

  16. Methodology to model the energy and greenhouse gas emissions of electronic software distributions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2012-01-17

    A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. In order to counteract the use of high level, top-down modeling efforts, and to increase result accuracy, a focus upon device details and data routes was taken. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides ESD and physical distribution options. The ESD method included the calculation of power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO(2)e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model which revealed potential CO(2)e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.

  17. Integration of multiple DICOM Web servers into an enterprise-wide Web-based electronic medical record

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.

    1999-07-01

    The purpose of this paper is to integrate multiple DICOM image web servers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository (MIND), a distributed relational database combining information from multiple departmental databases. A character cell-based view of this data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1252. DICOM images from multiple web servers are now being viewed through the MINDscape electronic medical record.

  18. A radiology department intranet: development and applications.

    PubMed

    Willing, S J; Berland, L L

    1999-01-01

    An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.

  19. Quality of service policy control in virtual private networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru

    2004-04-01

    This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination and grade of service, and advertises these prices to its VPN customers (users). Because different QoS technologies can produce different QoS, there are correspondingly different traffic classification rules and priority rules. The internet service provider (ISP) may need to build complex mechanisms separately for each node. In order to reduce the burden of network configuration, we need to design policy control technologies. We consider mainly the directory server, policy server, policy manager and policy enforcers. The policy decision point (PDP) decides its control actions according to policy rules. In the network, the policy enforcement point (PEP) acts on its network controlled unit. For IntServ and DiffServ, we adopt different policy control methods as follows: (1) in IntServ, traffic uses the resource reservation protocol (RSVP) to guarantee the network resources; (2) in DiffServ, the policy server controls the DiffServ code points and per-hop behavior (PHB), and its PDP distributes information to each network node. The policy server functions are: information searching; decision making; decision delivery; and auto-configuration. In order to prove the effectiveness of QoS policy control, we perform corresponding simulations.
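
    The division of labour between the policy server (PDP) and the enforcement points (PEPs) sketched above can be illustrated with a few lines of Python. This is a hedged sketch only: the class names, rule table and push mechanism are invented, although the DSCP values shown (46/EF, 34/AF41) are standard DiffServ examples.

        # Hypothetical illustration of a PDP holding class-to-DSCP rules and
        # distributing them to PEPs at each node ("decision delivery" / auto-configuration).

        POLICY_RULES = {
            # traffic class -> (DSCP code point, per-hop behaviour)
            "voice":       (46, "EF"),    # Expedited Forwarding
            "video":       (34, "AF41"),  # Assured Forwarding
            "best-effort": (0,  "BE"),
        }

        class PolicyEnforcementPoint:
            """A network node that applies the rules it receives from the PDP."""
            def __init__(self, node_id):
                self.node_id = node_id
                self.rules = {}

            def install(self, rules):
                self.rules = dict(rules)

            def classify(self, traffic_class):
                dscp, phb = self.rules.get(traffic_class, (0, "BE"))
                return dscp, phb

        class PolicyDecisionPoint:
            """The policy server: holds the rules and delivers them to each PEP."""
            def __init__(self, rules):
                self.rules = rules

            def distribute(self, peps):
                for pep in peps:
                    pep.install(self.rules)

        if __name__ == "__main__":
            peps = [PolicyEnforcementPoint(f"node-{i}") for i in range(3)]
            PolicyDecisionPoint(POLICY_RULES).distribute(peps)
            print(peps[0].classify("voice"))  # -> (46, 'EF')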

  20. Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.

    PubMed

    Lee, Chengming; Chen, Rongshun

    2015-05-20

    Recently, saving the cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization of the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumed by the fan speed control. The PIDNN with a time-domain criterion is used to tune all PID gains online and optimally. The proposed controller was validated through step-response experiments in which the server operated from the low to the high power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight temperature-response overshoot in the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed at low fan power consumption.
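
    For readers unfamiliar with the control loop involved, the following Python sketch shows a minimal discrete PID controller driving a crude, hypothetical thermal model of a server component. The gains, limits and plant constants are invented placeholders; the paper's online PIDNN-based tuning and fan-power optimization are not reproduced here.

        # Minimal discrete PID controller sketch for fan-speed control.
        # Gains and the thermal "plant" below are hypothetical placeholders.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measured):
                error = measured - setpoint          # positive when the component runs hot
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        if __name__ == "__main__":
            pid = PID(kp=4.0, ki=0.5, kd=1.0, dt=1.0)   # illustrative gains
            temperature = 70.0                           # degC, hypothetical start
            for _ in range(20):
                fan_duty = max(20.0, min(100.0, pid.update(setpoint=60.0, measured=temperature)))
                # crude hypothetical thermal model: more fan duty -> faster cooling
                temperature += 1.0 - 0.03 * fan_duty
                print(f"T={temperature:5.1f} degC  fan={fan_duty:5.1f} %")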

  1. 77 FR 40914 - In the Matter of Indiana Michigan Power Company, D. C. Cook Nuclear Power Plant; Confirmatory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-11

    ... Company met in an ADR session mediated by a professional mediator, arranged through Cornell University's... counsel or representative) to digitally sign documents and access the E-Submittal server for any... requirements for accessing the E-Submittal server are detailed in the NRC's ``Guidance for Electronic...

  2. CDC WONDER: a cooperative processing architecture for public health.

    PubMed Central

    Friede, A; Rosen, D H; Reid, J A

    1994-01-01

    CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813

  3. Image-based electronic patient records for secured collaborative medical applications.

    PubMed

    Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun

    2005-01-01

    We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: EPR DICOM gateway (EPR-GW), Image-based EPR repository server (EPR-Server), Web Server and EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, the security modules of Digital Signature and Authentication are integrated to perform the security processing on the EPR data with integrity and authenticity. The privacy of EPR in data communication and exchanging is provided by SSL/TLS-based secure communication. This presentation gave a new approach to create and manage image-based EPR from actual patient records, and also presented a way to use Web technology and DICOM standard to build an open architecture for collaborative medical applications.

  4. Net widens for funding of arXiv preprint server

    NASA Astrophysics Data System (ADS)

    Cartwright, Jon

    2010-03-01

    Librarians at Cornell University in the US want more external funding to support their popular arXiv preprint server because the running costs are now "beyond a single institution's resources". The open-access server, which received more than 60 000 new submissions last year and has about 400 000 registered users, is expanding so fast that the budget for staff and running costs is expected to rise by a fifth to $500 000 by 2012. Although most of the top 25 institutional users have already made financial commitments, Cornell now wants several hundred others to pledge support too.

  5. Lessons learned from a pilot implementation of the UMLS information sources map.

    PubMed

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this new environment, which are different from those for Gopher, wide area information servers (WAIS), World-Wide-Web (WWW), and MOSAIC resources.

  6. Migration of the Japanese healthcare enterprise from a financial to integrated management: strategy and architecture.

    PubMed

    Akiyama, M

    2001-01-01

    The Hospital Information System (HIS) has been positioned as the hub of the healthcare information management architecture. In Japan, the billing system assigns "insurance disease names" to performed exams based on the diagnosis type. Departmental systems provide localized, departmental services, such as order receipt and diagnostic reporting, but do not provide patient demographic information. The system above has many problems. The departmental systems' terminals and the HIS's terminals are not integrated. Duplicate data entry introduces errors and increases workloads. Order and exam data managed by the HIS can be sent to the billing system, but departmental data cannot usually be entered. Additionally, billing systems usually keep departmental data for only a short time before it is deleted. The billing system provides payment based on what is entered. The billing system is oriented towards diagnoses. Most importantly, the system is geared towards generating billing reports rather than towards providing high-quality patient care. The role of the application server is that of a mediator between system components. Data and events generated by system components are sent to the application server, which routes them to appropriate destinations. It also records all system events, including state changes to clinical data, access of clinical data and so on. Finally, the Resource Management System identifies all system resources available to the enterprise. The departmental systems are responsible for managing data and clinical processes at a departmental level. The client interacts with the system via the application server, which provides a general set of system-level functions. The system is implemented using the current technologies CORBA and HTTP. System data is collected by the application server and assembled into XML documents for delivery to clients. Clients can access these URLs using standard HTTP clients, since each department provides an HTTP-compliant web server. We have implemented an integrated system communicating via CORBA middleware, consisting of an application server, an endoscopy departmental server, a pathology departmental server and a wrappered legacy HIS. We have found this new approach solves the problems outlined earlier. It provides the services needed to ensure that data is never lost and is always available, that events that occur in the hospital are always captured, and that resources are managed and tracked effectively. Finally, it reduces costs, raises efficiency, increases the quality of patient care, and ultimately saves lives. We are now going to integrate all remaining hospital departments, and ultimately, all hospital functions.
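
    The mediator role of the application server, assembling departmental data into XML documents for HTTP clients, can be sketched as follows. This is an illustrative Python fragment only: the element and field names are invented, and the CORBA middleware used in the actual system is not reproduced.

        # Hypothetical sketch: collect departmental results and assemble them into
        # an XML document for delivery to an HTTP client.
        import xml.etree.ElementTree as ET

        def assemble_patient_document(patient_id, departmental_results):
            """departmental_results: list of (department, exam, finding) tuples."""
            root = ET.Element("PatientRecord", id=patient_id)
            for department, exam, finding in departmental_results:
                result = ET.SubElement(root, "Result", department=department)
                ET.SubElement(result, "Exam").text = exam
                ET.SubElement(result, "Finding").text = finding
            return ET.tostring(root, encoding="unicode")

        if __name__ == "__main__":
            xml_doc = assemble_patient_document(
                "P001",
                [("endoscopy", "gastroscopy", "no abnormality"),
                 ("pathology", "biopsy", "pending")])
            print(xml_doc)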

  7. The Most Popular Astronomical Web Server in China

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Zhao, Yongheng

    Affected by the continuing depression of the IT economy, free homepage space is becoming less and less available. It is more and more difficult to construct websites for amateur astronomers who cannot pay for commercial space. Last May, with the support of the Chinese National Astronomical Observatory and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope project, we set up a special web server (amateur.lamost.org) to provide free, large, stable and advertisement-free homepage space to Chinese amateur astronomers and non-professional organizations. After only one year, more than 80 websites are hosted on the server. More than 10,000 visitors from nearly 40 countries visit the server, and the amount of data they download exceeds 4 gigabytes per day. The server has become the most popular amateur astronomical web server in China, and it stores the most abundant Chinese amateur astronomical resources. Because of this success, our service has been drawing tremendous attention from related institutions. Recently, the Chinese National Natural Science Foundation has shown great interest in supporting the service. In this paper, the origin of the idea, the construction of the server, its present utilization and our future plans are introduced.

  8. 21 CFR 1305.27 - Preservation of electronic orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 9 2013-04-01 2013-04-01 false Preservation of electronic orders. 1305.27 Section... II CONTROLLED SUBSTANCES Electronic Orders § 1305.27 Preservation of electronic orders. (a) A... two years. (c) If electronic order records are maintained on a central server, the records must be...

  9. 21 CFR 1305.27 - Preservation of electronic orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 9 2012-04-01 2012-04-01 false Preservation of electronic orders. 1305.27 Section... II CONTROLLED SUBSTANCES Electronic Orders § 1305.27 Preservation of electronic orders. (a) A... two years. (c) If electronic order records are maintained on a central server, the records must be...

  10. 21 CFR 1305.27 - Preservation of electronic orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 9 2011-04-01 2011-04-01 false Preservation of electronic orders. 1305.27 Section... II CONTROLLED SUBSTANCES Electronic Orders § 1305.27 Preservation of electronic orders. (a) A... two years. (c) If electronic order records are maintained on a central server, the records must be...

  11. 21 CFR 1305.27 - Preservation of electronic orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 9 2014-04-01 2014-04-01 false Preservation of electronic orders. 1305.27 Section... II CONTROLLED SUBSTANCES Electronic Orders § 1305.27 Preservation of electronic orders. (a) A... two years. (c) If electronic order records are maintained on a central server, the records must be...

  12. 21 CFR 1305.27 - Preservation of electronic orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Preservation of electronic orders. 1305.27 Section... II CONTROLLED SUBSTANCES Electronic Orders § 1305.27 Preservation of electronic orders. (a) A... two years. (c) If electronic order records are maintained on a central server, the records must be...

  13. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath

    PubMed Central

    Ellisman, M.; Hutton, T.; Kirkland, A.; Lin, A.; Lin, C.; Molina, T.; Peltier, S.; Singh, R.; Tang, K.; Trefethen, A.E.; Wallom, D.C.H.; Xiong, X.

    2009-01-01

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients. PMID:19487201

  14. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath.

    PubMed

    Ellisman, M; Hutton, T; Kirkland, A; Lin, A; Lin, C; Molina, T; Peltier, S; Singh, R; Tang, K; Trefethen, A E; Wallom, D C H; Xiong, X

    2009-07-13

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients.

  15. Computational resources for ribosome profiling: from database to Web server and software.

    PubMed

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-05-01

    The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT) managing the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and other tools. In this paper, we describe the software architecture and components of the data grid system, and use a practical example in support of storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.

  17. SEED Servers: High-Performance Access to the SEED Genomes, Annotations, and Metabolic Models

    PubMed Central

    Aziz, Ramy K.; Devoid, Scott; Disz, Terrence; Edwards, Robert A.; Henry, Christopher S.; Olsen, Gary J.; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D.; Stevens, Rick L.; Vonstein, Veronika; Xia, Fangfang

    2012-01-01

    The remarkable advance in sequencing technology and the rising interest in medical and environmental microbiology, biotechnology, and synthetic biology resulted in a deluge of published microbial genomes. Yet, genome annotation, comparison, and modeling remain a major bottleneck to the translation of sequence information into biological knowledge, hence computational analysis tools are continuously being developed for rapid genome annotation and interpretation. Among the earliest, most comprehensive resources for prokaryotic genome analysis, the SEED project, initiated in 2003 as an integration of genomic data and analysis tools, now contains >5,000 complete genomes, a constantly updated set of curated annotations embodied in a large and growing collection of encoded subsystems, a derived set of protein families, and hundreds of genome-scale metabolic models. Until recently, however, maintaining current copies of the SEED code and data at remote locations has been a pressing issue. To allow high-performance remote access to the SEED database, we developed the SEED Servers (http://www.theseed.org/servers): four network-based servers intended to expose the data in the underlying relational database, support basic annotation services, offer programmatic access to the capabilities of the RAST annotation server, and provide access to a growing collection of metabolic models that support flux balance analysis. The SEED servers offer open access to regularly updated data, the ability to annotate prokaryotic genomes, the ability to create metabolic reconstructions and detailed models of metabolism, and access to hundreds of existing metabolic models. This work offers and supports a framework upon which other groups can build independent research efforts. Large integrations of genomic data represent one of the major intellectual resources driving research in biology, and programmatic access to the SEED data will provide significant utility to a broad collection of potential users. PMID:23110173

  18. 78 FR 77750 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... electronic servers in close physical proximity to the Exchange's trading and execution system. See id. at 59299. Partial Cabinets A User is able to request a physical cabinet to house its servers and other... Exchange enter the Exchange's trading and execution systems through the same order gateway, regardless of...

  19. 75 FR 69722 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ... premises controlled by the Exchange in order that they may locate their electronic servers in close... the Exchange's trading and execution systems through the same order gateway regardless of whether the... weekends if NOT scheduled at least 1 day in advance. Rack and Stack Installation of one $200 per server...

  20. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

    Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at , developer tools at , and a portal to third-party ontologies at (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs. PMID:19775460

  1. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources.

    PubMed

    Lim, Jeongheui; Kim, Sang-Yoon; Kim, Sungmin; Eo, Hae-Seok; Kim, Chang-Bae; Paek, Woon Kee; Kim, Won; Bhak, Jong

    2009-12-03

    DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. We have built a general DNA barcode data processing system, BioBarcode, with open source software - which is a general purpose database and server. It uses mySQL RDBMS 5.0, BLAST2, and Apache httpd server. An exemplary database of BioBarcode has around 11,300 specimen entries (including GenBank data) and registers the biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance in DNA sequence analyses. Asia has a very high degree of biodiversity and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. The BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system http://www.asianbarcode.org.

  2. Implementation experience of a patient monitoring solution based on end-to-end standards.

    PubMed

    Martinez, I; Fernandez, J; Galarraga, M; Serrano, L; de Toledo, P; Escayola, J; Jimenez-Fernandez, S; Led, S; Martinez-Espronceda, M; Garcia, J

    2007-01-01

    This paper presents a proof-of-concept design of a patient monitoring solution for Intensive Care Unit (ICU). It is end-to-end standards-based, using ISO/IEEE 11073 (X73) in the bedside environment and EN13606 to communicate the information to an Electronic Healthcare Record (EHR) server. At the bedside end a plug-and-play sensor network is implemented, which communicates with a gateway that collects the medical information and sends it to a monitoring server. At this point the server transforms the data frame into an EN13606 extract, to be stored on the EHR server. The presented system has been tested in a laboratory environment to demonstrate the feasibility of this end-to-end standards-based solution.
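
    The end-to-end flow described above (bedside sensors to gateway to EHR server) can be caricatured with a short Python sketch. It is hypothetical throughout: the JSON payload, URL and field names are invented, and real ISO/IEEE 11073 framing and EN13606 extracts are far richer than this.

        # Hypothetical gateway step: forward one bedside observation to an EHR endpoint.
        import json
        import urllib.request

        def forward_observation(ehr_url, patient_id, parameter, value, unit):
            payload = json.dumps({
                "patient": patient_id,
                "parameter": parameter,   # e.g. heart rate from a bedside monitor
                "value": value,
                "unit": unit,
            }).encode()
            request = urllib.request.Request(
                ehr_url, data=payload,
                headers={"Content-Type": "application/json"}, method="POST")
            with urllib.request.urlopen(request) as response:   # hypothetical endpoint
                return response.status

        if __name__ == "__main__":
            # Would require a running server at this (hypothetical) address.
            print(forward_observation("http://ehr.example.org/extracts", "P001",
                                      "heart_rate", 72, "bpm"))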

  3. Electronic Mail for Personal Computers: Development Issues.

    ERIC Educational Resources Information Center

    Tomer, Christinger

    1994-01-01

    Examines competing, commercially developed electronic mail programs and how these technologies will affect the functionality and quality of electronic mail. How new standards for client-server mail systems are likely to enhance messaging capabilities and the use of electronic mail for information retrieval are considered. (Contains eight…

  4. iELM—a web server to explore short linear motif-mediated interactions

    PubMed Central

    Weatheritt, Robert J.; Jehl, Peter; Dinkel, Holger; Gibson, Toby J.

    2012-01-01

    The recent expansion in our knowledge of protein–protein interactions (PPIs) has allowed the annotation and prediction of hundreds of thousands of interactions. However, the function of many of these interactions remains elusive. The interactions of Eukaryotic Linear Motif (iELM) web server provides a resource for predicting the function and positional interface for a subset of interactions mediated by short linear motifs (SLiMs). The iELM prediction algorithm is based on the annotated SLiM classes from the Eukaryotic Linear Motif (ELM) resource and allows users to explore both annotated and user-generated PPI networks for SLiM-mediated interactions. By incorporating the annotated information from the ELM resource, iELM provides functional details of PPIs. This can be used in proteomic analysis, for example, to infer whether an interaction promotes complex formation or degradation. Furthermore, details of the molecular interface of the SLiM-mediated interactions are also predicted. This information is displayed in a fully searchable table, as well as graphically with the modular architecture of the participating proteins extracted from the UniProt and Phospho.ELM resources. A network figure is also presented to aid the interpretation of results. The iELM server supports single protein queries as well as large-scale proteomic submissions and is freely available at http://i.elm.eu.org. PMID:22638578

  5. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene

    LigSearch is a web server for identifying ligands likely to bind to a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. LigSearch, a web server aimed at predicting ligands that might bind to and stabilize a given protein, has been developed. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  6. A first evaluation of a pedagogical network for medical students at the University Hospital of Rennes.

    PubMed

    Fresnel, A; Jarno, P; Burgun, A; Delamarre, D; Denier, P; Cleret, M; Courtin, C; Seka, L P; Pouliquen, B; Cléran, L; Riou, C; Leduff, F; Lesaux, H; Duvauferrier, R; Le Beux, P

    1998-01-01

    A pedagogical network has been developed at the University Hospital of Rennes since 1996. The challenge is to give medical information and informatics tools to all medical students in the clinical wards of the University Hospital. At first, nine wards were connected to the medical school server, which is linked to the Internet. Client software consists of electronic mail and the WWW browser Netscape on Macintosh computers. Server software is set up on a Unix SUN machine providing a local homepage with selected pedagogical resources. These documents are stored in an ORACLE DBMS database and queries can be made by specialty, author or disease. The students can access a set of interactive teaching programs or electronic textbooks and can explore the Internet through the library information system and search engines. The teachers can submit URLs and indexing of pedagogical documents and can produce clinical cases: the database updating will be done by the users. This experience of using Web tools generated enthusiasm when we first introduced it to students. The evaluation shows that if the students can use this training early on, they will adapt the resources of the Internet to their own needs.

  7. Teaching a laboratory-intensive online introductory electronics course*

    NASA Astrophysics Data System (ADS)

    Markes, Mark

    2008-03-01

    Most current online courses provide little or no hands-on laboratory content. This talk will describe the development and initial experiences with presenting an introductory online electronics course with significant hands-on laboratory content. The course is delivered using a Linux-based Apache web server, a Darwin Streaming Server, a SMART Board interactive whiteboard, SMART Notebook software and a video camcorder. The laboratory uses primarily the Global Specialties PB-505 trainer and a Tenma 20 MHz oscilloscope that are provided to the students for the duration of the course and then returned. Testing is performed using the Blackboard course management software.

  8. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing remote access of users to a central database. This authoring environment allows radiologists to create reference databases and collection of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections in the RSNA web network dedicated to teaching files. This software could be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several images collections were created with this tool, including thoracic imaging that was subsequently made available on a CD-Rom and on our web site and through the MIRC network for public access.

  9. 106-17 Telemetry Management Resources Chapter 25

    DTIC Science & Technology

    2017-07-01

    aspects of the TmNS system. There are two primary protocols for accessing the management resources: Simple Network Management Protocol (SNMP) and... management resources as well as basic HTTP clients and servers for a more RESTful approach to system management. Both tools are available from the... Telemetry Standards, RCC Standard 106-17, Chapter 25, July 2017.

  10. Telnet Cornucopia: Interactive Access to Information Resource Collections.

    ERIC Educational Resources Information Center

    Harris, Judi

    1992-01-01

    Describes various resources accessed through Telnet software that are available through the Internet for use in elementary and secondary education. Highlights include the National Public Telecomputing Network's Cleveland Freenet; Washington University's SERVICES gateway; and Wide Area Information Server (WAIS) programs. (LRW)

  11. Resource Allocation in Dynamic Environments

    DTIC Science & Technology

    2012-10-01

    Utility Curve for the TOC Camera; Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera; Figure 21: Facial-Recognition Utility... A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the... fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to...

  12. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application for image processing of electron micrographs. Ordinarily, Eos works only through character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be experts in image processing of electron micrographs and to have some knowledge of computer science as well. However, not everyone who needs Eos is an expert in CUI use. We therefore extended Eos into an OS-independent web system with graphical user interfaces (GUI) by integrating a web browser. The advantage of using a web browser is not only that Eos gains a GUI, but also that Eos can work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interface through an interface definition file, the "OptionControlFile", written in CSV (comma-separated value) format: each command has an "OptionControlFile", which records the information needed to generate its interface and describe its usage. Because this mechanism is mature and convenient, the GUI system we developed, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also accesses the "OptionControlFile" and produces a web user interface automatically. The basic actions of the client-side system were implemented properly and support auto-generation of web forms, with functions for execution, image preview and file upload to a web server. The system can thus execute Eos commands with the options specific to each command and carry out image analysis. Two problems remain: the image file format for visualization and the workspace for analysis. The image file format information is useful to check whether an input/output file is correct, and we also need to provide a common workspace for analysis because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files through the GUI in the web browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), a platform we are developing that works in heterogeneous distributed environments. Users can put their resources, such as microscopic images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions, each of which defines a workflow of image processing. PIONE runs each image-processing step on suitable computers, following the defined rules. PIONE supports interactive manipulation, and a user is able to try a command with various setting values. Here, we contribute the auto-generation of GUIs for PIONE workflows. As an advanced function, we have developed a module to log user actions. The logs include information such as the setting values used in image processing, the sequence of commands and so on. Used effectively, these logs offer many advantages.
For example, when an expert discovers some know-how in image processing, other users can share the logs containing that know-how, and by analysing the logs we may derive recommended workflows for image analysis. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
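
    The auto-generation of a web interface from a CSV interface-definition file, as described above, can be sketched in a few lines of Python. The column layout, option names and form markup below are invented for illustration; the real OptionControlFile format used by Eos and Zephyr is not reproduced here.

        # Hypothetical sketch: read a CSV option definition and emit an HTML form
        # for one command, in the spirit of Zephyr's auto-generated web interface.
        import csv
        import io

        SAMPLE_OPTION_CONTROL = """\
        option,label,type,default
        -i,Input image,file,
        -o,Output image,text,out.mrc
        -bin,Binning factor,number,2
        """

        def form_from_option_control(command, text):
            rows = csv.DictReader(io.StringIO(text.replace("        ", "")))
            fields = []
            for row in rows:
                input_type = {"file": "file", "number": "number"}.get(row["type"], "text")
                fields.append(
                    f'<label>{row["label"]} ({row["option"]}) '
                    f'<input type="{input_type}" name="{row["option"]}" '
                    f'value="{row["default"]}"></label>')
            return (f'<form action="/run/{command}" method="post">\n  '
                    + "\n  ".join(fields)
                    + '\n  <button type="submit">Execute</button>\n</form>')

        if __name__ == "__main__":
            print(form_from_option_control("mrcImageBinning", SAMPLE_OPTION_CONTROL))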

  13. 78 FR 77739 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... services allow Users to rent space in the data center so they may locate their electronic servers in close... User is able to request a physical cabinet to house its servers and other equipment in the data center... Exchange enter the Exchange's trading and execution systems through the same order gateway, regardless of...

  14. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-12-01

    The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT) managing the metadata associated with data sets, users, and resources. We are also developing additional services of 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we will describe the software architecture and components of the system, and use a practical example in support of storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.

  15. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
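
    The hysteresis idea motivating the model can be illustrated with a small Python sketch: servers are switched on only when the queue stays above a high threshold (and only after a setup delay), and switched off only when it falls below a lower one. The thresholds, setup delay and load trace are invented placeholders, and the sketch is a discrete-time caricature rather than the queuing-theoretic model developed in the paper.

        # Hypothetical hysteresis-based scaling: distinct "switch on" and "switch off"
        # thresholds plus a setup delay prevent reactions to momentary spikes or dips.

        def scale_with_hysteresis(queue_trace, high=20, low=5, setup_delay=3,
                                  min_servers=1, max_servers=10):
            active, pending = min_servers, []          # pending holds activation countdowns
            history = []
            for queue_len in queue_trace:
                pending = [t - 1 for t in pending]
                active += sum(1 for t in pending if t == 0)   # non-instantaneous activation
                pending = [t for t in pending if t > 0]
                if queue_len > high and active + len(pending) < max_servers:
                    pending.append(setup_delay)               # switch a server on (after setup)
                elif queue_len < low and active > min_servers:
                    active -= 1                               # switch an idle server off
                history.append(active)
            return history

        if __name__ == "__main__":
            trace = [2, 8, 25, 30, 28, 22, 12, 6, 3, 2, 1, 26, 27, 10, 4]
            print(scale_with_hysteresis(trace))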

  16. Planning and management of cloud computing networks

    NASA Astrophysics Data System (ADS)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were considered as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce application deployment time and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and from any device with Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what are the optimal data center locations. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy efficient data centers, and CO2 emissions can be decreased by choosing data centers provided with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering communication traffic among VMs. Cloud data centers receive new applications over time and these applications need on-demand resource provisioning.
Each application is composed of multiple types of VMs that interact among themselves. A program called the scheduler must place each new VM on a server, and that placement impacts QoS and power consumption. Our method places VMs that communicate among themselves on servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered and the most efficient ones are chosen to place the VMs. The number of VMs of each application can be dynamically changed to match the workload, and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems and uses state-of-the-art optimization solvers. Heuristics are then developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very small optimality gaps, between 0% and 1.95%, are obtained, with execution times on the order of minutes for the planning problem and less than a second for real-time cases. We consider that this thesis on resource provisioning of cloud computing networks makes important contributions to this research area, and innovative commercial applications based on the proposed methods have a promising future.
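
    The communication-aware placement step described in the last stage can be illustrated with a greedy Python sketch that puts each new VM on the feasible server closest to the VMs it exchanges the most traffic with. Server names, capacities, distances and traffic rates are invented; the thesis itself formulates the problem as a MIP and solves much larger instances with heuristics.

        # Hypothetical greedy, communication-aware VM placement.

        def place_vm(new_vm, traffic, placement, capacity, distance):
            """traffic: {(vm_a, vm_b): rate}; placement: {vm: server};
            capacity: {server: free slots}; distance: {(s1, s2): hops}."""
            def comm_cost(server):
                cost = 0.0
                for (a, b), rate in traffic.items():
                    if new_vm in (a, b):
                        other = b if a == new_vm else a
                        if other in placement:
                            cost += rate * distance[(server, placement[other])]
                return cost

            candidates = [s for s, free in capacity.items() if free > 0]
            best = min(candidates, key=comm_cost)      # closest feasible server
            placement[new_vm] = best
            capacity[best] -= 1
            return best

        if __name__ == "__main__":
            servers = ("s1", "s2", "s3")
            distance = {(a, b): (0 if a == b else 1 if {a, b} == {"s1", "s2"} else 3)
                        for a in servers for b in servers}
            placement = {"web": "s1"}
            capacity = {"s1": 0, "s2": 2, "s3": 2}
            traffic = {("web", "db"): 10.0}
            print(place_vm("db", traffic, placement, capacity, distance))  # -> 's2'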

  17. BioBarcode: a general DNA barcoding database and server platform for Asian biodiversity resources

    PubMed Central

    2009-01-01

    Background DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. Results We have built a general DNA barcode data processing system, BioBarcode, with open source software - which is a general purpose database and server. It uses mySQL RDBMS 5.0, BLAST2, and Apache httpd server. An exemplary database of BioBarcode has around 11,300 specimen entries (including GenBank data) and registers the biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance in DNA sequence analyses. Conclusion Asia has a very high degree of biodiversity and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. The BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system http://www.asianbarcode.org. PMID:19958506

  18. Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data

    DTIC Science & Technology

    2014-04-01

    annotate other ontologies for the visual interface client. Finally, we are actively working on software development of both a backend server and the...the following infrastructure and resources. For the development and management of the ontologies, we installed a framework consisting of a server...that is being developed by Google. Using these technologies, we developed an HTML5 client that runs on Windows, Mac OSX, Linux and mobile systems

  19. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    NASA Astrophysics Data System (ADS)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years. This is due to the gradual increase of internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed in order to be put into practice in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM and CPU capacity. These amounts may be even larger in an educational MMORPG game that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall load on the game's resources is essential, so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology with which we can monitor and optimize load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provided without overloading the system.

  20. 78 FR 77765 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ...-location services allow Users to rent space in the data center so they may locate their electronic servers... Cabinets A User is able to request a physical cabinet to house its servers and other equipment in the data... order gateway, regardless of whether the sender is co-located in the data center or not. In addition, co...

  1. Mobile Care (Moca) for Remote Diagnosis and Screening

    PubMed Central

    Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari

    2010-01-01

    Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397

  2. The distributed annotation system.

    PubMed

    Dowell, R D; Jokerst, R M; Day, A; Eddy, S R; Stein, L

    2001-01-01

    Currently, most genome annotation is curated by centralized groups with limited resources. Efforts to share annotations transparently among multiple groups have not yet been satisfactory. Here we introduce a concept called the Distributed Annotation System (DAS). DAS allows sequence annotations to be decentralized among multiple third-party annotators and integrated on an as-needed basis by client-side software. The communication between client and servers in DAS is defined by the DAS XML specification. Annotations are displayed in layers, one per server. Any client or server adhering to the DAS XML specification can participate in the system; we describe a simple prototype client and server example. The DAS specification is being used experimentally by Ensembl, WormBase, and the Berkeley Drosophila Genome Project. Continued success will depend on the readiness of the research community to adopt DAS and provide annotations. All components are freely available from the project website http://www.biodas.org/.
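
    The layered, client-side integration idea behind DAS can be caricatured in Python as follows. The fetch function, feature tuples and server names are invented stand-ins; the real DAS protocol defines a specific XML request/response format governed by the DAS XML specification, which is not reproduced here.

        # Hypothetical sketch: query several annotation servers for one region and
        # keep the results in layers, one layer per server, integrated client-side.

        def fetch_features(server_name, region):
            """Stand-in for an HTTP/XML request to a DAS-style annotation server."""
            canned = {
                "reference-server": [("gene", 100, 900)],
                "lab-annotations":  [("binding_site", 150, 180), ("SNP", 420, 421)],
            }
            return canned.get(server_name, [])

        def layered_view(servers, region):
            # one layer per server, merged on the client only when displayed
            return {server: fetch_features(server, region) for server in servers}

        if __name__ == "__main__":
            layers = layered_view(["reference-server", "lab-annotations"], ("chr1", 1, 1000))
            for server, features in layers.items():
                print(server)
                for kind, start, end in features:
                    print(f"  {kind}: {start}-{end}")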

  3. ArachnoServer 3.0: an online resource for automated discovery, analysis and annotation of spider toxins.

    PubMed

    Pineda, Sandy S; Chaumeil, Pierre-Alain; Kunert, Anne; Kaas, Quentin; Thang, Mike W C; Le, Lien; Nuhn, Michael; Herzig, Volker; Saez, Natalie J; Cristofori-Armstrong, Ben; Anangi, Raveendra; Senff, Sebastian; Gorse, Dominique; King, Glenn F

    2018-03-15

    ArachnoServer is a manually curated database that consolidates information on the sequence, structure, function and pharmacology of spider-venom toxins. Although spider venoms are complex chemical arsenals, the primary constituents are small disulfide-bridged peptides that target neuronal ion channels and receptors. Due to their high potency and selectivity, these peptides have been developed as pharmacological tools, bioinsecticides and drug leads. A new version of ArachnoServer (v3.0) has been developed that includes a bioinformatics pipeline for automated detection and analysis of peptide toxin transcripts in assembled venom-gland transcriptomes. ArachnoServer v3.0 was updated with the latest sequence, structure and functional data, the search-by-mass feature has been enhanced, and toxin cards provide additional information about each mature toxin. http://arachnoserver.org. support@arachnoserver.org. Supplementary data are available at Bioinformatics online.

  4. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    PubMed

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource intensive nature of NGS secondary analysis built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  5. The Stanford MediaServer Project: strategies for building a flexible digital media platform to support biomedical education and research.

    PubMed Central

    Durack, Jeremy C.; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P.; Dev, Parvati

    2002-01-01

    Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research. PMID:12463820

  6. The Stanford MediaServer Project: strategies for building a flexible digital media platform to support biomedical education and research.

    PubMed

    Durack, Jeremy C; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P; Dev, Parvati

    2002-01-01

    Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research.

  7. Reference-frame-independent quantum-key-distribution server with a telecom tether for an on-chip client.

    PubMed

    Zhang, P; Aungskunsiri, K; Martín-López, E; Wabnig, J; Lobino, M; Nock, R W; Munns, J; Bonneau, D; Jiang, P; Li, H W; Laing, A; Rarity, J G; Niskanen, A O; Thompson, M G; O'Brien, J L

    2014-04-04

    We demonstrate a client-server quantum key distribution (QKD) scheme. Large resources such as laser and detectors are situated at the server side, which is accessible via telecom fiber to a client requiring only an on-chip polarization rotator, which may be integrated into a handheld device. The detrimental effects of unstable fiber birefringence are overcome by employing the reference-frame-independent QKD protocol for polarization qubits in polarization maintaining fiber, where standard QKD protocols fail, as we show for comparison. This opens the way for quantum enhanced secure communications between companies and members of the general public equipped with handheld mobile devices, via telecom-fiber tethering.

  8. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    PubMed

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.

    Program summary
    Program title: Application Hosting Environment 2.0
    Catalogue identifier: AEEJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence, Version 2
    No. of lines in distributed program, including test data, etc.: not applicable
    No. of bytes in distributed program, including test data, etc.: 1 685 603 766
    Distribution format: tar.gz
    Programming language: Perl (server), Java (client)
    Computer: x86
    Operating system: Linux (server), Linux/Windows/MacOS (client)
    RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
    Classification: 6.5
    External routines: VirtualBox (server), Java (client)
    Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
    Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
    Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
    Running time: Not applicable
    References:
    [1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
    [2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  10. COPRED: prediction of fold, GO molecular function and functional residues at the domain level.

    PubMed

    López, Daniel; Pazos, Florencio

    2013-07-15

    Only recently the first resources devoted to the functional annotation of proteins at the domain level started to appear. The next step is to develop specific methodologies for predicting function at the domain level based on these resources, and to implement them in web servers to be used by the community. In this work, we present COPRED, a web server for the concomitant prediction of fold, molecular function and functional sites at the domain level, based on a methodology for domain molecular function prediction and a resource of domain functional annotations previously developed and benchmarked. COPRED can be freely accessed at http://csbg.cnb.csic.es/copred. The interface works in all standard web browsers. WebGL (natively supported by most browsers) is required for the in-line preview and manipulation of protein 3D structures. The website includes a detailed help section and usage examples. pazos@cnb.csic.es.

  11. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    PubMed

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs.
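
    The abstract describes resources as canonical OWL DL graphs that clients retrieve and reason over. The fragment below is only a generic sketch of dereferencing such a description graph and listing its statements with the rdflib library; the resource URL is hypothetical, and SSWAP's actual client tooling and graph vocabulary are not reproduced here.

    # Sketch: fetch a semantic resource description (RDF/OWL) and inspect its
    # triples. The URL is hypothetical and the vocabulary is not SSWAP's own.
    from rdflib import Graph

    RESOURCE_URL = "http://sswap.example.org/some/resource"   # hypothetical

    graph = Graph()
    graph.parse(RESOURCE_URL, format="xml")   # RDF/XML description of the resource

    # List the (subject, predicate, object) statements the provider published.
    for subject, predicate, obj in graph:
        print(subject, predicate, obj)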

  12. GPCR & company: databases and servers for GPCRs and interacting partners.

    PubMed

    Kowalsman, Noga; Niv, Masha Y

    2014-01-01

    G-protein-coupled receptors (GPCRs) are a large superfamily of membrane receptors that are involved in a wide range of signaling pathways. To fulfill their tasks, GPCRs interact with a variety of partners, including small molecules, lipids and proteins. They are accompanied by different proteins during all phases of their life cycle. Therefore, GPCR interactions with their partners are of great interest in basic cell-signaling research and in drug discovery. Due to the rapid development of computers and internet communication, knowledge and data can be easily shared within the worldwide research community via freely available databases and servers. These provide an abundance of biological, chemical and pharmacological information. This chapter describes the available web resources for investigating GPCR interactions. We review about 40 freely available databases and servers, and provide a few sentences about the essence and the data they supply. For simplification, the databases and servers were grouped under the following topics: general GPCR-ligand interactions; particular families of GPCRs and their ligands; GPCR oligomerization; GPCR interactions with intracellular partners; and structural information on GPCRs. In conclusion, a multitude of useful tools are currently available. Summary tables are provided to ease navigation between the numerous and partially overlapping resources. Suggestions for future enhancements of the online tools include the addition of links from general to specialized databases and enabling usage of user-supplied template for GPCR structural modeling.

  13. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    NASA Astrophysics Data System (ADS)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services to a single server; 4) Upgrading the database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, this will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  14. Implementation of a next-generation electronic nursing records system based on detailed clinical models and integration of clinical practice guidelines.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook

    2013-12-01

    The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.
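
    The terminology server described above generates nursing narratives from entity-attribute-value (EAV) triplets of detailed clinical models. The toy sketch below illustrates the general idea of rendering such triplets as text; the triplet content and the sentence template are invented for illustration and are not the hospital's actual clinical models or generation rules.

    # Toy illustration: render entity-attribute-value triplets as simple
    # narrative sentences. Content and template are hypothetical.
    def narrative_from_triplet(entity: str, attribute: str, value: str) -> str:
        """Render one EAV triplet as a short English sentence."""
        return f"{entity}: {attribute} is {value}."

    triplets = [
        ("Pain", "severity", "moderate"),
        ("Pressure ulcer", "stage", "II"),
    ]

    for eav in triplets:
        print(narrative_from_triplet(*eav))
    # Pain: severity is moderate.
    # Pressure ulcer: stage is II.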

  15. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRL's) for PEM applications. PEM applications contact the server to obtain/store certificates and CRL's. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a.' The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.

  16. The Iowa Flood Center's River Stage Sensors—Technical Details

    NASA Astrophysics Data System (ADS)

    Niemeier, J. J.; Kruger, A.; Ceynar, D.; Fahim Rezaei, H.

    2012-12-01

    The Iowa Flood Center (IFC), with support from the Iowa Department of Transportation (DOT) and the Iowa Department of Natural Resources (DNR), has developed a bridge-mounted river stage sensor. Each sensor consists of an ultrasonic distance measuring module, a cellular modem, a GPS unit that provides accurate time, and an embedded controller that orchestrates the sensor's operation. A sensor is powered by a battery and solar panel with a solar charge controller. All the components are housed in/on a sturdy metal box that is mounted on the side of a bridge. Additionally, each sensor incorporates a water-intrusion sensor and an internal temperature sensor. In operation, the microcontroller wakes and turns on the electronics every 15 minutes, then measures the distance between the ultrasonic sensor and the water surface. Several measurements are averaged and transmitted, along with system health information (battery voltage, state of the water-intrusion sensor, and internal temperature), via cellular modem to remote servers on the internet. The microcontroller then powers the electronics down and enters a sleep/power-saving mode. The sensor's firmware allows the remote server to adjust the measurement rate to 5, 15, or 60 minutes. Further, sensors maintain a 24-day buffer of previous measurements. If a sensor could not successfully transmit its data because of cellular network connection problems, it will transmit the backlog on subsequent transmissions. We paid meticulous attention to all engineering aspects, and the sensors are very robust; they have operated essentially continuously through two Iowa winters and summers, including the record-breaking warm summer of 2012.
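
    The following is an illustrative sketch of the duty cycle just described (wake, average several ultrasonic readings, transmit them with health data, keep a backlog when the link is down, then sleep). The hardware-access functions are hypothetical stand-ins, and the number of readings averaged per cycle is an assumed value; the actual firmware runs on an embedded controller rather than desktop Python.

    # Illustrative duty-cycle sketch; hardware calls are simulated stand-ins.
    import random
    import time

    MEASUREMENT_INTERVAL_S = 15 * 60     # default 15-minute cycle (remotely adjustable)
    SAMPLES_PER_CYCLE = 8                # number of readings averaged; assumed value


    def read_range_cm() -> float:
        """Stand-in for the ultrasonic driver; returns a simulated distance."""
        return 250.0 + random.uniform(-1.0, 1.0)


    def read_health() -> dict:
        """Stand-in for battery voltage, water-intrusion flag and internal temperature."""
        return {"battery_v": 12.6, "intrusion": False, "temp_c": 21.0}


    def transmit(payload: dict) -> bool:
        """Stand-in for the cellular upload; return False to simulate a dropped link."""
        print("uploading", payload)
        return True


    def one_cycle(backlog: list) -> None:
        readings = [read_range_cm() for _ in range(SAMPLES_PER_CYCLE)]
        record = {"stage_cm": sum(readings) / len(readings),
                  "timestamp": time.time(), **read_health()}
        backlog.append(record)                # local buffer of past measurements
        if transmit({"records": backlog}):    # backlog is retransmitted after outages
            backlog.clear()


    if __name__ == "__main__":
        backlog = []
        while True:
            one_cycle(backlog)
            time.sleep(MEASUREMENT_INTERVAL_S)   # stands in for the MCU sleep mode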

  17. HOED: Hypermedia Online Educational Database.

    ERIC Educational Resources Information Center

    Duval, E.; Olivie, H.

    This paper presents HOED, a distributed hypermedia client-server system for educational resources. The aim of HOED is to provide a library facility for hyperdocuments that is accessible via the world wide web. Its main application domain is education. The HOED database not only holds the educational resources themselves, but also data describing…

  18. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  19. CINTEX: International Interoperability Extensions to EOSDIS

    NASA Technical Reports Server (NTRS)

    Graves, Sara J.

    1997-01-01

    A large part of the research under this cooperative agreement involved working with representatives of the DLR, NASDA, EDC, and NOAA-SAA data centers to propose a set of enhancements and additions to the EOSDIS Version 0 Information Management System (V0 IMS) Client/Server Message Protocol. Helen Conover of ITSL led this effort to provide for an additional geographic search specification (WRS Path/Row), data set- and data center-specific search criteria, search by granule ID, specification of data granule subsetting requests, data set-based ordering, and the addition of URLs to result messages. The V0 IMS Server Cookbook is an evolving document, providing resources and information to data centers setting up a V0 IMS Server. Under this Cooperative Agreement, Helen Conover revised, reorganized, and expanded this document, and converted it to HTML. Ms. Conover has also worked extensively with the IRE RAS data center, CPSSI, in Russia. She served as the primary IMS contact for IRE-CPSSI and as IRE-CPSSI's liaison to other members of IMS and Web Gateway (WG) development teams. Her documentation of IMS problems in the IRE environment (Sun servers and low network bandwidth) led to a general restructuring of the V0 IMS Client message polling system, to the benefit of all IMS participants. In addition to the IMS server software and documentation, which are generally available to CINTEX sites, Ms. Conover also provided database design documentation and consulting, order tracking software, and hands-on testing and debug assistance to IRE. In the final pre-operational phase of IRE-CPSSI development, she also supplied information on configuration management, including ideas and processes in place at the Global Hydrology Resource Center (GHRC), an EOSDIS data center operated by ITSL.

  20. Data sharing system for lithography APC

    NASA Astrophysics Data System (ADS)

    Kawamura, Eiichi; Teranishi, Yoshiharu; Shimabara, Masanori

    2007-03-01

    We have developed a simple and cost-effective data sharing system between fabs for lithography advanced process control (APC). Lithography APC requires process flow, inter-layer information, history information, mask information and so on, so an inter-APC data sharing system becomes necessary when lots are to be processed in multiple fabs (usually two fabs). The development cost and maintenance cost also have to be taken into account. The system handles the minimum information necessary to make trend predictions for the lots. Three types of data have to be shared for precise trend prediction. The first is device information for the lots, e.g., the process flow of the device and inter-layer information. The second is mask information from mask suppliers, e.g., pattern characteristics and pattern widths. The last is history data of the lots. Device information is an electronic file and easy to handle; the electronic file is common between APCs and is uploaded into the database. As for mask information sharing, mask information described in a common format is obtained from the mask vendor via a Wide Area Network (WAN) and stored in the mask-information data server. This information is periodically transferred to one specific lithography-APC server and compiled into the database. This lithography-APC server then periodically delivers the mask information to every other lithography-APC server. The process-history data sharing system mainly consists of a function for delivering process-history data. When production lots are shipped to another fab, the product-related process-history data is delivered by the lithography-APC server at the shipping site. We have confirmed the function and effectiveness of the data sharing system.

  1. Electron tunneling in proteins program.

    PubMed

    Hagras, Muhammad A; Stuchebrukhov, Alexei A

    2016-06-05

    We developed a unique integrated software package (called the Electron Tunneling in Proteins Program, or ETP) which provides an environment with different capabilities, such as tunneling current calculation, semi-empirical quantum mechanical calculation, and molecular modeling simulation, for the calculation and analysis of electron transfer reactions in proteins. The ETP program is developed as a cross-platform client-server program in which all the different calculations are conducted on the server side, while the client terminal only displays the resulting calculation outputs in the different supported representations. The ETP program is integrated with a set of well-known computational software packages including Gaussian, BALLVIEW, Dowser, pKip, and APBS. In addition, the ETP program supports various visualization methods for the tunneling calculation results that assist in a more comprehensive understanding of the tunneling process. © 2016 Wiley Periodicals, Inc.

  2. Designing communication and remote controlling of virtual instrument network system

    NASA Astrophysics Data System (ADS)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over the LAN, and ultimately remote control of virtual instruments, is realized on the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem, and a local instrument control subsystem. The paper describes the structure of the LabWindows-based virtual instrument network in detail and introduces the essential techniques involved, including the design of the network communication application, the client/server programming model, the realization of communication between the remote PC and the server, the transfer of workstation control authority, and the server program itself. The virtual instrument network can also be connected to the wider Internet. The technology has been verified through its application in an electronic measurement virtual instrument network that is already in operation; experiments and practical application confirm the design's practical value and effectiveness.

  3. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.

  4. Pre-Clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2007-11-01

    accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition...JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5. Photograph of mobile electronics rack...supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application

  5. A Standardized Interface for Obtaining Digital Planetary and Heliophysics Time Series Data

    NASA Astrophysics Data System (ADS)

    Vandegriff, Jon; Weigel, Robert; Faden, Jeremy; King, Todd; Candey, Robert

    2016-10-01

    We describe a low level interface for accessing digital Planetary and Heliophysics data, focusing primarily on time-series data from in-situ instruments. As the volume and variety of planetary data has increased, it has become harder to merge diverse datasets into a common analysis environment. Thus we are building low-level computer-to-computer infrastructure to enable data from different missions or archives to be able to interoperate. The key to enabling interoperability is a simple access interface that standardizes the common capabilities available from any data server: 1. identify the data resources that can be accessed; 2. describe each resource; and 3. get the data from a resource. We have created a standardized way for data servers to perform each of these three activities. We are also developing a standard streaming data format for the actual data content to be returned (i.e., the result of item 3). Our proposed standard access interface is simple enough that it could be implemented on top of or beside existing data services, or it could even be fully implemented by a small data provider as a way to ensure that the provider's holdings can participate in larger data systems or joint analysis with other datasets. We present details of the interface and of the streaming format, including a sample server designed to illustrate the data request and streaming capabilities.
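
    The sketch below illustrates how a client might exercise the three standardized capabilities described above (identify the available resources, describe one resource, and get its data) over HTTP. The endpoint names, query parameters, response fields and server URL are all assumptions made for illustration; they are not the published specification.

    # Hypothetical client for the three capabilities: catalog, describe, get data.
    import json
    from urllib.request import urlopen

    SERVER = "https://data.example.org/api"          # hypothetical server


    def get_json(path: str) -> dict:
        with urlopen(f"{SERVER}/{path}") as resp:
            return json.load(resp)


    catalog = get_json("catalog")                    # 1. identify available resources
    dataset_id = catalog["datasets"][0]["id"]

    info = get_json(f"info?id={dataset_id}")         # 2. describe the chosen resource
    print("parameters:", [p["name"] for p in info["parameters"]])

    data = get_json(f"data?id={dataset_id}"          # 3. get data for a time range
                    "&start=2016-01-01T00:00Z&stop=2016-01-02T00:00Z")
    print(len(data["records"]), "records returned")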

  6. CMD: a Cotton Microsatellite Database resource for Gossypium genomics

    PubMed Central

    Blenda, Anna; Scheffler, Jodi; Scheffler, Brian; Palmer, Michael; Lacape, Jean-Marc; Yu, John Z; Jesudurai, Christopher; Jung, Sook; Muthukumar, Sriram; Yellambalase, Preetham; Ficklin, Stephen; Staton, Margaret; Eshelman, Robert; Ulloa, Mauricio; Saha, Sukumar; Burr, Ben; Liu, Shaolin; Zhang, Tianzhen; Fang, Deqiu; Pepper, Alan; Kumpatla, Siva; Jacobs, John; Tomkins, Jeff; Cantrell, Roy; Main, Dorrie

    2006-01-01

    Background The Cotton Microsatellite Database (CMD) is a curated and integrated web-based relational database providing centralized access to publicly available cotton microsatellites, an invaluable resource for basic and applied research in cotton breeding. Description At present CMD contains publication, sequence, primer, mapping and homology data for nine major cotton microsatellite projects, collectively representing 5,484 microsatellites. In addition, CMD displays data for three of the microsatellite projects that have been screened against a panel of core germplasm. The standardized panel consists of 12 diverse genotypes including genetic standards, mapping parents, BAC donors, subgenome representatives, unique breeding lines, exotic introgression sources, and contemporary Upland cottons with significant acreage. A suite of online microsatellite data mining tools is accessible at CMD. These include an SSR server which identifies microsatellites, primers, open reading frames, and GC-content of uploaded sequences; BLAST and FASTA servers providing sequence similarity searches against the existing cotton SSR sequences and primers; a CAP3 server to assemble EST sequences into longer transcripts prior to mining for SSRs; and CMap, a viewer for comparing cotton SSR maps. Conclusion The collection of publicly available cotton SSR markers in a centralized, readily accessible and curated web-enabled database provides a more efficient utilization of microsatellite resources and will help accelerate basic and applied research in molecular breeding and genetic mapping in Gossypium spp. PMID:16737546
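
    To give a flavor of what an SSR (microsatellite) detector does, the naive sketch below finds short motifs repeated in tandem using a regular expression. The motif lengths and repeat threshold are arbitrary illustrative choices and this is not the algorithm used by the CMD SSR server.

    # Naive SSR illustration: find 2-6 bp motifs repeated in tandem at least
    # four times. Thresholds are arbitrary, not CMD's implementation.
    import re


    def find_ssrs(sequence: str, min_repeats: int = 4):
        """Return (motif, start, matched_length) tuples for simple tandem repeats."""
        pattern = re.compile(r"([ACGT]{2,6}?)\1{%d,}" % (min_repeats - 1))
        hits = []
        for match in pattern.finditer(sequence.upper()):
            hits.append((match.group(1), match.start(), len(match.group(0))))
        return hits


    seq = "TTGACACACACACACAGGTAGTAGTAGTAGTAC"
    for motif, start, length in find_ssrs(seq):
        print(f"motif={motif} start={start} length={length}")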

  7. Mediation and the Electronic World.

    ERIC Educational Resources Information Center

    Swan, John; And Others

    1992-01-01

    Three articles discuss the issue of the mediator's role in the library of the electronic age. Topics addressed include computer-assisted instruction; online catalogs; computer networks; professional identity; reference service and bibliographic instruction; CD-ROMs; online systems; personal home microcomputers; Internet and list servers;…

  8. Disaster recovery plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, D.E.

    The BMS production implementation will be complete by October 1, 1998, and the server environment will comprise two types of platforms. The PassPort Supply and the PeopleSoft Financials will reside on UNIX servers, and the PeopleSoft Human Resources and Payroll will reside on Microsoft NT servers. Because of the wide scope and the requirements of the COTS products to run in various environments, backup and recovery responsibilities are divided between two groups in Technical Operations. The Central Computer Systems Management group provides support for the UNIX/NT Backup Data Center, and the Network Infrastructure Systems group provides support for the NT Application Server Backup outside the Data Center. The disaster recovery process is dependent on a good backup and recovery process. Information and integrated system data for determining the disaster recovery process is identified from the Fluor Daniel Hanford (FDH) Risk Assessment Plan, Contingency Plan, Backup and Recovery Plan, and Backup Form for HANDI 2000 BMS.

  9. Pathview Web: user friendly pathway visualization and data integration

    PubMed Central

    Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory

    2017-01-01

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075

  10. RaptorX server: a resource for template-based protein structure modeling.

    PubMed

    Källberg, Morten; Margaryan, Gohar; Wang, Sheng; Ma, Jianzhu; Xu, Jinbo

    2014-01-01

    Assigning functional properties to a newly discovered protein is a key challenge in modern biology. To this end, computational modeling of the three-dimensional atomic arrangement of the amino acid chain is often crucial in determining the role of the protein in biological processes. We present a community-wide web-based protocol, RaptorX server ( http://raptorx.uchicago.edu ), for automated protein secondary structure prediction, template-based tertiary structure modeling, and probabilistic alignment sampling. Given a target sequence, RaptorX server is able to detect even remotely related template sequences by means of a novel nonlinear context-specific alignment potential and probabilistic consistency algorithm. Using the protocol presented here it is thus possible to obtain high-quality structural models for many target protein sequences when only distantly related protein domains have experimentally solved structures. At present, RaptorX server can perform secondary and tertiary structure prediction of a 200 amino acid target sequence in approximately 30 min.

  11. PhysiomeSpace: digital library service for biomedical data

    PubMed Central

    Testi, Debora; Quadrani, Paolo; Viceconti, Marco

    2010-01-01

    Every research laboratory has a wealth of biomedical data locked up, which, if shared with other experts, could dramatically improve biomedical and healthcare research. With the PhysiomeSpace service, it is now possible with a few clicks to share with selected users biomedical data in an easy, controlled and safe way. The digital library service is managed using a client–server approach. The client application is used to import, fuse and enrich the data information according to the PhysiomeSpace resource ontology and upload/download the data to the library. The server services are hosted on the Biomed Town community portal, where through a web interface, the user can complete the metadata curation and share and/or publish the data resources. A search service capitalizes on the domain ontology and on the enrichment of metadata for each resource, providing a powerful discovery environment. Once the users have found the data resources they are interested in, they can add them to their basket, following a metaphor popular in e-commerce web sites. When all the necessary resources have been selected, the user can download the basket contents into the client application. The digital library service is now in beta and open to the biomedical research community. PMID:20478910

  12. PhysiomeSpace: digital library service for biomedical data.

    PubMed

    Testi, Debora; Quadrani, Paolo; Viceconti, Marco

    2010-06-28

    Every research laboratory has a wealth of biomedical data locked up, which, if shared with other experts, could dramatically improve biomedical and healthcare research. With the PhysiomeSpace service, it is now possible with a few clicks to share with selected users biomedical data in an easy, controlled and safe way. The digital library service is managed using a client-server approach. The client application is used to import, fuse and enrich the data information according to the PhysiomeSpace resource ontology and upload/download the data to the library. The server services are hosted on the Biomed Town community portal, where through a web interface, the user can complete the metadata curation and share and/or publish the data resources. A search service capitalizes on the domain ontology and on the enrichment of metadata for each resource, providing a powerful discovery environment. Once the users have found the data resources they are interested in, they can add them to their basket, following a metaphor popular in e-commerce web sites. When all the necessary resources have been selected, the user can download the basket contents into the client application. The digital library service is now in beta and open to the biomedical research community.

  13. Electronic Mail (E-Mail) Management and Use

    DTIC Science & Technology

    1999-03-01

    age or date of birth; present or future assignments for overseas, or for routinely deployable or sensitive units; and office...Server Naming Convention. The DMS-AF primary (e.g., ESL® Primary, Lotus® Hub, and Microsoft® Bridgehead) server and its backup must conform to the 8...determine if there is some benefit to establishing standard lower-level folder names which bases should adhere to. Standard public folders offer a

  14. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services and represent a critical national resource to operational and research communities affected by climate, weather and water. NOMADS is now delivering high availability services as part of NOAA’s official real time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. This paradigm thus lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, so that collections of data in disparate places can be accessed simultaneously, with results processed on servers and clients to produce the needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. We demonstrate how users can use NOMADS services to select the values of Ensemble model runs over the ith Ensemble component, (forecast) time, vertical levels, global horizontal location, and variable: virtually a 6-dimensional data cube accessed across the internet. The example application, called the “Ensemble Probability Tool”, makes probability predictions of user-defined weather events that can be used in remote areas for weather-vulnerable circumstances. An application to access data for a verification pilot study, a collaboration with the World Bank, is shown in detail in a companion paper (U06) and is an example of the high value, usability and relevance of NCEP products and service capability over a wide spectrum of user and partner needs.
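
    The "transmit only what you need" access pattern described above can be illustrated with a short subsetting sketch: an OPeNDAP endpoint is opened lazily and a single variable is pulled for one level, one region and one forecast time. The dataset URL and the variable and coordinate names below are hypothetical assumptions, not an actual NOMADS path, and xarray is used here only as one convenient OPeNDAP-capable client.

    # Sketch of OPeNDAP subsetting; URL, variable and coordinate names are
    # hypothetical.
    import xarray as xr

    URL = "https://nomads.example.gov/dods/gfs/run20240101"   # hypothetical endpoint

    ds = xr.open_dataset(URL)          # only metadata is transferred at this point

    subset = ds["tmpprs"].sel(         # hypothetical temperature-on-pressure variable
        lev=500,                       # one vertical level (hPa)
        lat=slice(20, 60),             # regional window
        lon=slice(230, 300),
    ).isel(time=0)                     # first forecast time

    values = subset.load()             # the server sends just this slice
    print(values.shape)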

  15. The ChIP-Seq tools and web server: a resource for analyzing ChIP-seq and other types of genomic data.

    PubMed

    Ambrosini, Giovanna; Dreos, René; Kumar, Sunil; Bucher, Philipp

    2016-11-18

    ChIP-seq and related high-throughput chromatin profiling assays generate ever increasing volumes of highly valuable biological data. To make sense of it, biologists need versatile, efficient and user-friendly tools for access, visualization and integrative analysis of such data. Here we present the ChIP-Seq command line tools and web server, implementing basic algorithms for ChIP-seq data analysis starting with a read alignment file. The tools are optimized for memory-efficiency and speed thus allowing for processing of large data volumes on inexpensive hardware. The web interface provides access to a large database of public data. The ChIP-Seq tools have a modular and interoperable design in that the output from one application can serve as input to another one. Complex and innovative tasks can thus be achieved by running several tools in a cascade. The various ChIP-Seq command line tools and web services either complement or compare favorably to related bioinformatics resources in terms of computational efficiency, ease of access to public data and interoperability with other web-based tools. The ChIP-Seq server is accessible at http://ccg.vital-it.ch/chipseq/ .

  16. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  17. NASAwide electronic publishing system: Prototype STI electronic document distribution, stage-4 evaluation report

    NASA Technical Reports Server (NTRS)

    Tuey, Richard C.; Collins, Mary; Caswell, Pamela; Haynes, Bob; Nelson, Michael L.; Holm, Jeanne; Buquo, Lynn; Tingle, Annette; Cooper, Bill; Stiltner, Roy

    1996-01-01

    This evaluation report contains an introduction, seven chapters, and five appendices. The Introduction describes the purpose, conceptual framework, functional description, and technical report server of the STI Electronic Document Distribution (EDD) project. Chapter 1 documents the results of the prototype STI EDD in actual operation. Chapter 2 documents each NASA center's post processing publication processes. Chapter 3 documents each center's STI software, hardware, and communications configurations. Chapter 7 documents STI EDD policy, practices, and procedures. The appendices, which are contained in Part 2 of this document, consist of (1) STI EDD Project Plan, (2) Team members, (3) Phasing Schedules, (4) Accessing On-line Reports, and (5) Creating an HTML File and Setting Up an xTRS. In summary, Stage 4 of the NASAwide Electronic Publishing System is the final phase of its implementation through the prototyping and gradual integration of each NASA center's electronic printing systems, desktop publishing systems, and technical report servers to be able to provide to NASA's engineers, researchers, scientists, and external users the widest practicable and appropriate dissemination of information concerning its activities and the results thereof to their workstations.

  18. NASAwide electronic publishing system-prototype STI electronic document distribution: Stage-4 evaluation report. Part 2; Appendices

    NASA Technical Reports Server (NTRS)

    Tuey, Richard C.; Collins, Mary; Caswell, Pamela; Haynes, Bob; Nelson, Michael L.; Holm, Jeanne; Buquo, Lynn; Tingle, Annette; Cooper, Bill; Stiltner, Roy

    1996-01-01

    This evaluation report contains an introduction, seven chapters, and five appendices. The Introduction describes the purpose, conceptual framework, functional description, and technical report server of the Scientific and Technical Information (STI) Electronic Document Distribution (EDD) project. Chapter 1 documents the results of the prototype STI EDD in actual operation. Chapter 2 documents each NASA center's post processing publication processes. Chapter 3 documents each center's STI software, hardware, and communications configurations. Chapter 7 documents STI EDD policy, practices, and procedures. The appendices consist of (A) the STI EDD Project Plan, (B) Team members, (C) Phasing Schedules, (D) Accessing On-line Reports, and (E) Creating an HTML File and Setting Up an xTRS. In summary, Stage 4 of the NASAwide Electronic Publishing System is the final phase of its implementation through the prototyping and gradual integration of each NASA center's electronic printing systems, desktop publishing systems, and technical report servers, to be able to provide to NASA's engineers, researchers, scientists, and external users, the widest practicable and appropriate dissemination of information concerning its activities and the results thereof to their workstations.

  19. Identifying Effectiveness Criteria for Internet Payment Systems.

    ERIC Educational Resources Information Center

    Shon, Tae-Hwan; Swatman, Paula M. C.

    1998-01-01

    Examines Internet payment systems (IPS): third-party, card, secure Web server, electronic token, financial electronic data interchange (EDI), and micropayment based. Reports the results of a Delphi survey of experts identifying and classifying IPS effectiveness criteria and classifying types of IPS providers. Includes the survey invitation letter…

  20. A virtual university Web system for a medical school.

    PubMed

    Séka, L P; Duvauferrier, R; Fresnel, A; Le Beux, P

    1998-01-01

    This paper describes a Virtual Medical University Web server. The project started in 1994 with the development of the French Radiology Server. The main objective of our Virtual Medical University is to offer not only initial training (for students) but also Continuing Professional Education (for practitioners). Our system is based on electronic textbooks, clinical cases (around 4000) and a medical knowledge base called A.D.M. ("Aide au Diagnostic Medical"). We have indexed all electronic textbooks and clinical cases according to the ADM base in order to facilitate navigation in the system. The knowledge base is supported by a relational database management system. The Virtual Medical University, available on the Web, is presently undergoing external evaluation.

  1. The design of the automated control system for warehouse equipment under radio-electronic manufacturing

    NASA Astrophysics Data System (ADS)

    Kapulin, D. V.; Chemidov, I. V.; Kazantsev, M. A.

    2017-01-01

    In the paper, the aspects of design, development and implementation of the automated control system for warehousing under the manufacturing process of the radio-electronic enterprise JSC «Radiosvyaz» are discussed. The architecture of the automated control system for warehousing proposed in the paper consists of a server which is connected to the physically separated information networks: the network with a database server, which stores information about the orders for picking, and the network with the automated storage and retrieval system. This principle allows implementing the requirements for differentiation of access, ensuring the information safety and security requirements. Also, the efficiency of the developed automated solutions in terms of optimizing the warehouse’s logistic characteristics is researched.

  2. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P.; Dimitriou, D.; Hankin, S.

    2005-12-01

    The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available Delayed-Mode (scientific quality controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. 
Also, by promoting community data-serving projects, USGODAE strengthens the community and helps to advance data-serving standards.

  3. Casting the Net: The Development of a Resource Collection for an Internet Database.

    ERIC Educational Resources Information Center

    McKiernan, Gerry

    CyberStacks(sm), a demonstration prototype World Wide Web information service, was established on the home page server at Iowa State University with the intent of facilitating identification and use of significant Internet resources in science and technology. CyberStacks(sm) was created in response to perceived deficiencies in early efforts to…

  4. Flexible opto-electronics enabled microfluidics systems with cloud connectivity for point-of-care micronutrient analysis.

    PubMed

    Lee, Stephen; Aranyosi, A J; Wong, Michelle D; Hong, Ji Hyung; Lowe, Jared; Chan, Carol; Garlock, David; Shaw, Scott; Beattie, Patrick D; Kratochvil, Zachary; Kubasti, Nick; Seagers, Kirsten; Ghaffari, Roozbeh; Swanson, Christina D

    2016-04-15

    In developing countries, the deployment of medical diagnostic technologies remains a challenge because of infrastructural limitations (e.g. refrigeration, electricity), and paucity of health professionals, distribution centers and transportation systems. Here we demonstrate the technical development and clinical testing of a novel electronics enabled microfluidic paper-based analytical device (EE-μPAD) for quantitative measurement of micronutrient concentrations in decentralized, resource-limited settings. The system performs immune-detection using paper-based microfluidics, instrumented with flexible electronics and optoelectronic sensors in a mechanically robust, ultrathin format comparable in size to a credit card. Autonomous self-calibration, plasma separation, flow monitoring, timing and data storage enable multiple devices to be run simultaneously. Measurements are wirelessly transferred to a mobile phone application that geo-tags the data and transmits it to a remote server for real time tracking of micronutrient deficiencies. Clinical tests of micronutrient levels from whole blood samples (n=95) show comparable sensitivity and specificity to ELISA-based tests. These results demonstrate instantaneous acquisition and global aggregation of diagnostics data using a fully integrated point of care system that will enable rapid and distributed surveillance of disease prevalence and geographical progression. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.

    PubMed

    Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S

    The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
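
    The abstract describes HL7 status messages flowing from the RIS through an interface engine into a SQL Server table that backs the dashboard. A minimal sketch of that ingestion step is given below; the segment and field positions, the table schema, and the use of SQLite in place of SQL Server are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): parse a pipe-delimited
# HL7-style order/status message and store the fields a reading-room dashboard
# would display. Field positions and the table schema are hypothetical, and
# SQLite stands in for the SQL Server database named in the abstract.
import sqlite3

def parse_exam_status(hl7_message: str) -> dict:
    """Extract accession number, procedure, and status from ORC/OBR segments."""
    segments = {line.split("|")[0]: line.split("|")
                for line in hl7_message.strip().split("\r")}
    orc, obr = segments["ORC"], segments["OBR"]
    return {
        "accession": obr[3],          # hypothetical field positions
        "procedure_name": obr[4],
        "status": orc[5],             # e.g. SC (scheduled), IP (in progress), CM (complete)
    }

def upsert_exam(conn, exam: dict) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO exams (accession, procedure_name, status) VALUES (?, ?, ?)",
        (exam["accession"], exam["procedure_name"], exam["status"]),
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE exams (accession TEXT PRIMARY KEY, procedure_name TEXT, status TEXT)")
    msg = "MSH|^~\\&|RIS|RAD|||202301011200||ORM^O01|1|P|2.3\rORC|SC|123||^|IP\rOBR|1||ACC123|MRI BRAIN"
    upsert_exam(conn, parse_exam_status(msg))
    print(conn.execute("SELECT * FROM exams").fetchall())
```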

  6. Combining Quick-Turnaround and Batch Workloads at Scale

    NASA Technical Reports Server (NTRS)

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand (IB) cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload and enabling dynamic management of the resources set aside for that workload.
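
    The abstract does not reproduce the site's PBS Professional hooks; the stand-alone sketch below only illustrates the kind of routing decision described, reserving a bounded pool of nodes for short jobs and sending everything else to the batch pool. The walltime threshold and pool size are hypothetical.

```python
# Illustrative sketch only: NAS implements this routing with PBS Professional
# server hooks, which are not shown in the abstract; here the same decision is
# written as a stand-alone function. The walltime threshold and the size of the
# quick-turnaround node pool are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    nodes: int
    walltime_s: int

QUICK_MAX_WALLTIME = 2 * 3600   # jobs shorter than this may use the quick pool
QUICK_POOL_NODES = 512          # nodes dynamically set aside for quick turnaround

def route(job: Job, quick_nodes_in_use: int) -> str:
    """Return the pool ('quick' or 'batch') a newly submitted job should go to."""
    fits_quick = (job.walltime_s <= QUICK_MAX_WALLTIME
                  and quick_nodes_in_use + job.nodes <= QUICK_POOL_NODES)
    return "quick" if fits_quick else "batch"

if __name__ == "__main__":
    print(route(Job(nodes=64, walltime_s=900), quick_nodes_in_use=100))    # quick
    print(route(Job(nodes=2048, walltime_s=86400), quick_nodes_in_use=0))  # batch
```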

  7. Using ant colony optimization on the quadratic assignment problem to achieve low energy cost in geo-distributed data centers

    NASA Astrophysics Data System (ADS)

    Osei, Richard

    There are many problems associated with operating a data center. Some of these problems include data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location and, at most locations, fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer-lasting servers and equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways that data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and some predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages to reduce energy cost. This thesis presents another method by which energy cost at data centers could be reduced. The method involves the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this work, front portals, which handle users' requests, were used as ants to find cost-effective ways to assign user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance increases as the input data grows. In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm reduced energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement for reducing energy cost in geo-distributed data centers.
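
    The thesis' ACO for Optimal Server Activation and Task Placement algorithm is not spelled out in the abstract; the sketch below only illustrates the general ant-colony idea it builds on, assigning request batches to data centers with probability weighted by pheromone and inverse energy cost, then reinforcing the best assignment found. All costs and parameters are invented for illustration.

```python
# Simplified sketch of the general idea only (not the thesis' algorithm):
# ants assign request batches to geo-distributed data centers, favouring low
# energy cost, and pheromone is reinforced along the best assignment found.
# All costs and parameters below are invented for illustration.
import random

COST = [[3.0, 1.5, 2.2],    # energy cost of serving batch i at data center j
        [2.5, 2.8, 1.1],
        [1.2, 3.5, 2.0]]
N_BATCHES, N_CENTERS = len(COST), len(COST[0])
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 10, 50

def total_cost(assignment):
    return sum(COST[i][assignment[i]] for i in range(N_BATCHES))

def run_aco(seed=0):
    random.seed(seed)
    tau = [[1.0] * N_CENTERS for _ in range(N_BATCHES)]     # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(ITERS):
        for _ant in range(ANTS):
            assignment = []
            for i in range(N_BATCHES):
                weights = [tau[i][j] ** ALPHA * (1.0 / COST[i][j]) ** BETA
                           for j in range(N_CENTERS)]
                assignment.append(random.choices(range(N_CENTERS), weights=weights)[0])
            cost = total_cost(assignment)
            if cost < best_cost:
                best, best_cost = assignment, cost
        for i in range(N_BATCHES):                           # evaporate, then reinforce
            for j in range(N_CENTERS):
                tau[i][j] *= (1.0 - RHO)
            tau[i][best[i]] += 1.0 / best_cost
    return best, best_cost

if __name__ == "__main__":
    print(run_aco())
```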

  8. PTB’S Time and Frequency Activities in 2006: New DCF77 Electronics, New NTP Servers, and Calibration Activities

    DTIC Science & Technology

    2007-01-01

    (PTTI) Meeting (TWSTFT) is being routinely performed with several European and US stations. On the initiative of NICT, a TWSTFT link was... During the last 2 years, PTB has upgraded its TWSTFT and GPS capabilities in order to achieve better reliability and robustness against system failures... NTP server, and, briefly, the calibration of the international time links, i.e. the result of the latest calibration of the TWSTFT links to the USNO

  9. Server-based enterprise collaboration software improves safety and quality in high-volume PET/CT practice.

    PubMed

    McDonald, James E; Kessler, Marcus M; Hightower, Jeremy L; Henry, Susan D; Deloney, Linda A

    2013-12-01

    With increasing volumes of complex imaging cases and rising economic pressure on physician staffing, timely reporting will become progressively challenging. Current and planned iterations of PACS and electronic medical record systems do not offer workflow management tools to coordinate delivery of imaging interpretations with the needs of the patient and ordering physician. The adoption of a server-based enterprise collaboration software system by our Division of Nuclear Medicine has significantly improved our efficiency and quality of service.

  10. Pediatric surgeons on the Internet: a multi-institutional experience.

    PubMed

    Wulkan, M L; Smith, S D; Whalen, T V; Hardin, W D

    1997-04-01

    An estimated 24 million people, or 11% of the North American population over 16 years of age, use the Internet. An estimated 40% of households have computers, and 37 million people have Internet access. The experience of three pediatric surgery Internet sites is reviewed to evaluate current practices and the future potential of the Internet for practicing pediatric surgeons. The sites reviewed are the Pediatric Surgery Bulletin Board System (BBS), the Pediatric Surgery List Server, and the Pediatric Surgery Website. Statistics were collected at each site to characterize the number of users, traffic load, topics of interest, and times of peak use. There are currently 79 subscribers to the Pediatric Surgery BBS and 100 subscribers to the Pediatric Surgery List Server. The average user of the BBS is a young man who has placed an average of 52 calls to the BBS since joining. There have been 1413 Internet electronic mail messages sent. Twenty-five percent of the traffic has been related to clinical problems and 5% to research, teaching, and career issues. Traffic at this site has been increasing exponentially, with most of the dialogue concentrated on clinical issues and problem cases. In a 3-month period the Pediatric Surgery Website received 16,270 hits. The most commonly accessed areas include an electronic mail directory, case studies, the job board, information on the pediatric surgical residency, and information on upcoming meetings. Pediatric surgeons are exploring the Internet and using available pediatric surgery resources. The scope of professional information available to pediatric surgeons on the Internet is still limited but is increasing rapidly. The Internet will impact the way physicians practice medicine through education and communication.

  11. Support and Dissemination of Teacher-Authored Lesson Plans: a Digital Library for Earth System Education (DLESE) and Geological Society of America (GSA) Collaboration

    NASA Astrophysics Data System (ADS)

    Devaul, H.; Pandya, R. E.; McLelland, C. V.

    2003-12-01

    The Digital Library for Earth System Education (www.dlese.org) and the Geological Society of America (www.geosociety.org) are working together to publish and disseminate teacher-authored Earth science lesson plans. DLESE is a community-based effort involving teachers, students, and scientists working together to create a library of educational resources and services to support Earth system science education. DLESE offers free access to electronic resources including lesson plans, maps, images, data sets, visualizations, and assessment activities. A number of thematic collections have recently been accessioned, which has substantially increased library holdings. Working in concert with GSA, a non-profit organization dedicated to the advancement of the geosciences, small-scale resource creators such as classroom teachers without access to a web server can also share educational resources of their own design. Following a two-step process, lesson plans are submitted to the GSA website, reviewed and posted to the K-12 resource area: http://www.geosociety.org/educate/resources.htm. These resources are also submitted to the DLESE Community Collection using a simple cataloging tool. In this way resources are available to other teachers via the GSA website as well as via the DLESE collection. GSA provides a template for lesson plan developers which assists in providing the necessary information to help users find and understand the intent of the activity when searching in DLESE. This initial effort can serve as a prototype for important services allowing individual community members to contribute their work to DLESE with little technical overhead.

  12. I Was Attacked Online!

    ERIC Educational Resources Information Center

    Hensel, Jan

    1996-01-01

    Considers some of the problems posed by electronic mail systems when the identity of the user is protected by the server. Suggests that high school instructors and administrators have good reason to circulate electronic messages through a central site, as students need to be protected from exploitative users and prohibited from sending abusive…

  13. Computing and Communications Infrastructure for Network-Centric Warfare: Exploiting COTS, Assuring Performance

    DTIC Science & Technology

    2004-06-01

    remote databases, has seen little vendor acceptance. Each database (Oracle, DB2, MySQL, etc.) has its own client-server protocol. Therefore each... existing standards – SQL, X.500/LDAP, FTP, etc. • View information dissemination as selective replication – State-oriented vs. message-oriented... allowing the application to start. The resource management system would serve as a broker to the resources, making sure that resources are not

  14. Unconditionally verifiable blind quantum computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  15. Network information security in a phase III Integrated Academic Information Management System (IAIMS).

    PubMed

    Shea, S; Sengupta, S; Crosswell, A; Clayton, P D

    1992-01-01

    The developing Integrated Academic Information System (IAIMS) at Columbia-Presbyterian Medical Center provides data sharing links between two separate corporate entities, namely Columbia University Medical School and The Presbyterian Hospital, using a network-based architecture. Multiple database servers with heterogeneous user authentication protocols are linked to this network. "One-stop information shopping" implies one log-on procedure per session, not separate log-on and log-off procedures for each server or application used during a session. These circumstances provide challenges at the policy and technical levels to data security at the network level and insuring smooth information access for end users of these network-based services. Five activities being conducted as part of our security project are described: (1) policy development; (2) an authentication server for the network; (3) Kerberos as a tool for providing mutual authentication, encryption, and time stamping of authentication messages; (4) a prototype interface using Kerberos services to authenticate users accessing a network database server; and (5) a Kerberized electronic signature.

  16. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
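
    A minimal sketch of the back-end behaviour described, resolving a tag to a property record, enforcing a role-based privilege check, and recording the access in an audit trail, is shown below. The schema, roles, and tag value are hypothetical and are not taken from the paper.

```python
# Minimal sketch, not the paper's system: resolve an RFID tag to a property
# record, enforce a role-based privilege check, and append an audit entry, as
# the abstract describes. The schema, roles, and tag value are hypothetical.
import sqlite3, datetime

ALLOWED_ROLES = {"auditor", "manager", "admin"}     # hypothetical privilege levels

def lookup_property(conn, tag_id, user, role):
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"{user} has no inventory privilege")
    row = conn.execute(
        "SELECT tag_id, name, location FROM properties WHERE tag_id = ?", (tag_id,)
    ).fetchone()
    conn.execute(                                   # server-side activity tracking
        "INSERT INTO audit (user, tag_id, action, at) VALUES (?, ?, ?, ?)",
        (user, tag_id, "lookup", datetime.datetime.now().isoformat()),
    )
    conn.commit()
    return row

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE properties (tag_id TEXT PRIMARY KEY, name TEXT, location TEXT)")
    conn.execute("CREATE TABLE audit (user TEXT, tag_id TEXT, action TEXT, at TEXT)")
    conn.execute("INSERT INTO properties VALUES ('E2801160', 'Oscilloscope', 'Lab 3')")
    print(lookup_property(conn, "E2801160", "alice", "auditor"))
```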

  17. APID interactomes: providing proteome-based interactomes with controlled quality for multiple species and derived networks

    PubMed Central

    Alonso-López, Diego; Gutiérrez, Miguel A.; Lopes, Katia P.; Prieto, Carlos; Santamaría, Rodrigo; De Las Rivas, Javier

    2016-01-01

    APID (Agile Protein Interactomes DataServer) is an interactive web server that provides unified generation and delivery of protein interactomes mapped to their respective proteomes. This resource is a new, fully redesigned server that includes a comprehensive collection of protein interactomes for more than 400 organisms (25 of which include more than 500 interactions) produced by the integration of only experimentally validated protein–protein physical interactions. For each protein–protein interaction (PPI) the server includes currently reported information about its experimental validation to allow selection and filtering at different quality levels. As a whole, it provides easy access to the interactomes from specific species and includes a global uniform compendium of 90,379 distinct proteins and 678,441 singular interactions. APID integrates and unifies PPIs from major primary databases of molecular interactions, from other specific repositories and also from experimentally resolved 3D structures of protein complexes where more than two proteins were identified. For this purpose, a collection of 8,388 structures were analyzed to identify specific PPIs. APID also includes a new graph tool (based on Cytoscape.js) for visualization and interactive analyses of PPI networks. The server does not require registration and it is freely available for use at http://apid.dep.usal.es. PMID:27131791

  18. An Internet supported workflow for the publication process in UMVF (French Virtual Medical University).

    PubMed

    Renard, Jean-Marie; Bourde, Annabel; Cuggia, Marc; Garcelon, Nicolas; Souf, Nathalie; Darmoni, Stephan; Beuscart, Régis; Brunetaud, Jean-Marc

    2007-01-01

    The " Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system, which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters it provides a mechanism to easily locate and validate the resources. For librarian it provide a way to improve the efficiency of indexation. For all, the utility provides a workflow system to control the publication process. On the students side, the application improves the value of the UMVF repository by facilitating the publication of new resources and by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application is used to implement the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search and check functionalities. A unique signature for each resource, was needed to provide security functionality and is implemented using a Digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.

  19. Serverification of Molecular Modeling Applications: The Rosetta Online Server That Includes Everyone (ROSIE)

    PubMed Central

    Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju

    2013-01-01

    The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507

  20. Blind quantum computation with identity authentication

    NASA Astrophysics Data System (ADS)

    Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong

    2018-04-01

    Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate a computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of participants' identity, which can lead to man-in-the-middle attacks or denial-of-service attacks. In this work, we use quantum identification to counter these two kinds of attacks in BQC; the resulting approach is called QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a trusted third party (TTP). In addition, an unjammable public channel between a client and a server, which is indispensable in previous BQC protocols, is unnecessary, although such a channel is required between the TTP and each participant at some instant. Furthermore, the method used to achieve identity verification in the presented protocols is general and can be applied to other similar BQC protocols.

  1. Pathview Web: user friendly pathway visualization and data integration.

    PubMed

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without specialized computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. The National Institutes of Health Clinical Center Digital Imaging Network, Picture Archival and Communication System, and Radiology Information System.

    PubMed

    Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V

    2001-06-01

    In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.

  3. Model of load balancing using reliable algorithm with multi-agent system

    NASA Astrophysics Data System (ADS)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development is linear with the growth of Internet users, which increases network traffic activity. It also increases the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue on a large-scale system. A mobile agent works to collect resource information and can migrate according to a given task. We propose a reliable load-balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. The methodology consisted of defining the system identification, requirement specifications, network topology and system infrastructure design. The simulation sent 1800 requests over 10 s from users to the server and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the simulation show that the LFB method with mobile agents can balance load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
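
    The abstract does not give the LFB rule in detail; one plausible reading, sketched below, is to route each new request to the backend whose most recently reported time-to-first-byte is smallest, using the measurements the mobile agents collect.

```python
# Sketch of one plausible reading of the selection rule (assumed, not taken
# verbatim from the paper): send the next request to the backend whose latest
# time-to-first-byte measurement, gathered by the mobile agents, is smallest.
from typing import Dict

def choose_backend(first_byte_ms: Dict[str, float]) -> str:
    """first_byte_ms maps backend name -> latest time-to-first-byte sample (ms)."""
    return min(first_byte_ms, key=first_byte_ms.get)

if __name__ == "__main__":
    samples = {"backend-a": 42.0, "backend-b": 17.5, "backend-c": 88.1}
    print(choose_backend(samples))   # backend-b
```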

  4. EvoCor: a platform for predicting functionally related genes using phylogenetic and expression profiles.

    PubMed

    Dittmar, W James; McIver, Lauren; Michalak, Pawel; Garner, Harold R; Valdez, Gregorio

    2014-07-01

    The wealth of publicly available gene expression and genomic data provides unique opportunities for computational inference to discover groups of genes that function to control specific cellular processes. Such genes are likely to have co-evolved and be expressed in the same tissues and cells. Unfortunately, the expertise and computational resources required to compare tens of genomes and gene expression data sets make this type of analysis difficult for the average end-user. Here, we describe the implementation of a web server that predicts genes involved in affecting specific cellular processes together with a gene of interest. We termed the server 'EvoCor', to denote that it detects functional relationships among genes through evolutionary analysis and gene expression correlation. This web server integrates profiles of sequence divergence derived by a Hidden Markov Model (HMM) and tissue-wide gene expression patterns to determine putative functional linkages between pairs of genes. This server is easy to use and freely available at http://pilot-hmm.vbi.vt.edu/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. ORCAN-a web-based meta-server for real-time detection and functional annotation of orthologs.

    PubMed

    Zielezinski, Andrzej; Dziubek, Michal; Sliski, Jan; Karlowski, Wojciech M

    2017-04-15

    ORCAN (ORtholog sCANner) is a web-based meta-server for one-click evolutionary and functional annotation of protein sequences. The server combines information from the most popular orthology-prediction resources, including four tools and four online databases. Functional annotation utilizes five additional comparisons between the query and identified homologs, including: sequence similarity, protein domain architectures, functional motifs, Gene Ontology term assignments and a list of associated articles. Furthermore, the server uses a plurality-based rating system to evaluate the orthology relationships and to rank the reference proteins by their evolutionary and functional relevance to the query. Using a dataset of ∼1 million true yeast orthologs as a sample reference set, we show that combining multiple orthology-prediction tools in ORCAN increases the sensitivity and precision by 1-2 percentage points. The service is available for free at http://www.combio.pl/orcan/ . wmk@amu.edu.pl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  6. World wide web implementation of the Langley technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.

    1994-01-01

    On January 14, 1993, NASA Langley Research Center (LaRC) made approximately 130 formal, 'unclassified, unlimited' technical reports available via the anonymous FTP Langley Technical Report Server (LTRS). LaRC was the first organization to provide a significant number of aerospace technical reports for open electronic dissemination. LTRS has been successful in its first 18 months of operation, with over 11,000 reports distributed and has helped lay the foundation for electronic document distribution for NASA. The availability of World Wide Web (WWW) technology has revolutionized the Internet-based information community. This paper describes the transition of LTRS from a centralized FTP site to a distributed data model using the WWW, and suggests how the general model for LTRS can be applied to other similar systems.

  7. [Design and Implementation of a Mobile Operating Room Information Management System Based on Electronic Medical Record].

    PubMed

    Liu, Baozhen; Liu, Zhiguo; Wang, Xianwen

    2015-06-01

    A mobile operating room information management system with an electronic medical record (EMR) is designed to improve work efficiency and to enhance patient information sharing. In the operating room, the system acquires information from various medical devices through the client/server (C/S) pattern and automatically generates an XML-based EMR. Outside the operating room, the system provides an information access service using the browser/server (B/S) pattern. Software testing shows that the system can correctly collect medical information from equipment and clearly display real-time waveforms. By producing higher-quality surgery records and sharing the information among mobile medical units, the system can effectively reduce doctors' workload and promote the informatization of the field hospital.

  8. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.

    2004-12-01

    With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.

  9. Verifier-based three-party authentication schemes using extended chaotic maps for data exchange in telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2014-12-01

    Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, and help health care workers and medical personnel to rapidly make correct clinical decisions and treatments. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require server public keys and avoids the time-consuming modular exponential computations and elliptic-curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model, and realizes the lower bounds of messages and rounds in communications. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security, but also has lower computational cost and fewer transmissions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
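
    The abstract does not state the underlying map; as standard background, extended chaotic-map schemes of this kind typically rest on the semigroup property of Chebyshev polynomials, which lets two parties reach a common value without modular exponentiation or elliptic-curve scalar multiplication:

```latex
% Standard background, not taken from the paper: the recurrence and semigroup
% property of Chebyshev polynomials used in extended chaotic-map key agreement.
\begin{aligned}
  T_0(x) &= 1, \qquad T_1(x) = x, \qquad T_n(x) = 2x\,T_{n-1}(x) - T_{n-2}(x),\\
  T_r\bigl(T_s(x)\bigr) &= T_{rs}(x) = T_s\bigl(T_r(x)\bigr).
\end{aligned}
```

    A party holding r and the other side's public value T_s(x) can therefore compute T_{rs}(x), and vice versa, yielding a shared session value without the heavier operations the abstract mentions avoiding.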

  10. 78 FR 37998 - Electronic One Touch Bingo System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-25

    ... decision regarding the classification of server based electronic bingo system games that can be played... Class II or Class III game. DATES: The agency must receive comments on or before August 26, 2013... from the regulated community regarding the status of one touch bingo as a Class II or a Class III game...

  11. 76 FR 19285 - Request for Information Regarding Electronic Disclosure by Employee Benefit Plans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ..., fiber optic and wireless networks; hardware improvements to servers and personal computers improving... electronic media. For instance, the 2009 U.S. Census Bureau Current Population Survey (Census) found that 76.... \\3\\ The Census information may be found at http://www.census.gov/population/www/socdemo/computer.html...

  12. 21 CFR 1300.03 - Definitions relating to electronic orders for controlled substances and electronic prescriptions...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records on its servers. Audit trail means a record showing who has accessed an information technology... identity of the user as a prerequisite to allowing access to the information application. Authentication... information in a database. (4) Comparing the biometric data with data contained in one or more reference...

  13. 78 FR 39012 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... inquiry by a federal, state, or local government entity or professional licensing authority, in accordance... the Commission's office and electronic records located on the Commission's Server. RETRIEVABILITY...

  14. 12 CFR 1732.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., electronic tapes and back-up tapes, optical discs, CD-ROMS, and DVDs), and voicemail records; (2) Where the information is stored or located, including network servers, desktop or laptop computers and handheld...

  15. Autophagic compound database: A resource connecting autophagy-modulating compounds, their potential targets and relevant diseases.

    PubMed

    Deng, Yiqi; Zhu, Lingjuan; Cai, Haoyang; Wang, Guan; Liu, Bo

    2018-06-01

    Autophagy, a highly conserved lysosomal degradation process in eukaryotic cells, can digest long-lived proteins and damaged organelles through vesicular trafficking pathways. The mechanisms of autophagy have gradually been elucidated, and the discovery of small-molecule drugs targeting autophagy has therefore drawn much attention. Several autophagy-related web servers are already available online to help scientists conveniently obtain information relevant to autophagy, such as HADb, CTLPScanner, iLIR server and ncRDeathDB. However, to the best of our knowledge, no web server is available for autophagy-modulating compounds. Based on published articles, the compounds and their relations with autophagy were analyzed. Subsequently, an online Autophagic Compound Database (ACDB) (http://www.acdbliulab.com/) was constructed, which contains information on 357 compounds with 164 corresponding signalling pathways and potential targets in different diseases. We compiled a great deal of information on autophagy-modulating compounds, including the compounds, targets/pathways and diseases. ACDB is a valuable resource giving users access to more than 300 curated small-molecule compounds correlated with autophagy, and it will facilitate the discovery of novel therapeutic drugs in the near future. © 2017 John Wiley & Sons Ltd.

  16. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
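
    The abstract's key design point is that all communication is client-driven, so workers can run behind firewalls or in the cloud. The sketch below illustrates that pull model in general terms; the server URL, endpoint paths, and job format are hypothetical and are not JobCenter's actual API.

```python
# Illustrative sketch of the client-driven pull model the abstract describes
# (every exchange is initiated by the worker, so it can sit behind a firewall
# or in the cloud). The server URL, endpoint paths, and job format below are
# hypothetical; they are not JobCenter's actual API.
import json, subprocess, time, urllib.request

SERVER = "http://jobcenter.example.org"       # hypothetical server URL

def fetch_job():
    with urllib.request.urlopen(f"{SERVER}/next-job") as resp:
        return json.loads(resp.read().decode() or "null")   # None when queue is empty

def report(job_id, output):
    data = json.dumps({"id": job_id, "output": output}).encode()
    req = urllib.request.Request(f"{SERVER}/result", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def worker_loop(poll_seconds=30):
    while True:                               # worker initiates every exchange
        job = fetch_job()
        if job:
            result = subprocess.run(job["command"], shell=True,
                                    capture_output=True, text=True)
            report(job["id"], result.stdout)
        else:
            time.sleep(poll_seconds)          # nothing queued; back off and poll again

# worker_loop()  # start polling (requires a reachable JobCenter-style server)
```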

  17. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration

    PubMed Central

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-01

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503

  18. A new reference implementation of the PSICQUIC web service.

    PubMed

    del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Jimenez, Rafael C; Galeota, Eugenia; Launay, Guillaume; Goll, Johannes; Breuer, Karin; Ono, Keiichiro; Salwinski, Lukasz; Hermjakob, Henning

    2013-07-01

    The Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC) specification was created by the Human Proteome Organization Proteomics Standards Initiative (HUPO-PSI) to enable computational access to molecular-interaction data resources by means of a standard Web Service and query language. Currently providing >150 million binary interaction evidences from 28 servers globally, the PSICQUIC interface allows the concurrent search of multiple molecular-interaction information resources using a single query. Here, we present an extension of the PSICQUIC specification (version 1.3), which has been released to be compliant with the enhanced standards in molecular interactions. The new release also includes a new reference implementation of the PSICQUIC server available to the data providers. It offers augmented web service capabilities and improves the user experience. PSICQUIC has been running for almost 5 years, with a user base growing from only 4 data providers to 28 (April 2013) allowing access to 151 310 109 binary interactions. The power of this web service is shown in PSICQUIC View web application, an example of how to simultaneously query, browse and download results from the different PSICQUIC servers. This application is free and open to all users with no login requirement (http://www.ebi.ac.uk/Tools/webservices/psicquic/view/main.xhtml).

  19. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    PubMed

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things comprises heterogeneous devices, applications, and platforms using multiple communication technologies to connect to the Internet and provide seamless services ubiquitously. To support the development of Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. As a result, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between the different communication environments through the virtual resources, with the server facing the message sender and the client facing the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers on the Internet, a client can access them using the consistent communication protocol of the Open Connectivity Foundation network. To discover the resources needed to access services, the client likewise uses the consistent discovery interface to find Open Connectivity Foundation devices and virtual resources.

  20. Exploiting geo-distributed clouds for a e-health monitoring system with minimum service delay and privacy preservation.

    PubMed

    Shen, Qinghua; Liang, Xiaohui; Shen, Xuemin; Lin, Xiaodong; Luo, Henry Y

    2014-03-01

    In this paper, we propose an e-health monitoring system with minimum service delay and privacy preservation by exploiting geo-distributed clouds. In the system, the resource allocation scheme enables the distributed cloud servers to cooperatively assign servers to the requesting users under a load-balance condition, so that the service delay for users is minimized. In addition, a traffic-shaping algorithm is proposed. The traffic-shaping algorithm converts user health data traffic into non-health data traffic such that the effectiveness of traffic-analysis attacks is largely reduced. Through numerical analysis, we show the efficiency of the proposed traffic-shaping algorithm in terms of service delay and privacy preservation. Furthermore, through simulations, we demonstrate that the proposed resource allocation scheme significantly reduces the service delay compared to two other alternatives that jointly use the short queue and distributed control law.
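
    The paper's traffic-shaping algorithm is not described in detail in the abstract; the sketch below only illustrates one generic way such shaping is commonly done, emitting fixed-size packets at a constant rate and substituting cover traffic when no health data is queued, so the observable traffic pattern no longer reveals monitoring activity.

```python
# Generic illustration (not the authors' algorithm): one common shaping strategy
# is to emit fixed-size packets at a constant rate, substituting dummy "cover"
# packets whenever no health data is queued, so an eavesdropper sees the same
# traffic pattern whether or not the patient is being actively monitored.
from collections import deque

PACKET_SIZE = 256   # bytes; fixed on-the-wire size

def shape(real_payloads, ticks):
    """Return the packet type emitted at each constant-rate tick."""
    queue = deque(p.ljust(PACKET_SIZE, b"\0") for p in real_payloads)
    emitted = []
    for _ in range(ticks):
        if queue:
            queue.popleft()
            emitted.append("health")
        else:
            emitted.append("cover")   # dummy packet, identical in size and timing
    return emitted

if __name__ == "__main__":
    print(shape([b"hr=72", b"spo2=98"], ticks=5))
    # ['health', 'health', 'cover', 'cover', 'cover']
```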

  1. TRSkit: A Simple Digital Library Toolkit

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Esler, Sandra L.

    1997-01-01

    This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is a person who must continuously and synchronously distribute anywhere from hundreds to hundreds of thousands of information units and does not have extensive resources to devote to the problem.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, David J.; Harrison, Christopher B.; Perr, C. W.

    Choreographer is a "moving target defense system", designed to protect against attacks aimed at IP addresses without corresponding domain name system (DNS) lookups. It coordinates actions between a DNS server and a Network Address Translation (NAT) device to regularly change which publicly available IP addresses' traffic will be routed to the protected device versus routed to a honeypot. More details about how Choreographer operates can be found in Section 2: Introducing Choreographer. Operational considerations for the successful deployment of Choreographer can be found in Section 3. The Testing & Evaluation (T&E) for Choreographer involved three phases: Pre-testing, Code Analysis, and Operational Testing. Pre-testing, described in Section 4, involved installing and configuring an instance of Choreographer and verifying it would operate as expected for a simple use case. Our findings were that it was simple and straightforward to prepare a system for a Choreographer installation as well as configure Choreographer to work in a representative environment. Code Analysis, described in Section 5, consisted of running a static code analyzer (HP Fortify) and conducting dynamic analysis tests using the Valgrind instrumentation framework. Choreographer performed well, such that only a few errors that might possibly be problematic in a given operating situation were identified. Operational Testing, described in Section 6, involved operating Choreographer in a representative environment created through Emulytics™. Depending upon the amount of server resources dedicated to Choreographer vis-à-vis the amount of client traffic handled, Choreographer had varying degrees of operational success. In an environment with a poorly resourced Choreographer server and as few as 50-100 clients, Choreographer failed to properly route traffic over half the time. Yet, with a well-resourced server, Choreographer handled over 1000 clients without misrouting. Choreographer demonstrated sensitivity to low-latency connections as well as high volumes of traffic. In addition, depending upon the frequency of new connection requests and the size of the address range that Choreographer has to work with, it is possible for all benefits of Choreographer to be negated by its need to allow DNS servers rather than the end client to make DNS requests. Conclusions and Recommendations, listed in Section 7, address the need to understand the specific use case where Choreographer would be deployed, to assess whether there would be problems resulting from the operational considerations described in Section 3 or performance concerns from the results of Operational Testing in Section 6. Deployed in an appropriate architecture with sufficiently light traffic volumes and a well-provisioned server, it is quite likely that Choreographer would perform satisfactorily. Thus, we recommend further detailed testing, potentially including Red Team testing, at such time as a specific use case is identified.

  3. SEGEL: A Web Server for Visualization of Smoking Effects on Human Lung Gene Expression.

    PubMed

    Xu, Yan; Hu, Brian; Alnajm, Sammy S; Lu, Yin; Huang, Yangxin; Allen-Gipson, Diane; Cheng, Feng

    2015-01-01

    Cigarette smoking is a major cause of death worldwide, resulting in over six million deaths per year. Cigarette smoke contains complex mixtures of chemicals that are harmful to nearly all organs of the human body, especially the lungs. Cigarette smoking is considered the major risk factor for many lung diseases, particularly chronic obstructive pulmonary disease (COPD) and lung cancer. However, the underlying molecular mechanisms of smoking-induced lung injury associated with these lung diseases still remain largely unknown. Expression microarray techniques have been widely applied to detect the effects of smoking on gene expression in different human cells in the lungs. These projects have provided a great deal of useful information for researchers to understand the potential molecular mechanism(s) of smoke-induced pathogenesis. However, a user-friendly web server that would allow scientists to quickly query these data sets and compare the smoking effects on gene expression across different cells had not yet been established. For that reason, we have integrated eight public expression microarray data sets from trachea epithelial cells, large airway epithelial cells, small airway epithelial cells, and alveolar macrophages into an online web server called SEGEL (Smoking Effects on Gene Expression of Lung). Users can query gene expression patterns across these cells from smokers and nonsmokers by gene symbols, and find the effects of smoking on the gene expression of lungs from this web server. Sex differences in response to smoking are also shown. The relationship between gene expression and cigarette smoking consumption was calculated and is shown in the server. The current version of the SEGEL web server contains 42,400 annotated gene probe sets represented on the Affymetrix Human Genome U133 Plus 2.0 platform. SEGEL will be an invaluable resource for researchers interested in the effects of smoking on gene expression in the lungs. The server also provides useful information for drug development against smoking-related diseases. The SEGEL web server is available online at http://www.chengfeng.info/smoking_database.html.

  4. An Improved Publication Process for the UMVF.

    PubMed

    Renard, Jean-Marie; Brunetaud, Jean-Marc; Cuggia, Marc; Darmoni, Stephan; Lebeux, Pierre; Beuscart, Régis

    2005-01-01

    The "Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters it provides a mechanism to easily locate and validate the resources. For both the teachers and the web masters, the utility provides the control and communication functions that define a workflow system. For all users, students in particular, the application improves the value of the UMVF repository by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application is used to implement the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search and check functionalities and is implemented using a Java applet through a W3C compliant web browser. A unique signature for each resource was needed to provide security functionality and is implemented using the MD5 digest algorithm. The testing performed at Rennes and Lille verified the functionality and conformity with our specifications.
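    The per-resource signature described above can be illustrated with a short sketch using Python's standard hashlib; the file name is hypothetical. Note that MD5 is adequate for detecting accidental corruption, but it is no longer considered collision-resistant against a deliberate attacker.

```python
# A minimal sketch of the kind of per-resource signature the abstract describes:
# an MD5 digest computed over the resource file, which can later be recomputed
# to check that a copy has not been altered. The file name is hypothetical.
import hashlib

def md5_signature(path, chunk_size=8192):
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# signature = md5_signature("lecture_notes_v2.pdf")
# A stored signature that no longer matches the recomputed one indicates the
# resource was modified (or corrupted) since publication.
```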

  5. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  6. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud

    PubMed Central

    Karimi, Kamran; Vize, Peter D.

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782

  7. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Lawson, J.

    2003-01-01

    Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
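    A toy sketch, under assumed definitions, of the distinction the abstract draws: a team-game agent receives the raw world utility as its learning signal, while a collectives-style agent receives a difference utility that isolates its own marginal contribution. The world utility below (negative total server overload) is an illustrative stand-in, not the paper's model.

```python
# Toy sketch (not the paper's system) contrasting a "team game" utility with a
# collectives-style difference utility for server selection. G is a hypothetical
# world utility penalising server overload; each agent is credited with its
# marginal effect on G, which gives a cleaner learning signal than G itself.
def world_utility(loads, capacity=10):
    # Negative total overload across servers: higher is better.
    return -sum(max(0, load - capacity) for load in loads)

def difference_utility(agent_server, agent_load, loads):
    with_agent = world_utility(loads)
    without = list(loads)
    without[agent_server] -= agent_load        # remove the agent's own contribution
    return with_agent - world_utility(without)

loads = [12, 7, 9]                             # aggregate load per server
print(world_utility(loads))                    # team-game signal, shared by all agents
print(difference_utility(0, 3, loads))         # credit for an agent placing 3 units on server 0
```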

  8. The design and implementation of web mining in web sites security

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web Mining techniques to abnormal Web log and Web application log data. The security of Web servers can thus be enhanced and the damage of illegal access avoided. Firstly, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided to system administrators so that they can modify their code and enhance Web site security. The following aspects are described: one is to combine the Web application log with the Web log to extract more information, so that Web data mining can discover information that a firewall and an intrusion detection system cannot find. Another is an operation module for the Web site that enhances its security. In the server-session clustering step, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
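    The clustering step can be illustrated with a hedged example using scikit-learn's DBSCAN on simple per-session features derived from web and application logs; the features and thresholds are assumptions for the example, not the authors' configuration.

```python
# A hedged illustration of the density-based clustering step described above,
# using scikit-learn's DBSCAN on per-session features extracted from web logs.
# The feature choice and eps/min_samples values are assumptions for the example.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: [requests per minute, fraction of error responses, distinct URLs hit]
sessions = np.array([
    [12, 0.01, 8],
    [10, 0.02, 7],
    [11, 0.00, 9],
    [95, 0.40, 60],   # unusual session: high request rate, many errors
])

labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(sessions)
print(labels)  # sessions labelled -1 are density outliers worth inspecting
```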

  9. [The Development of Information Centralization and Management Integration System for Monitors Based on Wireless Sensor Network].

    PubMed

    Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin

    2015-07-01

    We developed an information centralization and management integration system for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication on the existing wireless network. With adaptive implementation and low cost, the system, which offers real-time, efficient, and fine-grained operation, is able to collect the status and data of the monitors, locate the monitors, and provide services through a web server, a video server, and a locating server via the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Applications of this system provide convenience and save human resources for clinical departments, as well as improving the efficiency, accuracy, and granularity of device management. The successful implementation of this system provides a solution for the integrated, fine-grained management of mobile devices including ventilators and infusion pumps.

  10. Realizing the Potential of Information Resources: Information, Technology, and Services. Proceedings of the CAUSE Annual Conference (New Orleans, Louisiana, November 28-December 3, 1995).

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    This document presents the proceedings of a conference on managing and using information technology in higher education in regard to client/server computing, network delivery, process reengineering, leveraging of resources, and professional development. Eight tracks, with eight papers in each track, addressed the themes of: (1) strategic planning;…

  11. The Resource Manager the ATLAS Trigger and Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Aleksandrov, I.; Avolio, G.; Lehmann Miotto, G.; Soloviev, I.

    2017-10-01

    The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right for applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager’s design is based on a client-server model, hence it consists of two components: the Resource Manager “server” application and the “client” shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager client library provides remote access to the “server” (i.e., to allocate and free resources, to query about the status of resources). During the LHC’s Long Shutdown period, the Resource Manager’s requirements have been reviewed in light of the experience gained during the LHC’s Run 1. As a consequence, the Resource Manager has undergone a full re-design and re-implementation cycle with the result of a reduction of the code base by 40% with respect to the previous implementation. This contribution will focus on the way the design and the implementation of the Resource Manager could leverage the new features available in the C++11 standard, and how the introduction of external libraries (like Boost multi-container) led to a more maintainable system. Additionally, particular attention will be given to the technical solutions adopted to ensure the Resource Manager could sustain the typical request rates of the Data Acquisition system, which is about 30000 requests in a time window of a few seconds coming from more than 1000 clients.
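    A much-simplified, hypothetical sketch of the lock-manager-style behaviour described above: each resource exists in a limited number of copies, and an application is started only if every resource it needs can be granted.

```python
# A simplified, hypothetical sketch (not the ATLAS implementation) of allocating
# resources that exist in limited copies: requests are granted all-or-nothing,
# and releasing an application returns its copies to the free pool.
class ResourceManager:
    def __init__(self, copies):
        self.free = dict(copies)          # resource name -> copies still available
        self.held = {}                    # (app, resource) -> copies granted

    def request(self, app, needs):
        # Grant only if every requested resource has enough free copies.
        if any(self.free.get(res, 0) < n for res, n in needs.items()):
            return False
        for res, n in needs.items():
            self.free[res] -= n
            self.held[(app, res)] = self.held.get((app, res), 0) + n
        return True

    def release(self, app):
        for (holder, res), n in list(self.held.items()):
            if holder == app:
                self.free[res] += n
                del self.held[(holder, res)]

rm = ResourceManager({"DAQ_segment": 1, "GPU": 4})
print(rm.request("monitoring", {"GPU": 2}))    # True
print(rm.request("calibration", {"GPU": 3}))   # False: only 2 copies left
```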

  12. A multipurpose computing center with distributed resources

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users directly use the local batch system. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing this geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  13. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... system, including an electronic or technological aid (not limited to terminals, player stations... that are effectively random. Server. A computer which controls one or more applications or environments...

  14. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... system, including an electronic or technological aid (not limited to terminals, player stations... that are effectively random. Server. A computer which controls one or more applications or environments...

  15. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.

  16. ARIANE: integration of information databases within a hospital intranet.

    PubMed

    Joubert, M; Aymard, S; Fieschi, D; Volot, F; Staccini, P; Robert, J J; Fieschi, M

    1998-05-01

    Large information systems handle massive volumes of data stored in heterogeneous sources. Each server has its own model of representation of concepts with regard to its aims. One of the main problems end-users encounter when accessing different servers is to match their own viewpoint on biomedical concepts with the various representations made in the database servers. The aim of the ARIANE project is to provide end-users with easy-to-use and natural means to access and query heterogeneous information databases. The objectives of this research work consist of building a conceptual interface by means of Internet technology inside an enterprise Intranet and of proposing a method to realize it. This method is based on the knowledge sources provided by the Unified Medical Language System (UMLS) project of the US National Library of Medicine. Experiments concern queries to three different information servers: PubMed, a Medline server of the NLM; Thériaque, a French database on drugs implemented in the hospital Intranet; and a Web site dedicated to Internet resources in gastroenterology and nutrition, located at the Faculty of Medicine of Nice (France). Access to each of these servers differs according to the kind of information delivered and the technology used to query it. With the health care professional workstation in mind, the authors introduced quality criteria into the ARIANE project in order to achieve a homogeneous and efficient way to build a query system able to be integrated in existing information systems and to integrate existing and new information sources.

  17. The Electronic Library: The Student/Scholar Workstation, CD-ROM and Hypertext.

    ERIC Educational Resources Information Center

    Triebwasser, Marc A.

    Predicting that a large component of the library of the not so distant future will be an electronic network of file servers where information is stored for access by personal computer workstations in remote locations as well as the library, this paper discusses innovative computer technologies--particularly CD-ROM (Compact Disk-Read Only Memory)…

  18. Identifying patients for clinical trials using fuzzy ternary logic expressions on HL7 messages.

    PubMed

    Majeed, Raphael W; Röhrig, Rainer

    2011-01-01

    Identifying eligible patients is one of the most critical parts of any clinical trial. The process of recruiting patients for the third phase of any clinical trial is usually done manually, informing relevant physicians or putting notes on bulletin boards. While most necessary information is already available in electronic hospital information systems, required data still has to be looked up individually. Most university hospitals make use of a dedicated communication server to distribute information from independent information systems, e.g. laboratory information systems, electronic health records, surgery planning systems. Thus, a theoretical model is developed to formally describe inclusion and exclusion criteria for each clinical trial using a fuzzy ternary logic expression. These expressions will then be used to process HL7 messages from a communication server in order to identify eligible patients.
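    A minimal sketch of evaluating inclusion and exclusion criteria with three-valued logic, where a criterion may be true, false, or unknown because the relevant HL7 message has not yet arrived. The fuzzy membership degrees of the authors' model are omitted here; the combination rules and example criteria are illustrative assumptions only.

```python
# A minimal sketch of three-valued ("ternary") evaluation of trial criteria,
# where None means "unknown: the relevant HL7 message has not arrived yet".
# The rules and criteria below are illustrative, not the authors' model,
# and the fuzzy degrees of the paper are omitted for brevity.
def t_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def t_not(a):
    return None if a is None else (not a)

# Example criteria extracted from incoming messages for one patient:
age_ok = True                 # inclusion: age between 18 and 75
creatinine_ok = None          # lab result not yet received
pregnant = False              # exclusion criterion

eligible = t_and(t_and(age_ok, creatinine_ok), t_not(pregnant))
print(eligible)   # None -> patient stays on a "possibly eligible" watch list
```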

  19. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Services (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  20. Docker Container Manager: A Simple Toolkit for Isolated Work with Shared Computational, Storage, and Network Resources

    NASA Astrophysics Data System (ADS)

    Polyakov, S. P.; Kryukov, A. P.; Demichev, A. P.

    2018-01-01

    We present a simple set of command line interface tools called Docker Container Manager (DCM) that allow users to create and manage Docker containers with preconfigured SSH access while keeping the users isolated from each other and restricting their access to the Docker features that could potentially disrupt the work of the server. Users access the DCM server via SSH and are automatically redirected to the DCM interface tool. From there, they can create new containers, stop, restart, pause, unpause, and remove containers and view the status of the existing containers. By default, the containers are also accessible via SSH using the same private key(s) but through different server ports. Additional publicly available ports can be mapped to the respective ports of a container, allowing some network services to be run within it. The containers are started from read-only filesystem images. Some initial images must be provided by the DCM server administrators, and after containers are configured to meet one’s needs, the changes can be saved as new images. Users can see the available images and remove their own images. DCM server administrators are provided with commands to create and delete users. All commands were implemented as Python scripts. The tools allow users to deploy and debug medium-sized distributed systems for simulation in different fields on one or several local computers.
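    A hedged sketch of the kind of wrapper such tooling needs: launching a container from a read-only image with its SSH port mapped to a per-user host port, via the standard Docker command line. The image name, user name, and port policy are assumptions for illustration, not DCM's actual interface.

```python
# A hedged sketch (not DCM's actual code) of launching an SSH-accessible
# container through the standard Docker CLI from Python. The image name,
# naming convention, and port choice are illustrative assumptions.
import subprocess

def start_ssh_container(user, image, host_port):
    name = f"dcm-{user}-{host_port}"
    cmd = [
        "docker", "run", "-d",
        "--name", name,
        "-p", f"{host_port}:22",      # expose the container's sshd on a host port
        image,
    ]
    container_id = subprocess.check_output(cmd, text=True).strip()
    return name, container_id

# name, cid = start_ssh_container("alice", "dcm/base-ssh:latest", 20022)
# The user would then connect with something like: ssh -p 20022 alice@dcm-server
```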

  1. Privacy-preserving public auditing for data integrity in cloud

    NASA Astrophysics Data System (ADS)

    Shaik Saleem, M.; Murali, M.

    2018-04-01

    Cloud computing, which has attracted extensive attention from the research community and from industry, provides a large pool of computing resources such as storage, processing power, applications, and services through virtualized sharing. Cloud users are provisioned with on-demand resources as they need them. An outsourced file can easily be tampered with because it is stored in a third-party service provider's database and the user retains no control over the data, so providing security assurance for user data has become one of the primary concerns for cloud service providers. Cloud servers themselves do not take responsibility for data loss and do not guarantee the security of user data. Remote data integrity checking (RDIC) enables a data storage server to prove that it is truthfully storing an owner's data. The scheme consists of a security model and an ID-based RDIC protocol that is responsible for the security of every server and preserves the privacy of the cloud user's data against the third-party verifier. Generally, by running a two-party RDIC protocol, clients can themselves check the trustworthiness of the data stored in their cloud; however, in the two-party scenario the verification result, coming either from the data owner or from the cloud server, may be considered one-sided. The public verifiability feature of RDIC gives all users the ability to verify whether the original data has been modified. To ensure the transparency of publicly verifiable RDIC protocols, it is assumed that there exists a third-party auditor (TPA) with the knowledge and efficiency to carry out this verification on behalf of the users.
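    A deliberately simplified sketch of the spot-checking intuition behind remote data integrity checking: the owner keeps small per-block tags, later challenges the server for a randomly chosen block, and recomputes the tag on the returned block. Real RDIC and public-auditing schemes use homomorphic authenticators so a third-party auditor can verify without retrieving the data; this sketch conveys only the intuition, not such a scheme.

```python
# A deliberately simplified sketch of block spot-checking for data integrity.
# This is NOT a real RDIC/public-auditing scheme (those use homomorphic tags
# so the auditor never sees the data); it only illustrates the challenge idea.
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)    # kept secret by the data owner / auditor
BLOCK = 256             # bytes per block

def tags_for(data):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]

def server_respond(stored, index):
    # The (untrusted) server returns the challenged block.
    return stored[index * BLOCK:(index + 1) * BLOCK]

data = os.urandom(4 * BLOCK)       # the outsourced file
tags = tags_for(data)              # small per-block tags retained by the owner

challenge = random.randrange(len(tags))
proof = server_respond(data, challenge)
ok = hmac.compare_digest(hmac.new(KEY, proof, hashlib.sha256).digest(), tags[challenge])
print("block", challenge, "intact:", ok)
```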

  2. Opal web services for biomedical applications.

    PubMed

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  3. The software analysis project for the Office of Human Resources

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1994-01-01

    There were two major sections of the project for the Office of Human Resources (OHR). The first section was to conduct a planning study to analyze software use with the goal of recommending software purchases and determining whether the need exists for a file server. The second section was analysis and distribution planning for a retirement planning computer program entitled VISION provided by NASA Headquarters. The software planning study was developed to help OHR analyze the current administrative desktop computing environment and make decisions regarding software acquisition and implementation. There were three major areas addressed by the study: the current environment, new software requirements, and strategies regarding the implementation of a server in the Office. To gather data on the current environment, employees were surveyed and an inventory of computers was produced. The surveys were compiled and analyzed by the ASEE fellow with interpretation help from OHR staff. New software requirements represented a compilation and analysis of the surveyed requests of OHR personnel. Finally, the information on the use of a server represents research done by the ASEE fellow and analysis of survey data to determine software requirements for a server. This included selection of a methodology to estimate the number of copies of each software program required given current use and estimated growth. The report presents the results of the computing survey, a description of the current computing environment, recommendations for changes in the computing environment, current software needs, management advantages of using a server, and management considerations in the implementation of a server. In addition, detailed specifications were presented for the hardware and software recommendations to offer a complete picture to OHR management. The retirement planning computer program available to NASA employees will aid in long-range retirement planning. The intended audience is the NASA civil service employee with several years until retirement. The employee enters current salary and savings information as well as goals concerning salary at retirement, assumptions on inflation, and the return on investments. The program produces a picture of the employee's retirement income from all sources based on the assumptions entered. A session showing features of the program was conducted for key personnel at the Center. After analysis, it was decided to offer the program through the Learning Center starting in August 1994.

  4. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.; Jolly, Ronald L.

    2007-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in the article on page 8. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.

  5. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.

    2005-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in "Predicting Rocket or Jet Noise in Real Time" (SSC-00215-1), which appears elsewhere in this issue of NASA Tech Briefs. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.

  6. Computing at H1 - Experience and Future

    NASA Astrophysics Data System (ADS)

    Eckerlin, G.; Gerhards, R.; Kleinwort, C.; Krüner-Marquis, U.; Egli, S.; Niebergall, F.

    The H1 experiment has now been successfully operating at the electron proton collider HERA at DESY for three years. During this time the computing environment has gradually shifted from a mainframe oriented environment to the distributed server/client Unix world. This transition is now almost complete. Computing needs are largely determined by the present amount of 1.5 TB of reconstructed data per year (1994), corresponding to 1.2 × 10^7 accepted events. All data are centrally available at DESY. In addition to data analysis, which is done in all collaborating institutes, most of the centrally organized Monte Carlo production is performed outside of DESY. New software tools to cope with offline computing needs include CENTIPEDE, a tool for the use of distributed batch and interactive resources for Monte Carlo production, and H1 UNIX, a software package for automatic updates of H1 software on all UNIX platforms.

  7. A Satellite Data-Driven, Client-Server Decision Support Application for Agricultural Water Resources Management

    NASA Technical Reports Server (NTRS)

    Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.

    2016-01-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. Furthermore, this architecture ensures all agencies and teams involved in water management use the same, up-to-date information in their simulations.

  8. A satellite data-driven, client-server decision support application for agricultural water resources management

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Johnson, L.; Kimball, J. S.

    2016-12-01

    Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight `app` that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. Furthermore, this architecture ensures all agencies and teams involved in water management use the same, up-to-date information in their simulations.

  9. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files, or the addition of new or the deletion of old data products. Next, ADAPT routines analyzed the query results and issued updates to the metadata stored in the UCLA CDAWEB and SPDF metadata registries. In this way, the SPASE metadata registries generated by ADAPT can be relied on to provide up-to-date and complete access to Heliophysics CDF data resources on a daily basis.
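    A hedged sketch of generating a Granule-style XML description for one data file, in the spirit of the pipeline described above. The element names only approximate SPASE terminology and the identifiers are hypothetical; the official SPASE schema should be consulted for the authoritative structure.

```python
# A hedged sketch of emitting a Granule-style XML record for one CDF file.
# Element names approximate SPASE terminology and identifiers are hypothetical;
# this is not guaranteed to validate against the official SPASE schema.
import xml.etree.ElementTree as ET

def granule_xml(resource_id, parent_id, url):
    granule = ET.Element("Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id   # identifier assigned to the file
    ET.SubElement(granule, "ParentID").text = parent_id       # the high-level data product
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url                   # access URL for the file
    return ET.tostring(granule, encoding="unicode")

print(granule_xml(
    "spase://Example/Granule/SOME_DATASET/20150101",          # hypothetical identifiers
    "spase://Example/NumericalData/SOME_DATASET",
    "https://example.org/data/some_dataset_20150101.cdf",
))
```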

  10. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    PubMed

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.

  11. Fulfillment of HTTP Authentication Based on Alcatel OmniSwitch 9700

    NASA Astrophysics Data System (ADS)

    Liu, Hefu

    This paper describes a method of HTTP authentication on the Alcatel OmniSwitch 9700. Authenticated VLANs control user access to network resources based on VLAN assignment and user authentication. The user can be authenticated through the switch via any standard Web browser software. The Web browser client displays the username and password prompts, and HTML forms are then used to pass the HTTP authentication data when the form is submitted. A RADIUS server provides a database of user information that the switch checks whenever a user tries to authenticate through the switch. Before or after authentication, the client can obtain an address from a DHCP server.

  12. Undergraduates Improve upon Published Crystal Structure in Class Assignment

    ERIC Educational Resources Information Center

    Horowitz, Scott; Koldewey, Philipp; Bardwell, James C.

    2014-01-01

    Recently, 57 undergraduate students at the University of Michigan were assigned the task of solving a crystal structure, given only the electron density map of a 1.3 Å crystal structure from the electron density server, and the position of the N-terminal amino acid. To test their knowledge of amino acid chemistry, the students were not given the…

  13. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
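    A generic sketch of the client-driven pattern described above: each worker polls the server for work and pushes results back, which lets workers sit behind firewalls and balances load naturally. The URLs and JSON job format are hypothetical, not JobCenter's actual wire protocol; the server is assumed to return JSON null when no work is available.

```python
# A generic sketch of a client-driven (pull-based) worker loop. The server
# address, endpoints, and JSON job format are hypothetical assumptions; this
# illustrates the pattern, not JobCenter's actual protocol.
import json
import time
import urllib.request

SERVER = "http://jobcenter.example.org"      # hypothetical server address

def poll_once(run_job):
    req = urllib.request.Request(f"{SERVER}/next-job")
    with urllib.request.urlopen(req, timeout=30) as resp:
        job = json.loads(resp.read().decode("utf-8"))
    if not job:                               # server returns JSON null when idle
        return False
    result = run_job(job)                     # execute the job locally
    data = json.dumps({"id": job["id"], "result": result}).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(f"{SERVER}/result", data=data), timeout=30)
    return True

def worker_loop(run_job, idle_sleep=10):
    while True:
        if not poll_once(run_job):
            time.sleep(idle_sleep)            # nothing to do: back off before polling again
```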

  14. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration.

    PubMed

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-04

    Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
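    A hedged example of the kind of custom SPARQL query the abstract mentions, issued from Python over HTTP. The endpoint URL and response-format parameter are assumptions for illustration; the Ontobee site should be consulted for the actual SPARQL service address before running this.

```python
# A hedged example of a custom SPARQL query against a linked-ontology endpoint.
# The ENDPOINT value and the "format" parameter are assumptions; verify the
# actual Ontobee SPARQL service address and parameters before use.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://www.ontobee.org/sparql"   # assumed endpoint, verify before use

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label WHERE {
  ?term rdfs:label ?label .
  FILTER regex(?label, "lymphocyte", "i")
} LIMIT 10
"""

params = urllib.parse.urlencode({"query": query, "format": "application/sparql-results+json"})
with urllib.request.urlopen(f"{ENDPOINT}?{params}", timeout=30) as resp:
    results = json.loads(resp.read().decode("utf-8"))

# Standard SPARQL JSON results: one binding per matching term.
for row in results["results"]["bindings"]:
    print(row["term"]["value"], row["label"]["value"])
```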

  15. MetNetAPI: A flexible method to access and manipulate biological network data from MetNet

    PubMed Central

    2010-01-01

    Background: Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results: MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions: An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). The API is available for Java, Microsoft.NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943

  16. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker images for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect interactively data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
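
    Since the record describes Unidata's Docker-based "containerized applications" for services such as the THREDDS Data Server, a brief sketch of deploying one such container programmatically may be useful. It uses the Docker SDK for Python; the image name, container name, and port mapping are assumptions for illustration, not values taken from Unidata's documentation.

```python
# Minimal sketch of launching a containerized data server with the Docker SDK for
# Python (docker-py). The image name and port mapping below are assumptions for
# illustration; see the relevant project's container documentation for real values.
import docker

client = docker.from_env()
container = client.containers.run(
    "unidata/thredds-docker",      # assumed image name
    detach=True,                   # run in the background
    ports={"8080/tcp": 8080},      # expose the web interface on localhost:8080
    name="thredds-demo",
)
print(container.name, container.status)
```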

  17. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka: MicroElectroMechanical Systems - MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the Internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the Internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side, located at Texas Tech University, hosts a server machine that runs the Linux operating system and interfaces the MEMS lab with the outside world via the Internet. The controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabView Virtual Instruments. An optical microscope (100 ×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the Internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.

  18. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package

    PubMed Central

    2012-01-01

    Background Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941

  19. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    PubMed

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  20. Implementation of Medical Information Exchange System Based on EHR Standard

    PubMed Central

    Han, Soon Hwa; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong

    2010-01-01

    Objectives To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. Methods To assure that patients' medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. Results The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. Conclusions This research chose transfer items and defined document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information. PMID:21818447

  1. Implementation of Medical Information Exchange System Based on EHR Standard.

    PubMed

    Han, Soon Hwa; Lee, Min Ho; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong

    2010-12-01

    To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. To assure that patients' medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. This research chose transfer items and defined document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information.

  2. Secure electronic commerce communication system based on CA

    NASA Astrophysics Data System (ADS)

    Chen, Deyun; Zhang, Junfeng; Pei, Shujun

    2001-07-01

    In this paper, we introduce the state of electronic commerce security, then analyze the working process and security of the SSL protocol. Finally, we propose a secure electronic commerce communication system based on a CA. The system provides secure services such as encryption, integrity, peer authentication and non-repudiation for the application-layer communication software of browser clients and web servers. The system can implement automatic key allocation and unified key management by setting up the CA in the network.

  3. WMT: The CSDMS Web Modeling Tool

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
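
    To make the client/server exchange concrete, the sketch below shows how a client might fetch JSON-encoded component metadata from a RESTful model server of the kind described above. The base URL and resource path are hypothetical placeholders, not the documented WMT API.

```python
# Minimal sketch of fetching JSON-encoded component metadata from a RESTful
# model server. The base URL and path are hypothetical placeholders used only
# to illustrate the request/response pattern described in the record.
import requests

BASE_URL = "https://example.edu/wmt-api"          # hypothetical server location
resp = requests.get(f"{BASE_URL}/components", timeout=10)
resp.raise_for_status()

for component in resp.json():                     # assume a JSON list of component records
    print(component.get("name"), "-", component.get("summary"))
```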

  4. Creation of a unified geoinformation system for monitoring social and economic development of Departamento Quindío (Colombia) on the basis of isolated data sources

    NASA Astrophysics Data System (ADS)

    Buravlev, V.; Sereshnikov, S. V.; Mayorov, A. A.; Vila, J. J.

    At each level of state and municipal management, the information resources that support administrative decision-making typically exist as a number of heterogeneous, mutually unconnected electronic data sources, such as databases, geoinformation projects and electronic document archives. These sources are located in various organizations, run in various programs, and are updated according to various rules. Building, on the basis of such isolated sources, unified information systems that allow any information stored in them to be browsed and analyzed in real time helps improve the quality of the administrative decisions that are taken. The Distributed Data Service technology (TrisoftDDS), developed by Trisoft, Ltd, supports the construction of horizontal, territorially distributed, heterogeneous information systems (TeRGIS). TrisoftDDS allows systems to be created, supported and modified quickly, using existing information complexes as data sources without disrupting their operation, and provides remote, regulated, multi-user access to different types of data sources over the Internet/intranet. Relational databases, GIS projects, and files of various types (MS Office documents, images, HTML documents, etc.) can serve as data sources in a TeRGIS. A TeRGIS is built as an Internet/intranet application with a three-level client-server architecture. Access to the information in existing data sources is provided by the distributed data service (DDS), whose core is the distributed data service server (DSServer) located at the middle tier. TrisoftDDS includes the following components. The client, DSBrowser (Data Service Browser), is a client application that connects to the DSServer over the Internet/intranet and supports selecting and viewing documents; database tables, database queries, queries to geoinformation projects, and files of various types (MS Office documents, images, HTML files, etc.) can act as documents. For complex data sources, the DSBrowser lets users build queries and view and filter data. The server of the distributed data service, DSServer (Data Service Server), is a web application that provides access to the data sources and executes client requests for the selected documents. The tool suite, Toolkit DDS, comprises the catalogue manager, DCMan (Data Catalog Manager), a client-server application for organizing and administering the data catalogue, and the documentor, DSDoc (Data Source Documentor), a client-server application for documenting the procedure by which a required document is produced from a data source. The documentation created by the DSDoc takes the form of metadata tables, which are added to the data catalogue with the help of the catalogue manager.
The functioning logic of a territorially distributed heterogeneous information system based on DDS technology is as follows. The client application, DSBrowser, contacts the DSServer at a specified Internet address. In response, the DSServer sends the client the catalogue of the system's information resources; the catalogue is an XML document that the client browser processes and displays as a tree structure in a special window. The user browses the list and chooses the required documents, and the DSBrowser sends the corresponding request to the DSServer. The DSServer, in turn, consults the metadata tables that describe the document chosen by the user, forwards the request to the corresponding data source, and then returns the result to the client application. The data service catalogue contains the full Internet address of each document. This makes it possible to build catalogues of distributed information resources whose individual parts (documents) reside on different servers in various places on the Internet; catalogues can therefore be hosted with any Internet provider that supports the necessary software. Document lists in the catalogue are grouped into thematic blocks, allowing user-friendly navigation through the information sources of the system. The strength of the TrisoftDDS technology lies, first of all, in the organization and functionality of the distributed data service that processes document requests. The distributed data service hides the complex and, in most cases, unnecessary details of the structure of data sources and the ways of connecting to them from the external user. Instead, the user receives aliases for connections and file directories, the real parameters of which are stored in the registry of the web server that hosts the DSServer. This scheme also offers broad possibilities for data protection and for differentiating access rights to the information. The paper also presents the technology used to build a horizontal, territorially distributed geoinformation system for classifying the level of territorial social and economic development of Departamento Quindío (Colombia). This includes the creation of thematic maps with ESRI software products (ArcView and Erdas), and it shows and offers ways to analyze regional social and economic conditions in order to compare the optimality of decisions. The parameters considered include: dynamics of demographic processes; education; health and nutrition; infrastructure; political and social stability; culture, social and family values; the condition of the environment; political and civil institutions; population income; unemployment and use of labour; and poverty and inequality. The methodology allows other parameters to be included with the help of expert estimation methods and optimization theory, and a module is provided for verifying forecasts through field checks in the districts.

  5. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
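
    As a rough illustration of the modelling idea (a dilation factor computed as a quadratic function of loading vectors via a loading matrix), the sketch below evaluates one plausible quadratic form; the specific formula, matrix, and numbers are assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's exact formulation): dilation factors
# modelled as a quadratic function of per-job loading vectors via a loading matrix.
# The functional form, matrix, and values below are assumptions for demonstration.
import numpy as np

loading = np.array([            # rows: jobs, columns: resources (e.g. CPU, disk, net)
    [0.6, 0.2, 0.1],
    [0.3, 0.5, 0.2],
])
M = np.eye(3)                   # assumed resource-interaction ("loading") matrix

total = loading.sum(axis=0)     # aggregate load placed on each resource

# Dilation of job i grows with the interaction between its own loading vector
# and the aggregate loading of all co-scheduled jobs (quadratic in the loadings).
dilation = 1.0 + loading @ M @ total
print(dilation)                 # per-job slow-down factors, each >= 1
```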

  6. A quantitative model of application slow-down in multi-resource shared systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Kim, Youngjae

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.

  7. Infrastructure of electronic information management

    USGS Publications Warehouse

    Twitchell, G.D.

    2004-01-01

    The information technology infrastructure of an organization, whether it is a private, non-profit, federal, or academic institution, is key to delivering timely and high-quality products and services to its customers and stakeholders. With the evolution of the Internet and the World Wide Web, resources that were once "centralized" in nature are now distributed across the organization in various locations and often remote regions of the country. This presents tremendous challenges to the information technology managers, users, and CEOs of large world-wide corporations who wish to exchange information or get access to resources in today's global marketplace. Several tools and technologies have been developed over recent years that play critical roles in ensuring that the proper information infrastructure exists within the organization to facilitate this global information marketplace. Such tools and technologies as JAVA, Proxy Servers, Virtual Private Networks (VPN), multi-platform database management solutions, high-speed telecommunication technologies (ATM, ISDN, etc.), mass storage devices, and firewall technologies most often determine the organization's success through effective and efficient information infrastructure practices. This session will address several of these technologies and provide options related to those that may exist and can be readily applied within Eastern Europe. © 2004 - IOS Press and the authors. All rights reserved.

  8. The EarthServer project: Exploiting Identity Federations, Science Gateways and Social and Mobile Clients for Big Earth Data Analysis

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Messina, Antonio; Pappalardo, Marco; Passaro, Gianluca

    2013-04-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving tight data/metadata integration. Moreover, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On the server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. Six Lighthouse Applications are being established in EarthServer, each of which poses distinct challenges for Earth Data Analytics: Cryospheric Science, Airborne Science, Atmospheric Science, Geology, Oceanography, and Planetary Science. Altogether, they cover all Earth Science domains; the Planetary Science use case has been added to challenge concepts and standards in non-standard environments. In addition, EarthLook (maintained by Jacobs University) showcases the use of OGC standards in 1D through 5D use cases. In this contribution we will report on the first applications integrated in the EarthServer Science Gateway and on the clients for mobile appliances developed to access them. We will also show how federated and social identity services can allow Big Earth Data providers to expose their data in a distributed environment while keeping strict and fine-grained control over user authentication and authorisation. The degree to which the EarthServer implementation fulfils the recommendations made in the recent TERENA Study on AAA Platforms For Scientific Resources in Europe (https://confluence.terena.org/display/aaastudy/AAA+Study+Home+Page) will also be assessed.
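
    Since WCPS queries are the central client/server interface described above, a minimal sketch of submitting one over HTTP may be useful. The service URL, request parameters, and coverage name are placeholders and assumptions; consult the target deployment's documentation for its actual WCPS entry point.

```python
# Minimal sketch of submitting an OGC WCPS query over HTTP. The server URL,
# request parameters, and coverage name are placeholders/assumptions for
# illustration only, not a documented EarthServer endpoint.
import requests

SERVICE = "https://example.org/rasdaman/ows"           # hypothetical service endpoint
wcps = 'for c in (ExampleCoverage) return encode(c, "csv")'

resp = requests.get(
    SERVICE,
    params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",                 # assumed WCPS processing binding
        "query": wcps,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:200])    # first part of the encoded result
```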

  9. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    PubMed

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  10. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks

    PubMed Central

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-01-01

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where the WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments of WBSNs, issues such as the infancy in the size, capabilities and limited data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of the WBSNs devices, data security is one of the prevailing issues that affects the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users’ medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing data security managed in the remote servers has necessarily become an integral requirement of WBSNs’ applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSNs’ deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models. PMID:28475110

  11. DIANA-microT web server: elucidating microRNA functions through target prediction.

    PubMed

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as a gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed; DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  12. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks.

    PubMed

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-05-05

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where the WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments of WBSNs, issues such as the infancy in the size, capabilities and limited data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of the WBSNs devices, data security is one of the prevailing issues that affects the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users' medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing data security managed in the remote servers has necessarily become an integral requirement of WBSNs' applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSNs' deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models.

  13. Do You Know where Your Data Are?

    ERIC Educational Resources Information Center

    Bennett, Cedric

    2006-01-01

    Many of the information security appliances, devices, and techniques currently in use are designed to keep unwanted users and Internet traffic away from important information assets by denying unauthorized access to servers, databases, networks, storage media, and other underlying technology resources. These approaches employ firewalls, intrusion…

  14. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.

  15. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  16. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
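
    The queuing element of such a model can be illustrated with its textbook building block: the mean response time of a multi-server (M/M/c) queue, computed below via the Erlang C formula. This is a generic illustration of the queueing component only, not the paper's full location and allocation model.

```python
# Illustrative sketch of the queueing building block behind a "minimal response
# time" model: mean response time of an M/M/c queue (Erlang C). This is a textbook
# formula used here for illustration, not the paper's full optimization model.
from math import factorial

def mm_c_response_time(arrival_rate, service_rate, servers):
    """Mean response time (waiting + service) of an M/M/c queue; requires utilization < 1."""
    rho = arrival_rate / (servers * service_rate)
    if rho >= 1:
        raise ValueError("queue is unstable: utilization must be below 1")
    a = arrival_rate / service_rate                      # offered load in Erlangs
    erlang_c_num = a**servers / (factorial(servers) * (1 - rho))
    erlang_c_den = sum(a**k / factorial(k) for k in range(servers)) + erlang_c_num
    p_wait = erlang_c_num / erlang_c_den                 # probability a request must wait
    mean_wait = p_wait / (servers * service_rate - arrival_rate)
    return mean_wait + 1 / service_rate

# e.g. a depot with 3 rescue crews, 2 requests/hour, each served in 1 hour on average
print(mm_c_response_time(arrival_rate=2.0, service_rate=1.0, servers=3))
```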

  17. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  18. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  19. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  20. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
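
    As a rough illustration of serving an interactive queue and a batch queue according to a resource allocation, the sketch below drains two request queues with a weighted share given to interactive clients. The queue contents, share value, and function names are invented for the example and do not reflect the actual implementation described above.

```python
# Illustrative sketch (not the described implementation): merge metadata requests
# from an interactive queue and a batch queue into one service order according to
# an allocation weight, so interactive clients are favored without starving batch.
from collections import deque

interactive_q = deque(["stat /home/a", "open /home/b"])
batch_q = deque(["readdir /scratch/job1", "unlink /scratch/job2", "stat /scratch/job3"])

def drain(interactive_q, batch_q, interactive_share=0.75):
    """Yield requests, giving roughly `interactive_share` of the slots to interactive clients."""
    credit = 0.0
    while interactive_q or batch_q:
        credit += interactive_share
        if credit >= 1.0 and interactive_q:
            credit -= 1.0
            yield ("interactive", interactive_q.popleft())
        elif batch_q:
            yield ("batch", batch_q.popleft())
        elif interactive_q:                      # batch queue empty: keep serving interactive
            yield ("interactive", interactive_q.popleft())

for source, request in drain(interactive_q, batch_q):
    print(source, request)
```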

  1. Performance evaluation of continuity of care records (CCRs): parsing models in a mobile health management system.

    PubMed

    Chen, Hung-Ming; Liou, Yong-Zan

    2014-10-01

    In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR). The CCR is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of CCR parsing models, including the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCRs. Thus, developers can make sensible choices for their target PHR applications when parsing CCRs on mobile devices or servers with different system resources. Furthermore, simulation experiments for four case studies are conducted to compare parsing performance on Android mobile devices and on a server with large quantities of CCR data.
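
    A small illustration of two of the parsing styles compared in the study (tree-based DOM versus streaming SAX/pull-like parsing) is sketched below using Python's standard library. The XML fragment and element names are made up for the example and are not the real CCR schema.

```python
# Illustrative sketch of two parsing styles: a DOM-style parse (whole tree in
# memory) versus a streaming parse (SAX/pull-like), using the standard library.
# The tiny XML fragment and element names are invented, not the actual CCR schema.
import io
import xml.dom.minidom
import xml.etree.ElementTree as ET

ccr_xml = "<CCR><Body><Result><Value>98</Value></Result></Body></CCR>"

# DOM style: the entire document is parsed into a tree before any data is read.
dom = xml.dom.minidom.parseString(ccr_xml)
value_dom = dom.getElementsByTagName("Value")[0].firstChild.data

# Streaming style: elements are handled as they are encountered, which typically
# uses less memory on constrained mobile devices.
value_stream = None
for event, elem in ET.iterparse(io.BytesIO(ccr_xml.encode("utf-8")), events=("end",)):
    if elem.tag == "Value":
        value_stream = elem.text
        break

print(value_dom, value_stream)
```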

  2. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  3. tRNAscan-SE On-line: integrating search and context for analysis of transfer RNA genes.

    PubMed

    Lowe, Todd M; Chan, Patricia P

    2016-07-08

    High-throughput genome sequencing continues to grow the need for rapid, accurate genome annotation and tRNA genes constitute the largest family of essential, ever-present non-coding RNA genes. Newly developed tRNAscan-SE 2.0 has advanced the state-of-the-art methodology in tRNA gene detection and functional prediction, captured by rich new content of the companion Genomic tRNA Database. Previously, web-server tRNA detection was isolated from knowledge of existing tRNAs and their annotation. In this update of the tRNAscan-SE On-line resource, we tie together improvements in tRNA classification with greatly enhanced biological context via dynamically generated links between web server search results, the most relevant genes in the GtRNAdb and interactive, rich genome context provided by UCSC genome browsers. The tRNAscan-SE On-line web server can be accessed at http://trna.ucsc.edu/tRNAscan-SE/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes

    NASA Astrophysics Data System (ADS)

    Yuan, Gary; Gygi, Francois

    2011-03-01

    ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT, and the Exciting Code, with support planned for many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, related to parsing, formats, data management, search, comparison and analyses. Additionally, an early experiment in the distribution of V&V ESTEST servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.

  5. 76 FR 63928 - Circulatory System Devices Panel of the Medical Devices Advisory Committee; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ... medical professionals. The database is a Web- based server that contains software, which receives data transmitted from the electronics unit, and presents the data for review by medical professionals. FDA intends...

  6. Energy Systems Integration Facility | NREL

    Science.gov Websites

    Web page excerpt for NREL's Energy Systems Integration Facility, whose research influences how electric power systems will operate far into the future. Highlighted resources include a 2017 journal article on wind and solar resource data sets and an Innovation Incubator technical report on a liquid-submerged server for high-efficiency data centers.

  7. It's Bandwagon Time

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2008-01-01

    Colleges and universities worldwide are turning to the hosted Software as a Service (SaaS) model and saying goodbye to issues like patch management and server optimization. Generally, SaaS involves applications (such as customer relationship management (CRM), enterprise resource planning (ERP), and e-mail) offered by a hosting provider. In this…

  8. The PARIGA server for real time filtering and analysis of reciprocal BLAST results.

    PubMed

    Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna

    2013-01-01

    BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences onto a database to more complex procedures such as clustering or annotation. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure, and tools for parsing or filtering BLAST results for different purposes are then required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of Python-serialized objects, results can be filtered according to several parameters in real time, without re-running the process and without additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations fast and easy to use.
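
    The reciprocal-best-hit filter mentioned above can be illustrated in a few lines: given each direction's best-hit mapping, keep only the pairs that point back at each other. This is a generic formulation for illustration, with invented identifiers, not PARIGA's internal implementation.

```python
# Illustrative sketch of reciprocal-best-hit filtering between two sequence sets,
# given each direction's best-hit mapping (query -> best target). Identifiers are
# invented for the example; this is not PARIGA's internal code.
best_a_to_b = {"geneA1": "geneB7", "geneA2": "geneB3", "geneA3": "geneB9"}
best_b_to_a = {"geneB7": "geneA1", "geneB3": "geneA5", "geneB9": "geneA3"}

reciprocal_best_hits = [
    (a, b) for a, b in best_a_to_b.items()
    if best_b_to_a.get(b) == a          # keep pairs that are each other's best hit
]
print(reciprocal_best_hits)             # [('geneA1', 'geneB7'), ('geneA3', 'geneB9')]
```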

  9. Outcrop descriptions and fossils from the Upper Cretaceous Frontier Formation, Wind River Basin and adjacent areas, Wyoming: Chapter 11 in Petroleum systems and geologic assessment of oil and gas resources in the Wind River Basin Province, Wyoming

    USGS Publications Warehouse

    Merewether, E.A.; Cobban, W.A.

    2007-01-01

    The index maps used to show locations of outcrop sections and fossil collections are from scanned versions of U.S. Geological Survey topographic maps of various scales and were obtained from TerraServer®. The portion of each map used depended on the areal distribution of the localities involved. The named quadrangles used for locality descriptions, however, all refer to 7½-minute, 1:24,000-scale quadrangles (for example, “Alcova”). The aerial photographs also are from TerraServer®; http://www.terraserver.com/.

  10. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    USGS Publications Warehouse

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  11. Research on a Method of Geographical Information Service Load Balancing

    NASA Astrophysics Data System (ADS)

    Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao

    2018-05-01

    With the development of geographical information service technologies, how to achieve intelligent scheduling and high-concurrency access to geographical information service resources through load balancing is a focal point of current research. This paper presents a dynamic load-balancing algorithm. In the algorithm, types of geographical information service are matched with the corresponding server group; the RED algorithm is then combined with a double-threshold method to judge the load state of each server node; finally, the service is scheduled based on weighted probabilities over a given period. Lastly, an experimental system is built on a server cluster, which demonstrates the effectiveness of the method presented in this paper.
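
    A rough sketch of the two ingredients described above (a double-threshold judgment of each node's load state followed by weighted probabilistic scheduling) is given below; the threshold values, weighting rule, and node names are assumptions for the example, not those of the paper.

```python
# Illustrative sketch: classify each server node's load with two thresholds, then
# pick a node by weighted probability. Thresholds, weighting rule, and node names
# are assumptions for the example only.
import random

LOW, HIGH = 0.4, 0.8                       # assumed double thresholds on utilization

def weight(util):
    """Idle nodes get full weight, moderately loaded nodes reduced weight, overloaded none."""
    if util < LOW:
        return 1.0
    if util < HIGH:
        return (HIGH - util) / (HIGH - LOW)
    return 0.0

servers = {"gis-node-1": 0.25, "gis-node-2": 0.55, "gis-node-3": 0.9}  # current utilizations
weights = [weight(u) for u in servers.values()]

chosen = random.choices(list(servers), weights=weights, k=1)[0]
print("dispatch request to", chosen)
```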

  12. Towards Microeconomic Resource Sharing in End System Multicast Networks Based on Walrasian General Equilibrium

    NASA Astrophysics Data System (ADS)

    Rezvani, Mohammad Hossein; Analoui, Morteza

    2010-11-01

    We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as producers of the economy. Likewise, the end-users can be viewed as consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations are Pareto optimal in the sense that the social welfare of the users is maximized.
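
    The abstract does not give the price-update rule, but a standard way to reach a Walrasian general equilibrium is tatonnement: raise the price of a service when its demand exceeds supply and lower it otherwise. The sketch below is a minimal, assumed illustration of that idea using an invented single-service economy, not the paper's mechanism.

```python
# Toy tatonnement sketch: prices rise where demand exceeds supply and fall
# otherwise until excess demand (approximately) vanishes. The demand and
# supply functions and the service name are invented for illustration.
def tatonnement(demand, supply, prices, step=0.05, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        excess = {s: demand(s, prices) - supply(s, prices) for s in prices}
        if all(abs(z) < tol for z in excess.values()):
            break
        for s in prices:
            prices[s] = max(0.0, prices[s] + step * excess[s])
    return prices

# Single-service toy economy: linear demand, fixed relaying capacity of 4 units.
demand = lambda s, p: max(0.0, 10.0 - 2.0 * p[s])
supply = lambda s, p: 4.0
print(tatonnement(demand, supply, {"stream-hd": 1.0}))  # converges near price 3.0
```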

  13. E-Valuation: Pricing E-Learning.

    ERIC Educational Resources Information Center

    Hartley, Darin E.

    2001-01-01

    Looks at the ways that electronic learning is priced in organizations and the factors that influence the pricing. Discusses pros and cons of several pricing options: price per seat, subscription, pay as you go, per server, free, and payment based on time. (JOW)

  14. 12 CFR 1732.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... home computer systems of an employee; or (4) Whether the information is active or inactive. (k) Record... (e.g., e-mail, databases, spreadsheets, PowerPoint presentations, electronic reporting systems... information is stored or located, including network servers, desktop or laptop computers and handheld...

  15. Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System

    PubMed Central

    Price, Ronald N.; Hernandez, Kim

    2001-01-01

    Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for a standardized application “look and feel.” The portal was constructed using readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal (security) data structures. Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing SUN Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. In the past 12 months, the EIP has supported development of more than 88 applications and is used by more than 2,200 users.

  16. Contingency Contractor Optimization Phase 3 Sustainment Platform Requirements - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa

    Sandia National Laboratories (Sandia) is in Phase 3 Sustainment of development of a prototype tool, currently referred to as the Contingency Contractor Optimization Tool - Prototype (CCOT-P), under the direction of OSD Program Support. CCOT-P is intended to help provide senior Department of Defense (DoD) leaders with comprehensive insight into the global availability, readiness and capabilities of the Total Force Mix. CCOT-P will allow senior decision makers to quickly and accurately assess the impacts, risks and mitigating strategies for proposed changes to force/capabilities assignments, apportionments and allocation options, focusing specifically on contingency contractor planning. During Phase 2 of the program, conducted during fiscal year 2012, Sandia developed an electronic storyboard prototype of the Contingency Contractor Optimization Tool that can be used for communication with senior decision makers and other Operational Contract Support (OCS) stakeholders. Phase 3 used feedback from demonstrations of the electronic storyboard prototype to develop an engineering prototype for planners to evaluate. Sandia worked with the DoD and Joint Chiefs of Staff strategic planning community to get feedback and input to ensure that the engineering prototype was developed to closely align with future planning needs. The intended deployment environment was also a key consideration as this prototype was developed. Initial release of the engineering prototype was done on servers at Sandia in the middle of Phase 3. In 2013, the tool was installed on a production pilot server managed by the OUSD(AT&L) eBusiness Center. The purpose of this document is to specify the CCOT-P engineering prototype platform requirements as of May 2016. Sandia developed the CCOT-P engineering prototype using common technologies to minimize the likelihood of deployment issues. The CCOT-P engineering prototype was architected and designed to be as independent as possible of the major deployment components such as the server hardware, the server operating system, the database, and the web server. This document describes the platform requirements, the architecture, and the implementation details of the CCOT-P engineering prototype.

  17. An Internet Gopher to Support Graduate Education and Professional Development for School Administrators.

    ERIC Educational Resources Information Center

    Gonzalez, Josue M.

    1995-01-01

    Describes the design and installation of an Internet gopher server to support classroom instruction and professional development projects in a graduate college of education. Topics include use by administrators, selecting the most appropriate technology, hardware and software selection, and informational resources of the gopher. (Author/LRW)

  18. Outsourcing. CAUT Briefing Note

    ERIC Educational Resources Information Center

    Canadian Association of University Teachers, 2015

    2015-01-01

    Increasingly, university and college administrators are contracting out their institutions' operational information technology (IT) needs. Specifically, this means moving from an IT system with a server based at the institution and an in-house IT support staff to a cloud-based system outside the institution where most of the staff resources are…

  19. Networked Multimedia: Are We There Yet?

    ERIC Educational Resources Information Center

    Wyman, Bill

    1997-01-01

    Discusses the technological advances in electronic communication over the last 30 years. Touches on various real-time interactive multimedia communications, including video on demand, videocassettes, laser discs, CD-ROM, a history of networking, terminal/host and client/server networking, intraoperability and interoperability and multimedia…

  20. DOMe: A deduplication optimization method for the NewSQL database backups

    PubMed Central

    Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng

    2017-01-01

    Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system that is now being used more and more widely. NewSQL systems improve data reliability by periodically backing up in-memory data, which produces a large amount of duplicated data. Traditional deduplication methods are not optimized for NewSQL server systems and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research points out that future NewSQL servers will have thousands of CPU cores, large DRAM and huge NVRAM, so how to use these hardware resources to optimize deduplication performance is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To exploit the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of traditional deduplication methods. H-Store is used as a typical NewSQL database system to implement the DOMe method. DOMe is analyzed experimentally with two representative backup datasets. The experimental results show that: 1) DOMe reduces duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm, achieving a speedup of up to 18 against a theoretical server speedup ratio of 20.8; and 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
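
    The abstract's two key ingredients, parallel fingerprinting and a pure in-memory fingerprint index, can be illustrated with the Python analogue below; the paper itself uses a Java fork-join pool and content-defined chunking (CDC), so the fixed-size chunking and chunk size here are simplifying assumptions.

```python
# Python analogue of the DOMe idea (the paper uses a Java fork-join pool and
# content-defined chunking): fingerprint chunks in parallel and keep the
# fingerprint index as a pure in-memory hash table. Fixed-size chunking and
# the chunk size are simplifying assumptions.
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 8 * 1024  # assumed chunk size in bytes

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

def deduplicate(data: bytes, workers: int = 4):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    index, unique = {}, []           # in-memory fingerprint index + unique chunks
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for chunk, fp in zip(chunks, pool.map(fingerprint, chunks)):
            if fp not in index:
                index[fp] = len(unique)
                unique.append(chunk)
    return unique, index

if __name__ == "__main__":
    backup = b"abcdefgh" * 4096     # highly redundant toy backup image
    unique, _ = deduplicate(backup)
    print(len(unique), "unique chunks out of", len(backup) // CHUNK)
```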

  1. NASA Cloud-Based Climate Data Services

    NASA Astrophysics Data System (ADS)

    McInerney, M. A.; Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, W. D., III; Thompson, J. H.; Gill, R.; Jasen, J. E.; Samowich, B.; Pobre, Z.; Salmon, E. M.; Rumney, G.; Schardt, T. D.

    2012-12-01

    Cloud-based scientific data services are becoming an important part of NASA's mission. Our technological response is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service (VaaS). A virtual climate data server (vCDS) is an Open Archive Information System (OAIS) compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have deployed vCDS Version 1.0 in the Amazon EC2 cloud using S3 object storage and are using the system to deliver a subset of NASA's Intergovernmental Panel on Climate Change (IPCC) data products to the latest CentOS federated version of Earth System Grid Federation (ESGF), which is also running in the Amazon cloud. vCDS-managed objects are exposed to ESGF through FUSE (Filesystem in User Space), which presents a POSIX-compliant filesystem abstraction to applications such as the ESGF server that require such an interface. A vCDS manages data as a distinguished collection for a person, project, lab, or other logical unit. A vCDS can manage a collection across multiple storage resources using rules and microservices to enforce collection policies. And a vCDS can federate with other vCDSs to manage multiple collections over multiple resources, thereby creating what can be thought of as an ecosystem of managed collections. With the vCDS approach, we are trying to enable the full information lifecycle management of scientific data collections and make tractable the task of providing diverse climate data services. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. (Figure captions: (A) vCDS/ESG system stack. (B) Conceptual architecture for NASA cloud-based data services.)

  2. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.

  3. Computational Prediction of the Immunomodulatory Potential of RNA Sequences.

    PubMed

    Nagpal, Gandharva; Chaudhary, Kumardeep; Dhanda, Sandeep Kumar; Raghava, Gajendra Pal Singh

    2017-01-01

    Advances in the knowledge of various roles played by non-coding RNAs have stimulated the application of RNA molecules as therapeutics. Among these molecules, miRNA, siRNA, and CRISPR-Cas9 associated gRNA have been identified as the most potent RNA molecule classes with diverse therapeutic applications. One of the major limitations of RNA-based therapeutics is immunotoxicity of RNA molecules as it may induce the innate immune system. In contrast, RNA molecules that are potent immunostimulators are strong candidates for use in vaccine adjuvants. Thus, it is important to understand the immunotoxic or immunostimulatory potential of these RNA molecules. The experimental techniques for determining immunostimulatory potential of siRNAs are time- and resource-consuming. To overcome this limitation, recently our group has developed a web-based server "imRNA" for predicting the immunomodulatory potential of RNA sequences. This server integrates a number of modules that allow users to perform various tasks including (1) generation of RNA analogs with reduced immunotoxicity, (2) identification of highly immunostimulatory regions in RNA sequence, and (3) virtual screening. This server may also assist users in the identification of minimum mutations required in a given RNA sequence to minimize its immunomodulatory potential that is required for designing RNA-based therapeutics. Besides, the server can be used for designing RNA-based vaccine adjuvants as it may assist users in the identification of mutations required for increasing immunomodulatory potential of a given RNA sequence. In summary, this chapter describes major applications of the "imRNA" server in designing RNA-based therapeutics and vaccine adjuvants (http://www.imtech.res.in/raghava/imrna/).

  4. UC Irvine CHRS Real-time Global Satellite Precipitation Monitoring System (G-WADI PERSIANN-CCS GeoServer) for Hydrometeorological Applications

    NASA Astrophysics Data System (ADS)

    Sorooshian, S.; Hsu, K. L.; Gao, X.; Imam, B.; Nguyen, P.; Braithwaite, D.; Logan, W. S.; Mishra, A.

    2015-12-01

    The G-WADI Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been successfully developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California Irvine in collaboration with the UNESCO's International Hydrological Programme (IHP) and a number of its international centers. The system employs state-of-the-art technologies in remote sensing and artificial intelligence to estimate precipitation globally from satellite imagery in real-time and high spatiotemporal resolution (4km, hourly). It offers graphical tools and data service to help the user in emergency planning and management for natural disasters related to hydrological processes. The G-WADI PERSIANN-CCS GeoServer has been upgraded with new user-friendly functionalities. The precipitation data generated by the GeoServer is disseminated to the user community through support provided by ICIWaRM (The International Center for Integrated Water Resources Management), UNESCO and UC Irvine. Recently a number of new applications for mobile devices have been developed by our students. The RainMapper has been available on App Store and Google Play for the real-time PERSIANN-CCS observations. A global crowd sourced rainfall reporting system named iRain has also been developed to engage the public globally to provide qualitative information about real-time precipitation in their location which will be useful in improving the quality of the PERSIANN-CCS data. A number of recent examples of the application and use of the G-WADI PERSIANN-CCS GeoServer information will also be presented.

  5. VoIP attacks detection engine based on neural network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2015-05-01

    Security is crucial for any system nowadays, especially communications. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP). It is an open standard used by many kinds of applications, both open-source and proprietary. Its high penetration and text-based design have made SIP the number-one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with attackers and to detect potential malicious activity, the security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and subsequent evaluation can easily overwhelm the security administrator, especially in networks with many SIP servers and users or with logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers detection of VoIP attacks through a distributed network of nodes; an aggregation server then analyzes the gathered data with an artificial neural network. The artificial neural network is a multilayer perceptron trained on a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are collected by a distributed network of detection nodes, each of which contains a honeypot application and a traffic monitoring mechanism. Aggregating the data from each node creates the input for the neural network. Automatic classification on a centralized server with a low false-positive rate reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic and also maintains all detection nodes.
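
    A minimal sketch of the classification step, not the authors' code: a multilayer perceptron trained on feature vectors aggregated from honeypot nodes. The feature layout and the synthetic data below are assumptions made purely for illustration.

```python
# Assumed illustration of the classification step: a multilayer perceptron
# trained on synthetic per-flow features; neither the features nor the data
# come from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [requests/s, distinct extensions probed,
#                         failed-auth ratio, mean inter-request gap (s)]
benign = rng.normal([2, 1, 0.05, 1.5], [0.5, 0.5, 0.02, 0.3], size=(500, 4))
attack = rng.normal([40, 30, 0.8, 0.05], [8.0, 6.0, 0.1, 0.02], size=(500, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```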

  6. Information architecture for a federated health record server.

    PubMed

    Kalra, D; Lloyd, D; Austin, T; O'Connor, A; Patterson, D; Ingram, D

    2002-01-01

    This paper describes the information models that have been used to implement a federated health record server and to deploy it in a live clinical setting. The authors, working at the Centre for Health Informatics and Multiprofessional Education (University College London), have built up over a decade of experience within Europe on the requirements and information models that are needed to underpin comprehensive multi-professional electronic health records. This work has involved collaboration with a wide range of health care and informatics organisations and partners in the healthcare computing industry across Europe though the EU Health Telematics projects GEHR, Synapses, EHCR-SupA, SynEx and Medicate. The resulting architecture models have fed into recent European standardisation work in this area, such as CEN TC/251 ENV 13606. UCL has implemented a federated health record server based on these models which is now running in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. The information models described in this paper reflect a refinement based on this implementation experience.

  7. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  8. Trends in free WWW-based E-learning Modules seen from the Learning Resource Server Medicine (LRSMed).

    PubMed

    Stausberg, Jürgen; Geueke, Martin; Bludßat, Kevin

    2005-01-01

    Despite waning enthusiasm for E-learning, a lot of material is available on the World Wide Web (WWW) free of charge. This material is collected and systematically described by services such as the Learning Resource Server Medicine (LRSMed) at http://mmedia.medizin.uni-essen.de/portal/. With the LRSMed, E-learning modules are made available to medical students by means of a metadata description that can be used for a catalogue search. The number of resources included has risen enormously, from 100 in 1999 to 805 today; 2004 in particular saw an exponential increase in the LRSMed's content. Anatomy is still the field with the largest amount of available material, but general medicine has improved its position over the years and is now second. Technically and didactically simple material such as scripts, textbooks, and link lists (called info services) still dominates. As in 1999, there is not one module which could truly be called a tutorial dialogue. Simple material cannot replace face-to-face teaching, but it could be combined with conventional courses to establish some kind of blended learning. The scene of free E-learning modules on the WWW is ready to meet current challenges for efficient training of students and continuing education in medicine.

  9. Electronic Attack Platform Placement Optimization

    DTIC Science & Technology

    2014-09-01

    [Snippets from the report's list of figures: processing in VBA; client-server operation using two different Excel application instances; the VBA IDE contained within all Microsoft Office products; scheduling with MS Excel's Application.OnTime method; and the WINSOCK API functions needed to use TCP via VBA.]

  10. Help Is on the WAIS.

    ERIC Educational Resources Information Center

    Lukanuski, Mary

    1992-01-01

    Describes the development of the WAIS (Wide Area Information Servers) protocol, a system that allows users access to personal, corporate, and commercial electronic information from one interface. The availability of WAIS on the Internet and the reactions of users are addressed. Several problems are considered, including funding, hardware…

  11. DEMS - a second generation diabetes electronic management system.

    PubMed

    Gorman, C A; Zimmerman, B R; Smith, S A; Dinneen, S F; Knudsen, J B; Holm, D; Jorgensen, B; Bjornsen, S; Planet, K; Hanson, P; Rizza, R A

    2000-06-01

    Diabetes electronic management system (DEMS) is a component-based client/server application, written in Visual C++ and Visual Basic, with the database server running Sybase System 11. DEMS is built entirely with a combination of dynamic link libraries (DLLs) and ActiveX components - the only exception is the DEMS.exe. DEMS is a chronic disease management system for patients with diabetes. It is used at the point of care by all members of the diabetes team including physicians, nurses, dieticians, clinical assistants and educators. The system is designed for maximum clinical efficiency and facilitates appropriately supervised delegation of care. Dispersed clinical sites may be supervised from a central location. The system is designed for ease of navigation; immediate provision of many types of automatically generated reports; quality audits; aids to compliance with good care guidelines; and alerts, advisories, prompts, and warnings that guide the care provider. The system now contains data on over 34000 patients and is in daily use at multiple sites.

  12. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app. We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  13. Two interactive Bioinformatics courses at the Bielefeld University Bioinformatics Server.

    PubMed

    Sczyrba, Alexander; Konermann, Susanne; Giegerich, Robert

    2008-05-01

    Conferences in computational biology continue to provide tutorials on classical and new methods in the field. This can be taken as an indicator that education is still a bottleneck in our field's process of becoming an established scientific discipline. Bielefeld University has been one of the early providers of bioinformatics education, both locally and via the internet. The Bielefeld Bioinformatics Server (BiBiServ) offers a variety of older and new materials. Here, we report on two online courses made available recently, one introductory and one on the advanced level: (i) SADR: Sequence Analysis with Distributed Resources (http://bibiserv.techfak.uni-bielefeld.de/sadr/) and (ii) ADP: Algebraic Dynamic Programming in Bioinformatics (http://bibiserv.techfak.uni-bielefeld.de/dpcourse/).

  14. Utilization of Virtual Server Technology in Mission Operations

    NASA Technical Reports Server (NTRS)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  15. Fault-Tolerant Local-Area Network

    NASA Technical Reports Server (NTRS)

    Morales, Sergio; Friedman, Gary L.

    1988-01-01

    Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to network-node devices such as work stations, print servers, and file servers. Slave switches respond to commands from the master switch, connecting nodes to the two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.

  16. Restrictions impeding web-based courses: a survey of publishers' variation in authorising access to high quality on-line literature.

    PubMed

    Langlois, Michele; Heller, Richard F; Edwards, Richard; Lyratzopoulos, Georgios; Sandars, John

    2004-04-07

    Web-based delivery of educational programmes is becoming increasingly popular and is expected to expand, especially in medicine. The successful implementation of these programmes relies on their ability to provide access to web-based materials, including high quality published work. Publishers' responses to requests to access health literature in the context of developing an electronic Master's degree course are described. Two different permission requests were submitted to publishers. The first was to store an electronic version of a journal article, to which we subscribe, on a secure password-protected server. The second was to reproduce extracts of published material on password-protected web pages and CD-ROM. Eight of 16 publishers were willing to grant permission to store electronic versions of articles without levying charges additional to the subscription. Twenty of 35 publishers gave permission to reproduce extracts of published work at no fee. Publishers' responses to the requests for access to published material were highly variable. This may be influenced by vague terminology within the 'fair dealing' provision of copyright legislation, which seems to leave it open to individual interpretation. Considerable resource costs were incurred by the exercise. Time was expended by us on research to identify informed representatives within the publishing organisations, on 'chase-ups' of requests, and on seeking alternative examples when publishers were uncooperative, and by publishers when dealing with numerous permission requests. Financial costs were incurred by both parties through the additional staffing and paperwork generated by the permission process, the latter including costs borne purely by educators through the necessary provision of photocopied 'course packs' when no suitable alternative material could be found. Finally, we discuss the resulting bias in material towards readily available electronic resources caused by publishers' uncooperative stance, and we encourage initiatives that aim to improve open electronic access. The permission request process has been expensive and has resulted in reduced access for students to the relevant literature. Variations in the responses from publishers suggest that, for educational purposes, common policies could be agreed and unnecessary restrictions removed in the future.

  17. An Integrated Approach to Engineering Education in a Minority Community

    NASA Technical Reports Server (NTRS)

    Taylor, Bill

    1998-01-01

    Northeastern New Mexico epitomizes regions which are economically depressed, rural, and predominantly Hispanic. New Mexico Highlands University (NMHU), with a small student population of approximately 2800, offers a familiar environment attracting students who might otherwise not attend college. An outreach computer network of minority schools was created in northeastern New Mexico with NASA funding. Rural and urban minority schools gained electronic access to each other, to computer resources, to technical help at New Mexico Highlands University and gained access to the world via the Internet. This outreach program was initiated in the fall of 1992 in an effort to attract and to involve minority students in Engineering and the Mathematical Sciences. We installed 56 Kbs Internet connections to eight elementary schools, two middle schools, two high schools, a public library (servicing the home schooling community) and an International Baccalaureate school. For another fourteen rural schools, we provided computers and free dial-up service to servers on the New Mexico Highlands University campus.

  18. AMON: Transition to real-time operations

    NASA Astrophysics Data System (ADS)

    Cowen, D. F.; Keivani, A.; Tešić, G.

    2016-04-01

    The Astrophysical Multimessenger Observatory Network (AMON) will link the world's leading high-energy neutrino, cosmic-ray, gamma-ray and gravitational wave observatories by performing real-time coincidence searches for multimessenger sources from observatories' subthreshold data streams. The resulting coincidences will be distributed to interested parties in the form of electronic alerts for real-time follow-up observation. We will present the science case, design elements, current and projected partner observatories, status of the AMON project, and an initial AMON-enabled analysis. The prototype of the AMON server has been online since August 2014 and processing archival data. Currently, we are deploying new high-uptime servers and will be ready to start issuing alerts as early as winter 2015/16.

  19. PropeR revisited.

    PubMed

    van der Linden, Helma; Talmon, Jan; Tange, Huibert; Grimson, Jane; Hasman, Arie

    2005-03-01

    The PropeR EHR system (PropeRWeb) is an electronic health record (EHR) system for multidisciplinary use in extramural care of stroke patients. The system is built from existing open source components and is based on open standards. It is implemented as a web application using servlets and Java Server Pages (JSPs), with a CORBA connection to the database servers, which are based on the OMG HDTF specifications. PropeRWeb is a generic system which can be readily customized for use in a variety of clinical domains. The system proved to be stable and flexible, although some aspects (among others, user friendliness) could be improved. These improvements are currently under development in a second version.

  20. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  1. Using Cluster Analysis for Data Mining in Educational Technology Research

    ERIC Educational Resources Information Center

    Antonenko, Pavlo D.; Toy, Serkan; Niederhauser, Dale S.

    2012-01-01

    Cluster analysis is a group of statistical methods that has great potential for analyzing the vast amounts of web server-log data to understand student learning from hyperlinked information resources. In this methodological paper we provide an introduction to cluster analysis for educational technology researchers and illustrate its use through…

  2. Google Scholar Usage: An Academic Library's Experience

    ERIC Educational Resources Information Center

    Wang, Ya; Howard, Pamela

    2012-01-01

    Google Scholar is a free service that provides a simple way to broadly search for scholarly works and to connect patrons with the resources libraries provide. The researchers in this study analyzed Google Scholar usage data from 2006 for three library tools at San Francisco State University: SFX link resolver, Web Access Management proxy server,…

  3. 78 FR 41868 - Energy Conservation Program for Consumer Products and Certain Commercial and Industrial Equipment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... services and manage networked resources for client devices such as desktop and laptop computers. These... include desktop or laptop computers, which are not primarily accessed via network connections. DOE seeks... Determination of Computer Servers as a Covered Consumer Product AGENCY: Office of Energy Efficiency and...

  4. TAM 2.0: tool for MicroRNA set analysis.

    PubMed

    Li, Jianwei; Han, Xiaofen; Wan, Yanping; Zhang, Shan; Zhao, Yingshu; Fan, Rui; Cui, Qinghua; Zhou, Yuan

    2018-06-06

    With the rapid accumulation of high-throughput microRNA (miRNA) expression profiles, up-to-date resources for analyzing the functional and disease associations of miRNAs are increasingly in demand. We here describe the updated server TAM 2.0 for miRNA set enrichment analysis. Through manual curation of over 9000 papers, a more than two-fold growth of reference miRNA sets has been achieved in comparison with the previous TAM, covering 9945 and 1584 newly collected miRNA-disease and miRNA-function associations, respectively. Moreover, TAM 2.0 allows users not only to test the functional and disease annotations of miRNAs by overrepresentation analysis, but also to compare the input de-regulated miRNAs with those de-regulated in other disease conditions via correlation analysis. Finally, functions for miRNA set query and result visualization are also provided in the TAM 2.0 server to serve the community. The TAM 2.0 web server is freely accessible at http://www.scse.hebut.edu.cn/tam/ or http://www.lirmed.com/tam2/.
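
    The core of an overrepresentation analysis of this kind is a hypergeometric test of the overlap between an input miRNA list and a curated reference set. The sketch below is an assumed, minimal illustration; the miRNA names, background size, and reference set are invented, not TAM data.

```python
# Assumed, minimal illustration of miRNA set overrepresentation: a one-sided
# hypergeometric test of the overlap between an input list and a reference
# set. The miRNA names and background size are invented, not TAM data.
from scipy.stats import hypergeom

def enrichment_p(input_set, reference_set, background_size):
    """P(overlap >= observed) under random draws from the background."""
    overlap = len(input_set & reference_set)
    return hypergeom.sf(overlap - 1, background_size,
                        len(reference_set), len(input_set))

deregulated = {"hsa-mir-21", "hsa-mir-155", "hsa-mir-223", "hsa-mir-7"}
reference = {"hsa-mir-21", "hsa-mir-155", "hsa-mir-223", "hsa-mir-146a"}
print(enrichment_p(deregulated, reference, background_size=2000))
```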

  5. Viewing ISS Data in Real Time via the Internet

    NASA Technical Reports Server (NTRS)

    Myers, Gerry; Chamberlain, Jim

    2004-01-01

    EZStream is a computer program that enables authorized users at diverse terrestrial locations to view, in real time, data generated by scientific payloads aboard the International Space Station (ISS). The only computation/communication resource needed for use of EZStream is a computer equipped with standard Web-browser software and a connection to the Internet. EZStream runs in conjunction with the TReK software, described in a prior NASA Tech Briefs article, that coordinates multiple streams of data for the ground communication system of the ISS. EZStream includes server components that interact with TReK within the ISS ground communication system and client components that reside in the users' remote computers. Once an authorized client has logged in, a server component of EZStream pulls the requested data from a TReK application-program interface and sends the data to the client. Future EZStream enhancements will include (1) extensions that enable the server to receive and process arbitrary data streams on its own and (2) a Web-based graphical-user-interface-building subprogram that enables a client who lacks programming expertise to create customized display Web pages.

  6. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less busy, nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node has been idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
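
    A minimal sketch of this receiver-initiated transfer policy is shown below; the threshold value and the exact way the queue length and service-rate ratio are combined are illustrative assumptions, not the patented parameters.

```python
# Illustrative sketch of the receiver-initiated transfer policy; the threshold
# and the exact combination of queue length and service-rate ratio are
# assumptions, not the patented parameters.
THRESHOLD = 0.5   # nodes below this workload indicator volunteer to receive jobs

def workload(node):
    """Workload indicator combining local queue length and service-rate ratio."""
    q = len(node["queue"])
    return (q / (q + 1.0)) * node["service_ratio"]

def pull_job(receiver, peers):
    """Called when the receiver finishes a job or its wakeup timer fires:
    pull one queued job from the most heavily burdened peer."""
    if workload(receiver) >= THRESHOLD:
        return None
    busy = max(peers, key=workload)
    if workload(busy) > THRESHOLD and busy["queue"]:
        job = busy["queue"].pop(0)
        receiver["queue"].append(job)
        return job
    return None

idle_node = {"queue": [], "service_ratio": 1.0}
busy_node = {"queue": ["job-a", "job-b", "job-c"], "service_ratio": 1.0}
print(pull_job(idle_node, [busy_node]))   # -> 'job-a'
```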

  7. Monitoring Evolution at CERN

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.

    2015-12-01

    Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement and new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB of disk servers, 100 PB of tape robots, and 150 high performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give more flexibility to Service Managers in how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset with new open source technologies with large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.

  8. The Scientific Uplink and User Support System for SIRTF

    NASA Astrophysics Data System (ADS)

    Heinrichsen, I.; Chavez, J.; Hartley, B.; Mei, Y.; Potts, S.; Roby, T.; Turek, G.; Valjavec, E.; Wu, X.

    The Space Infrared Telescope Facility (SIRTF) is one of NASA's Great Observatory missions, scheduled for launch in 2001. As such its ground segment design is driven by the requirement to provide strong support for the entire astronomical community starting with the call for Legacy Proposals in early 2000. In this contribution, we present the astronomical user interface and the design of the server software that comprises the Scientific Uplink System for SIRTF. The software architecture is split into three major parts: A front-end Java application deployed to the astronomical community providing the capabilities to visualize and edit proposals and the associated lists of observations. This observer toolkit provides templates to define all parameters necessary to carry out the required observations. A specialized version of this software, based on the same overall architecture, is used internal to the SIRTF Science Center to prepare calibration and engineering observations. A Weblogic (TM) based middleware component brokers the transactions with the servers, astronomical image and catalog sources as well as the SIRTF operational databases. Several server systems perform the necessary computations, to obtain resource estimates, target visibilities and to access the instrument models for signal to noise calculations. The same server software is used internally at a later stage to derive the detailed command sequences needed by the SIRTF instruments and spacecraft to execute a given observation.

  9. aLeaves facilitates on-demand exploration of metazoan gene family trees on MAFFT sequence alignment server with enhanced interactivity.

    PubMed

    Kuraku, Shigehiro; Zmasek, Christian M; Nishimura, Osamu; Katoh, Kazutaka

    2013-07-01

    We report a new web server, aLeaves (http://aleaves.cdb.riken.jp/), for homologue collection from diverse animal genomes. In molecular comparative studies involving multiple species, orthology identification is the basis on which most subsequent biological analyses rely. It can be achieved most accurately by explicit phylogenetic inference. More and more species are subjected to large-scale sequencing, but the resultant resources are scattered in independent project-based, and multi-species, but separate, web sites. This complicates data access and is becoming a serious barrier to the comprehensiveness of molecular phylogenetic analysis. aLeaves, launched to overcome this difficulty, collects sequences similar to an input query sequence from various data sources. The collected sequences can be passed on to the MAFFT sequence alignment server (http://mafft.cbrc.jp/alignment/server/), which has been significantly improved in interactivity. This update enables switching between (i) sequence selection using the Archaeopteryx tree viewer, (ii) multiple sequence alignment and (iii) tree inference. This can be performed as a loop until one reaches a sensible data set, which minimizes redundancy for better visibility and handling in phylogenetic inference while covering relevant taxa. The work flow achieved by the seamless link between aLeaves and MAFFT provides a convenient online platform to address various questions in zoology and evolutionary biology.

  10. Simplifying the Analysis of Data from Multiple Heliophysics Instruments and Missions

    NASA Astrophysics Data System (ADS)

    Bazell, D.; Vandegriff, J. D.

    2014-12-01

    Understanding the intertwined plasma, particles and fields connecting the Sun and the Earth requires combining data from many diverse sources, but there are still many technological barriers that complicate the merging of data from different instruments and missions. We present an emerging data serving capability that provides a uniform way to access heterogeneous and distributed data. The goal of our data server is to provide a standardized data access mechanism that is identical for data of any format and layout (CDF, custom binary, FITS, netCDF, CSV and other flavors of ASCII, etc.). Data remain in their original format and location (i.e., at instrument team sites or existing data centers), and our data server delivers a dynamically reformatted view of the data. Scientists can then use tools (clients that talk to the server) that offer a single interface for browsing, analyzing or downloading many different contemporary and legacy heliophysics data sets. Our current server accesses many CDF data resources at CDAWeb, as well as multiple other instrument team sites. Our web service will be deployed on the Amazon Cloud at http://datashop.elasticbeanstalk.com/. Two basic clients will also be demonstrated: one in Java and one in IDL. Python, Perl, and Matlab clients are also planned. Complex missions such as Solar Orbiter and Solar Probe Plus will benefit greatly from tools that enable multi-instrument and multi-mission data comparison.

  11. aLeaves facilitates on-demand exploration of metazoan gene family trees on MAFFT sequence alignment server with enhanced interactivity

    PubMed Central

    Kuraku, Shigehiro; Zmasek, Christian M.; Nishimura, Osamu; Katoh, Kazutaka

    2013-01-01

    We report a new web server, aLeaves (http://aleaves.cdb.riken.jp/), for homologue collection from diverse animal genomes. In molecular comparative studies involving multiple species, orthology identification is the basis on which most subsequent biological analyses rely. It can be achieved most accurately by explicit phylogenetic inference. More and more species are subjected to large-scale sequencing, but the resultant resources are scattered in independent project-based, and multi-species, but separate, web sites. This complicates data access and is becoming a serious barrier to the comprehensiveness of molecular phylogenetic analysis. aLeaves, launched to overcome this difficulty, collects sequences similar to an input query sequence from various data sources. The collected sequences can be passed on to the MAFFT sequence alignment server (http://mafft.cbrc.jp/alignment/server/), which has been significantly improved in interactivity. This update enables switching between (i) sequence selection using the Archaeopteryx tree viewer, (ii) multiple sequence alignment and (iii) tree inference. This can be performed as a loop until one reaches a sensible data set, which minimizes redundancy for better visibility and handling in phylogenetic inference while covering relevant taxa. The work flow achieved by the seamless link between aLeaves and MAFFT provides a convenient online platform to address various questions in zoology and evolutionary biology. PMID:23677614

  12. Design and Development of a Network-Based Electronic Library.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1994-01-01

    Describes collaboration between the University of California at Berkeley and four other universities to develop interoperable servers containing each participant's Computer Science Technical Reports and to make them available over the Internet using standard protocols. The proposed library architecture, approaches to indexing and retrieval, and…

  13. The Internet Gopher: An Information Sheet.

    ERIC Educational Resources Information Center

    Electronic Networking: Research, Applications and Policy, 1992

    1992-01-01

    This fact sheet about the INTERNET Gopher, an information distribution system combining features of electronic bulletin board services and databases, describes information availability, gateways with other servers, how the system works, and how to access Gopher. Addresses and telephone numbers for additional information or news about Gopher are…

  14. An Educators' Guide to Information Access across the Internet.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1994-01-01

    A discussion of tools available for use of the Internet, particularly by college and university educators and students, offers information on use of various services, including electronic mailing list servers, data communications protocols for networking, inter-host connections, file transfer protocol, gopher software, bibliographic searching,…

  15. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of peer-to-peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve a better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free, useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize the idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.

  16. The Cellosaurus, a Cell-Line Knowledge Resource

    PubMed Central

    Bairoch, Amos

    2018-01-01

    The Cellosaurus is a knowledge resource on cell lines. It aims to describe all cell lines used in biomedical research. Its scope encompasses both vertebrates and invertebrates. Currently, information for >100,000 cell lines is provided. For each cell line, it provides a wealth of information, cross-references, and literature citations. The Cellosaurus is available on the ExPASy server (https://web.expasy.org/cellosaurus/) and can be downloaded in a variety of formats. Among its many uses, the Cellosaurus is a key resource to help researchers identify potentially contaminated/misidentified cell lines, thus contributing to improving the quality of research in the life sciences. PMID:29805321

  17. An openstack-based flexible video transcoding framework in live

    NASA Astrophysics Data System (ADS)

    Shi, Qisen; Song, Jianxin

    2017-08-01

    With the rapid development of mobile live-streaming services, transcoding HD video is often a challenge for mobile devices due to their limited processing capability and bandwidth-constrained network connections. For live service providers, it is wasteful to deploy a large number of dedicated transcoding servers, because some of them sit idle much of the time. To deal with this issue, this paper proposes an OpenStack-based flexible transcoding framework that achieves real-time video adaptation for mobile devices and uses computing resources efficiently. To this end, we introduce a method of video stream splitting and VM resource scheduling based on access pressure prediction, which is forecast by an autoregressive (AR) model.
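
    A minimal sketch of AR-based access-pressure forecasting in this spirit is shown below; the model order, the viewer history, and the scaling from predicted load to VM count are illustrative assumptions, not the paper's actual parameters.

        import numpy as np

        def fit_ar(history, p):
            # Least-squares fit of x[t] = c + a1*x[t-1] + ... + ap*x[t-p].
            X = np.column_stack([history[p - k - 1:len(history) - k - 1] for k in range(p)])
            X = np.column_stack([np.ones(len(X)), X])
            y = history[p:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        def predict_next(history, coef, p):
            # Predict the next value from the p most recent observations.
            x = np.concatenate(([1.0], history[-1:-p - 1:-1]))
            return float(x @ coef)

        viewers = np.array([120, 150, 180, 260, 390, 520, 610, 700, 820, 950], dtype=float)
        coef = fit_ar(viewers, p=3)
        predicted = predict_next(viewers, coef, p=3)
        vms_needed = int(np.ceil(predicted / 200))   # assumed capacity: one transcoding VM per ~200 viewers
        print(f"predicted viewers: {predicted:.0f}, VMs to provision: {vms_needed}")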

  18. BlueSky Cloud - rapid infrastructure capacity using Amazon's Cloud for wildfire emergency response

    NASA Astrophysics Data System (ADS)

    Haderman, M.; Larkin, N. K.; Beach, M.; Cavallaro, A. M.; Stilley, J. C.; DeWinter, J. L.; Craig, K. J.; Raffuse, S. M.

    2013-12-01

    During peak fire season in the United States, many large wildfires often burn simultaneously across the country. Smoke from these fires can produce air quality emergencies. It is vital that incident commanders, air quality agencies, and public health officials have smoke impact information at their fingertips for evaluating where fires and smoke are and where the smoke will go next. To address the need for this kind of information, the U.S. Forest Service AirFire Team created the BlueSky Framework, a modeling system that predicts concentrations of particle pollution from wildfires. During emergency response, decision makers use BlueSky predictions to make public outreach and evacuation decisions. The models used in BlueSky predictions are computationally intensive, and the peak fire season requires significantly more computer resources than off-peak times. Purchasing enough hardware to run the number of BlueSky Framework runs that are needed during fire season is expensive and leaves idle servers running the majority of the year. The AirFire Team and STI developed BlueSky Cloud to take advantage of Amazon's virtual servers hosted in the cloud. With BlueSky Cloud, as demand increases and decreases, servers can be easily spun up and spun down at a minimal cost. Moving standard BlueSky Framework runs into the Amazon Cloud made it possible for the AirFire Team to rapidly increase the number of BlueSky Framework instances that could be run simultaneously without the costs associated with purchasing and managing servers. In this presentation, we provide an overview of the features of BlueSky Cloud, describe how the system uses Amazon Cloud, and discuss the costs and benefits of moving from privately hosted servers to a cloud-based infrastructure.
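
    The spin-up/spin-down pattern described above can be illustrated with a short sketch using the AWS SDK for Python (boto3); this is not the AirFire/STI implementation, and the AMI ID, instance type, region, and per-run scaling rule are hypothetical placeholders.

        import boto3

        # Hedged illustration only: real deployments would also configure networking,
        # security groups, and IAM roles for the launched instances.
        ec2 = boto3.client("ec2", region_name="us-west-2")

        def spin_up(n_runs_queued, ami_id="ami-0123456789abcdef0"):
            # Launch one worker per queued BlueSky Framework run, up to a small cap.
            count = min(n_runs_queued, 10)
            resp = ec2.run_instances(ImageId=ami_id, InstanceType="c5.2xlarge",
                                     MinCount=count, MaxCount=count)
            return [i["InstanceId"] for i in resp["Instances"]]

        def spin_down(instance_ids):
            # Terminate workers once their runs finish so idle servers do not accrue cost.
            if instance_ids:
                ec2.terminate_instances(InstanceIds=instance_ids)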

  19. Daily Planet Imagery: GIBS MODIS Products on ArcGIS Online

    NASA Astrophysics Data System (ADS)

    Plesea, L.

    2015-12-01

    The NASA EOSDIS Global Imagery Browse Services (GIBS) is rapidly becoming an invaluable GIS resource for the science community and for the public at large. Reliable, fast access to historical as well as near real time, georeferenced images forms a solid basis on which many innovative applications and projects can be built. Esri has recognized the value of this effort and is a GIBS user and collaborator. To enable the use of GIBS services within the ArcGIS ecosystem, Esri has built a GIBS reflector server at http://modis.arcgis.com, which offers the facilities of a time-enabled Mosaic Service on top of the GIBS-provided images. Currently the MODIS reflectance products are supported by this mosaic service; the possibility of handling other GIBS products is being explored. The reflector service is deployed on the Amazon Elastic Compute Cloud platform and is freely available to end users. Because of the excellent response time from GIBS, image tiles do not have to be stored by the Esri mosaic server: all data are retrieved directly from GIBS on demand, continuously reflecting the state of GIBS and greatly simplifying the maintenance of the service. Response latency is usually under one second, making it easy to interact with the data. Remote data access is achieved using the Geospatial Data Abstraction Library (GDAL) Tiled Web Map Server (TWMS) driver. The MODIS imagery has proven to be among the most popular on the ArcGIS Online platform, where it is frequently used to provide temporal context to maps or, by itself, to tell a compelling story.
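
    A hedged sketch of pulling GIBS imagery through GDAL's Tiled WMS (TWMS) driver follows; the endpoint URL and TiledGroupName are assumptions modeled on typical GIBS usage and should be checked against the current GIBS documentation before use.

        from osgeo import gdal

        gibs_twms = """<GDAL_WMS>
          <Service name="TiledWMS">
            <ServerUrl>https://gibs.earthdata.nasa.gov/twms/epsg4326/best/twms.cgi?</ServerUrl>
            <TiledGroupName>MODIS Terra CorrectedReflectance TrueColor tileset</TiledGroupName>
          </Service>
        </GDAL_WMS>"""

        # Write the service description to disk and open it as a GDAL dataset.
        with open("gibs_modis_twms.xml", "w") as f:
            f.write(gibs_twms)

        ds = gdal.Open("gibs_modis_twms.xml")
        print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)
        # Pull a small window (roughly the continental United States) into a local GeoTIFF.
        gdal.Translate("modis_subset.tif", ds, projWin=[-125.0, 50.0, -65.0, 24.0])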

  20. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS/DODS) server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has an additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their client systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS have the option to use other available graphics viewers such as IDL, Matlab or GrADS. WMS is based on OPeNDAP and serves geospatial information; it supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another means of access to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an environment physically close to the data source. NADM benefits users who mine data or run data-reduction algorithms by reducing large volumes of data before transmission over the network to the user.
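
    The kind of server-side subsetting OPeNDAP enables can be sketched as follows: the client opens a remote dataset URL and only the sliced portion of a variable is transferred. The URL and variable name below are hypothetical placeholders, not actual GDAAC endpoints, and a netCDF build with DAP support is assumed.

        from netCDF4 import Dataset

        url = "https://example-opendap-server/opendap/hyrax/AIRS/sample_granule.hdf"
        ds = Dataset(url)                        # netCDF4 speaks the OPeNDAP protocol for remote URLs
        temperature = ds.variables["Temperature_A"]
        # Only this subset (a few levels and a regional box) crosses the network.
        subset = temperature[0:5, 100:140, 200:260]
        print(subset.shape)
        ds.close()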

  1. 12 CFR 1235.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., or stored by electronic means. E-mail means a document created or received on a computer network for... conduct of the business of a regulated entity or the Office of Finance (which business, in the case of the... is stored or located, including network servers, desktop or laptop computers and handheld computers...

  2. 12 CFR 1235.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., or stored by electronic means. E-mail means a document created or received on a computer network for... conduct of the business of a regulated entity or the Office of Finance (which business, in the case of the... is stored or located, including network servers, desktop or laptop computers and handheld computers...

  3. 5 Key Ways Your Electronic Data May Be at Risk

    ERIC Educational Resources Information Center

    Titus, Aaron

    2008-01-01

    This article describes five organizational policies and behavior that put personal information in jeopardy. These are: (1) Inadequate security for old data; (2) Shadow systems and unregulated servers; (3) Unsophisticated privacy policies; (4) Improper use of Social Security numbers; and (5) Unsanitized old hard drives. Although the academic…

  4. An Efficient Resource Management System for a Streaming Media Distribution Network

    ERIC Educational Resources Information Center

    Cahill, Adrian J.; Sreenan, Cormac J.

    2006-01-01

    This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…

  5. Thin Client Architecture for Networking CD-ROMs in a Medium-Sized Public Library System.

    ERIC Educational Resources Information Center

    Turner, Anna

    1997-01-01

    Describes how the Tulsa City-County Library System built a 22-branch CD-ROM-based network with the emerging thin-client/server technology, and succeeded in providing patrons with the most current research tools and information resources available. Discusses costs; networking options; the Citrix WinFrame system used; equipment and connectivity.…

  6. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who can only prepare and measure qubits in the computational basis, or reorder some qubits, when he has access to a quantum channel. In this work, we present a protocol in which a secret key can be established between a quantum user and an almost classical user, who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and that it is experimentally feasible with current technology. As one party of our protocol requires the fewest quantum resources, the protocol can be more practical and can significantly widen the scope of applicability of quantum key distribution. PMID:26813384

  7. Expansion and Enhancement of the Wyoming Coalbed Methane Clearinghouse Website to the Wyoming Energy Resources Information Clearinghouse.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulme, Diana; Hamerlinck, Jeffrey; Bergman, Harold

    Energy development is expanding across the United States, particularly in western states like Wyoming. Federal and state land management agencies, local governments, industry and non-governmental organizations have realized the need to access spatially-referenced data and other non-spatial information to determine the geographical extent and cumulative impacts of expanding energy development. The Wyoming Energy Resources Information Clearinghouse (WERIC) is a web-based portal which centralizes access to news, data, maps, reports and other information related to the development, management and conservation of Wyoming's diverse energy resources. WERIC was established in 2006 by the University of Wyoming's Ruckelshaus Institute of Environment and Natural Resources (ENR) and the Wyoming Geographic Information Science Center (WyGISC) with funding from the US Department of Energy (DOE) and the US Bureau of Land Management (BLM). The WERIC web portal originated in concept from a more specifically focused website, the Coalbed Methane (CBM) Clearinghouse. The CBM Clearinghouse effort focused only on coalbed methane production within the Powder River Basin of northeast Wyoming. The CBM Clearinghouse demonstrated a need to expand the effort statewide with a comprehensive energy focus, including fossil fuels and renewable and alternative energy resources produced and/or developed in Wyoming. WERIC serves spatial data to the greater Wyoming geospatial community through the Wyoming GeoLibrary, the WyGISC Data Server and the Wyoming Energy Map. These applications are critical components that support the Wyoming Energy Resources Information Clearinghouse (WERIC). The Wyoming GeoLibrary is a tool for searching and browsing a central repository for metadata. It provides the ability to publish and maintain metadata and geospatial data in a distributed environment. The WyGISC Data Server is an internet mapping application that provides traditional GIS mapping and analysis functionality via the web. It is linked into various state and federal agency spatial data servers allowing users to visualize multiple themes, such as well locations and core sage grouse areas, in one domain. Additionally, this application gives users the ability to download any of the data being displayed within the web map. The Wyoming Energy Map is the newest mapping application developed directly from this effort. With over 100 different layers accessible via this mapping application, it is the most comprehensive Wyoming energy mapping application available. This application also provides the public with the ability to create cultural and wildlife reports based on any location throughout Wyoming and at multiple scales. The WERIC website also allows users to access links to federal, state, and local natural resource agency websites and map servers; research documents about energy; and educational information, including information on upcoming energy-related conferences. The WERIC website has seen significant use by energy industry consultants, land management agencies, state and local decision-makers, non-governmental organizations and the public. Continued service to these sectors is desirable but some challenges remain in keeping the WERIC site viable. The most pressing issue is finding the human and financial resources to keep the site continually updated.
Initially, the concept included offering users the ability to maintain the site themselves; however, this has proven not to be a viable option since very few people contributed. Without user contributions, the web page relied on already committed university staff to publish and link to the appropriate documents and web-pages. An option that is currently being explored to address this issue is development of a partnership with the University of Wyoming, School of Energy Resources (SER). As part of their outreach program, SER may be able to contribute funding for a full-time position dedicated to maintenance of WERIC.

  8. The HydroShare Collaborative Repository for the Hydrology Community

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for sharing of hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting our approach to making this system easy to use and serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI). Metadata for uploaded files is harvested automatically or captured using easy-to-use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this, we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly comprises: (1) resource storage, (2) a resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be third-party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from resource types they are associated with.
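
    Programmatic access through the web services API mentioned above might look like the sketch below; the /hsapi/resource/ endpoint path, query parameter, and response fields are assumptions based on typical usage and should be verified against the current HydroShare API documentation.

        import requests

        base = "https://www.hydroshare.org/hsapi"
        resp = requests.get(f"{base}/resource/", params={"subject": "streamflow"}, timeout=30)
        resp.raise_for_status()
        # List discovered resources by identifier and title (field names assumed).
        for res in resp.json().get("results", []):
            print(res.get("resource_id"), "-", res.get("resource_title"))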

  9. Document Concurrence System

    NASA Technical Reports Server (NTRS)

    Muhsin, Mansour; Walters, Ian

    2004-01-01

    The Document Concurrence System is a combination of software modules for routing users' expressions of concurrence with documents. This system enables determination of the current status of concurrences and eliminates the need for the prior practice of manually delivering paper documents to all persons whose approvals were required. This system runs on a server, and participants gain access via personal computers equipped with Web-browser and electronic-mail software. A user can begin a concurrence routing process by logging onto an administration module, naming the approvers and stating the sequence for routing among them, and attaching documents. The server then sends a message to the first person on the list. Upon concurrence by the first person, the system sends a message to the second person, and so forth. A person on the list indicates approval, places the documents on hold, or indicates disapproval, via a Web-based module. When the last person on the list has concurred, a message is sent to the initiator, who can then finalize the process through the administration module. A background process running on the server identifies concurrence processes that are overdue and sends reminders to the appropriate persons.
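
    A minimal sketch of the sequential routing logic described above is shown below; the class, addresses, and notification function are illustrative stand-ins, not the system's actual code.

        class ConcurrenceRoute:
            def __init__(self, initiator, approvers, documents):
                self.initiator = initiator
                self.approvers = list(approvers)   # routing order chosen by the initiator
                self.documents = documents
                self.position = 0                  # index of the approver currently being asked

            def notify(self, person, message):
                print(f"email to {person}: {message}")   # stand-in for the e-mail module

            def start(self):
                self.notify(self.approvers[0], "Please review the attached documents.")

            def record_decision(self, decision):
                # decision is one of: "concur", "hold", "disapprove"
                if decision == "concur":
                    self.position += 1
                    if self.position < len(self.approvers):
                        self.notify(self.approvers[self.position], "Please review the attached documents.")
                    else:
                        self.notify(self.initiator, "All approvers have concurred; you may finalize.")
                elif decision in ("hold", "disapprove"):
                    self.notify(self.initiator, f"{self.approvers[self.position]} responded: {decision}.")

        route = ConcurrenceRoute("initiator@example.gov", ["a@example.gov", "b@example.gov"], ["plan.pdf"])
        route.start()
        route.record_decision("concur")
        route.record_decision("concur")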

  10. HARMONY: a server for the assessment of protein structures

    PubMed Central

    Pugalenthi, G.; Shameer, K.; Srinivasan, N.; Sowdhamini, R.

    2006-01-01

    Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters such as bond lengths and Ramachandran (φ,ψ) angles. Gross structure prediction methods such as the inverse folding procedure, and structure determination especially at low resolution, can sometimes give rise to models that are incorrect due to assignment of misfolds or mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility and hydrogen bonding patterns. We are now providing HARMONY through a web server such that users can submit their protein structure files and, if required, the alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local errors in protein structures. The HARMONY server is located at PMID:16844999

  11. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    PubMed Central

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time-consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute Cloud (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieved over 25-fold speedup compared to sequential execution. PMID:26958172

  12. An electronic thesaurus of Evidence Based Laboratory Medicine hematological and biochemical diagnostic tests.

    PubMed

    Dorizzi, R M; Maconi, M; Giavarina, D; Loza, G; Aman, M; Moreira, J; Bisoffi, Z; Gennuso, C

    2009-10-01

    The adoption of Evidence Based Laboratory Medicine (EBLM) has until now been hampered by the lack of effective tools. The SIMeL EBLM e-Thesaurus (an on-line repertoire of the diagnostic effectiveness of laboratory, radiology and cardiology tests) provides useful support to clinical laboratory professionals and to clinicians for the interpretation of diagnostic tests. The e-Thesaurus is an application developed using Microsoft Active Server Pages technology, deployed on the Microsoft Internet Information Server web server, and is available at the SIMeL website using a browser running JavaScript (Internet Explorer is recommended). It contains a database (in Italian, English and Spanish) of the sensitivity and specificity (including the 95% confidence interval), the positive and negative likelihood ratios, the Diagnostic Odds Ratio and the Number Needed to Diagnose of more than 2000 diagnostic tests (mostly laboratory, but also cardiology and radiology). The e-Thesaurus improves on the previous SIMeL paper and CD Thesaurus; its main features are a three-language search and continuous, easy updating.
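
    The derived measures the e-Thesaurus tabulates for each test follow directly from sensitivity and specificity; a small worked sketch is given below, with illustrative input values.

        def diagnostic_measures(sensitivity, specificity):
            lr_pos = sensitivity / (1 - specificity)              # positive likelihood ratio
            lr_neg = (1 - sensitivity) / specificity              # negative likelihood ratio
            dor = lr_pos / lr_neg                                 # Diagnostic Odds Ratio
            nnd = 1 / (sensitivity + specificity - 1)             # Number Needed to Diagnose (1/Youden's index)
            return lr_pos, lr_neg, dor, nnd

        lr_pos, lr_neg, dor, nnd = diagnostic_measures(sensitivity=0.90, specificity=0.80)
        print(f"LR+={lr_pos:.2f}  LR-={lr_neg:.2f}  DOR={dor:.1f}  NND={nnd:.2f}")
        # LR+=4.50  LR-=0.12  DOR=36.0  NND=1.43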

  13. Ensembl variation resources

    PubMed Central

    2010-01-01

    Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org. PMID:20459805
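
    Direct programmatic access to the public MySQL server mentioned above might look like the sketch below; anonymous read-only access on the default port is assumed here (per Ensembl's usual setup), the pymysql package is a third-party dependency, and database names are release-specific, so the query simply lists the available variation databases rather than hard-coding one.

        import pymysql

        conn = pymysql.connect(host="ensembldb.ensembl.org", user="anonymous", port=3306)
        with conn.cursor() as cur:
            cur.execute("SHOW DATABASES LIKE '%variation%'")
            for (name,) in cur.fetchall():
                print(name)
        conn.close()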

  14. Risk Assessment of the Naval Postgraduate School Gigabit Network

    DTIC Science & Technology

    2004-09-01

    [Extraction residue from a server inventory table in the source report. Recoverable information: server categories and counts, including Management Server (1), RAS Server (1), Remedy Server (1), Samba Servers (2), SQL Servers (3), Web Servers (3), WINS Server (1) and Library servers, plus individual hosts running Microsoft Windows 2000 Advanced Server with LANDesk and SQL 2000.]

  15. Pyglidein - A Simple HTCondor Glidein Service

    NASA Astrophysics Data System (ADS)

    Schultz, D.; Riedel, B.; Merino, G.

    2017-10-01

    A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
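
    The pull model described above can be sketched as follows: a submit script on a cluster asks the central server what is idle and sizes its glidein submission accordingly. The URL and JSON fields are illustrative, not pyglidein's actual wire format.

        import requests

        STATE_URL = "https://glidein-server.example.org/api/queue_state"   # hypothetical endpoint

        def plan_glideins(max_glideins=50):
            state = requests.get(STATE_URL, timeout=10).json()
            # Example payload: {"idle_jobs": 120, "requests": {"cpus": 1, "memory_mb": 4000, "gpus": 0}}
            idle = state.get("idle_jobs", 0)
            if idle == 0:
                return 0, None                      # no demand: submit nothing this cycle
            return min(idle, max_glideins), state.get("requests", {})

        def submit_glideins(count, resources):
            # In the real system this would write and queue a scheduler-specific submit file
            # (HTCondor, PBS, SLURM, ...) that starts an HTCondor startd phoning home to the pool.
            print(f"submitting {count} glideins requesting {resources}")

        count, resources = plan_glideins()
        if count:
            submit_glideins(count, resources)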

  16. Research Using In Vivo Simulation of Meta-Organizational Shared Decision Making (SDM). Task 3: Testing the Shared Decision Making Framework in Vivo

    DTIC Science & Technology

    2011-12-01

    developed to address the two main research questions (see Annex A). Exact wording of the questions varied during interviews to accommodate the...centre at DMS 3rd floor. All electronic files (including digital audio and video recordings) with participant data are being encrypted and password...locked filing cabinet at the University of Ottawa. Electronic files will remain encrypted, password protected and stored on a server to which only the

  17. Electronic Information Management for PfP Nations (La gestion electronique des informations pour les pays du PfP)

    DTIC Science & Technology

    2003-04-01

    such repositories containing electronic information sources that can be used for academic research. The Los Alamos Physics Archive, providing access to...Pinfield, Gardner and MacColl. 2002). The first e-print server was the Los Alamos Physics Archive, presently known as arXiv.org, which was created in 1991...by Ginsparg (Ginsparg 1996; Luce 2001; McKiernan 2000) at the Los Alamos National Laboratory, to give access to pre-prints in the domain of high

  18. search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles

    PubMed Central

    Iwema, Carrie L.; LaDue, John; Zack, Angela; Chattopadhyay, Ansuman

    2016-01-01

    The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers are gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as they are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint. PMID:27508060

  19. U.S. Level III and IV Ecoregions (U.S. EPA)

    EPA Pesticide Factsheets

    This map service displays Level III and Level IV Ecoregions of the United States and was created from ecoregion data obtained from the U.S. Environmental Protection Agency Office of Research and Development's Western Ecology Division. The original ecoregion data was projected from Albers to Web Mercator for this map service. To download shapefiles of ecoregion data (in Albers), please go to: ftp://newftp.epa.gov/EPADataCommons/ORD/Ecoregions/. IMPORTANT NOTE ABOUT LEVEL IV POLYGON LEGEND DISPLAY IN ARCMAP: Due to the limitations of Graphics Device Interface (GDI) resources per application on Windows, ArcMap does not display the legend in the Table of Contents for the ArcGIS Server service layer if the legend has more than 100 items. As of December 2011, there are 968 unique legend items in the Level IV Ecoregion Polygon legend. Follow this link (http://support.esri.com/en/knowledgebase/techarticles/detail/33741) for instructions about how to increase the maximum number of ArcGIS Server service layer legend items allowed for display in ArcMap. Note the instructions at this link provide a slightly incorrect path to Maximum Legend Count. The correct path is HKEY_CURRENT_USER > Software > ESRI > ArcMap > Server > MapServerLayer > Maximum Legend Count. When editing the Maximum Legend Count, set the Value data field to 1000. To download a PDF version of the Level IV ecoregion map and legend, go to ftp://newftp.epa.gov/EPADataCommons/ORD/Ecoregions/us/Eco_Level_IV
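
    For illustration, the registry change described above could be scripted as in the Windows-only sketch below; the value is written as a string here, matching how "Value data" is typically entered in regedit, but the existing value's type on a given machine should be checked first.

        import winreg

        key_path = r"Software\ESRI\ArcMap\Server\MapServerLayer"
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
            # Raise the legend item cap so the Level IV Ecoregion legend (968 items) can display.
            winreg.SetValueEx(key, "Maximum Legend Count", 0, winreg.REG_SZ, "1000")
        print("Maximum Legend Count set to 1000 for the current user.")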

  20. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.

  1. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    PubMed

    Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa

    2010-12-13

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.

  2. TogoDoc Server/Client System: Smart Recommendation and Efficient Management of Life Science Literature

    PubMed Central

    Takagi, Toshihisa

    2010-01-01

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the “tsunami” of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom. PMID:21179453

  3. TIPMaP: a web server to establish transcript isoform profiles from reliable microarray probes.

    PubMed

    Chitturi, Neelima; Balagannavar, Govindkumar; Chandrashekar, Darshan S; Abinaya, Sadashivam; Srini, Vasan S; Acharya, Kshitish K

    2013-12-27

    Standard 3' Affymetrix gene expression arrays have contributed a significantly higher volume of existing gene expression data than other microarray platforms. These arrays were designed to identify differentially expressed genes, but not their alternatively spliced transcript forms. No resource can currently identify the expression pattern of specific mRNA forms using these microarray data, even though it is possible to do this. We report a web server for expression profiling of alternatively spliced transcripts using microarray data sets from 31 standard 3' Affymetrix arrays for human, mouse and rat species. The tool has been experimentally validated for mRNAs transcribed or not-detected in a human disease condition (non-obstructive azoospermia, a male infertility condition). About 4000 gene expression datasets were downloaded from a public repository. 'Good probes' with complete coverage and identity to the latest reference transcript sequences were first identified. Using them, 'transcript-specific probe-clusters' were derived for each platform and used to identify the expression status of possible transcripts. The web server can lead the user to datasets corresponding to specific tissues and conditions via identifiers of the microarray studies or hybridizations, keywords, official gene symbols or reference transcript identifiers. It can identify, in the tissues and conditions of interest, about 40% of known transcripts as 'transcribed', 'not-detected' or 'differentially regulated'. Corresponding additional information for probes, genes, transcripts and proteins can be viewed too. We identified the expression of transcripts in a specific clinical condition and validated a few of these transcripts by experiments (using reverse transcription followed by polymerase chain reaction). The experimental observations agreed with the web server results more often than they contradicted them. The tool is accessible at http://resource.ibab.ac.in/TIPMaP. The newly developed online tool forms a reliable means for identification of alternatively spliced transcript-isoforms that may be differentially expressed in various tissues, cell types or physiological conditions. Thus, by making better use of existing data, TIPMaP avoids dependence on precious tissue samples in experiments aimed at establishing expression profiles of alternative splice forms--at least in some cases.

  4. 78 FR 308 - Medicare Program; Request for Information on Hospital and Vendor Readiness for Electronic Health...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-03

    ... incentive payments to eligible professionals and eligible hospitals when they adopt and meaningfully use... meaningful use. More than 120,000 eligible health care professionals and more than 3,300 hospitals have... Hospital IQR Program ( http://www.qualitynet.org/dcs/ ContentServer?cid=113811 5987129&pagename=Qnet Public...

  5. 78 FR 57648 - Notice of Issuance of Final Determination Concerning Video Teleconferencing Server

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-19

    ... the Chinese- origin Video Board and the Filter Board, impart the essential character to the video... includes the codec; a network filter electronic circuit board (``Filter Board''); a housing case; a power... (``Linux software''). The Linux software allows the Filter Board to inspect each Ethernet packet of...

  6. Adult Literacy, the Internet, and NCAL: An Introduction.

    ERIC Educational Resources Information Center

    Rethemeyer, R. Karl

    This document provides information on two services established on the Internet by the National Center on Adult Literacy (NCAL): electronic mail (e-mail) communication with NCAL and a Gopher server that makes it possible to access and download information, documents, and software relevant to adult literacy. A recent "NCAL Connections" article,…

  7. GPO Gate: University of California, San Diego's New Gateway to Electronic Government Information.

    ERIC Educational Resources Information Center

    Cruse, Patricia; Jahns, Cynthia

    1996-01-01

    Describes the development of a new interface called GPO Gate for accessing the Government Printing Office (GPO) WAIS (wide area information server) databases, GPO Access. Highlights include development and use of GPO Gate at the University of California, San Diego, and implications for public service. (Author/LRW)

  8. An Innovative Improvement of Engineering Learning System Using Computational Fluid Dynamics Concept

    ERIC Educational Resources Information Center

    Hung, T. C.; Wang, S. K.; Tai, S. W.; Hung, C. T.

    2007-01-01

    An innovative concept of an electronic learning system has been established in an attempt to achieve a technology that provides engineering students with an instructive and affordable framework for learning engineering-related courses. This system utilizes an existing Computational Fluid Dynamics (CFD) package, Active Server Pages programming,…

  9. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    PubMed Central

    Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable. PMID:24883353

  10. A novel resource management method of providing operating system as a service for mobile transparent computing.

    PubMed

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  11. An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment

    PubMed Central

    Muthurajan, Vinothkumar; Narayanasamy, Balaji

    2016-01-01

    Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584

  12. An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment.

    PubMed

    Muthurajan, Vinothkumar; Narayanasamy, Balaji

    2016-01-01

    Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation.
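
    The "blooming filter" referred to in this abstract is presumably a Bloom filter; a minimal sketch of how such a filter flags probable duplicates before an object is stored on the cloud server follows, with the bit-array size and hash count chosen purely for illustration.

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits=8192, num_hashes=4):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item: bytes):
                # Derive several independent bit positions from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
                    yield int.from_bytes(digest[:8], "big") % self.size

            def add(self, item: bytes):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def probably_contains(self, item: bytes) -> bool:
                return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

        seen = BloomFilter()
        blob = b"encrypted-object-0001"
        if not seen.probably_contains(blob):
            seen.add(blob)                         # store the object; it was not a duplicate
        print(seen.probably_contains(blob))        # True: a re-upload would be flagged as a likely duplicate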

  13. COMAN: a web server for comprehensive metatranscriptomics analysis.

    PubMed

    Ni, Yueqiong; Li, Jun; Panagiotou, Gianni

    2016-08-11

    Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to help researchers with less expertise in bioinformatics to answer microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.

  14. SeWeR: a customizable and integrated dynamic HTML interface to bioinformatics services.

    PubMed

    Basu, M K

    2001-06-01

    Sequence analysis using Web Resources (SeWeR) is an integrated, Dynamic HTML (DHTML) interface to commonly used bioinformatics services available on the World Wide Web. It is highly customizable, extendable, platform neutral, completely server-independent and can be hosted as a web page as well as being used as stand-alone software running within a web browser.

  15. An Enhanced Z39.50 Gateway to the WorldWideWeb.

    ERIC Educational Resources Information Center

    Cunningham, David; Sloan, Stephen

    1994-01-01

    Describes how a university library uses the WorldWideWeb (WWW) to enable users to access resources mounted on a local Z39.50 server and to order prints from articles stored on a CD-ROM jukebox. The software used in the construction of the system, necessary modifications to the software, and software ordering information are covered. (KRN)

  16. Kalai-Smorodinsky bargaining solution for optimal resource allocation over wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2012-01-01

    Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
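
    A common discrete approximation of the KSBS, maximizing the minimum relative gain over the disagreement point, can be sketched as below; the candidate utility pairs and disagreement point are hypothetical toy values, not the paper's cross-layer parameter grid.

        import numpy as np

        # Each row is the utility pair two nodes would obtain under one candidate allocation
        # (in the paper these would come from source rate, channel rate, and power choices).
        candidates = np.array([[0.2, 0.9], [0.5, 0.7], [0.7, 0.5], [0.9, 0.2], [0.6, 0.6]])
        d = np.array([0.1, 0.1])                # disagreement (minimum acceptable) utilities
        ideal = candidates.max(axis=0)          # utopia point: best utility each node could get alone

        # KSBS equalizes relative gains: pick the feasible point that maximizes the common
        # fraction of the way from d toward the ideal point.
        gains = (candidates - d) / (ideal - d)  # per-node relative gains
        t = gains.min(axis=1)                   # bottleneck relative gain of each candidate
        ks_point = candidates[np.argmax(t)]
        print("KSBS allocation utilities:", ks_point)   # -> [0.6 0.6], the equal-gain point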

  17. MetaStorm: A Public Resource for Customizable Metagenomics Annotation

    PubMed Central

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579

  18. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    PubMed

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  19. Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph; Kashefi, Elham

    2012-02-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.

  20. Framework for analysis of guaranteed QOS systems

    NASA Astrophysics Data System (ADS)

    Chaudhry, Shailender; Choudhary, Alok

    1997-01-01

    Multimedia data is isochronous in nature and entails managing and delivering high volumes of data. Multiprocessors, with their large processing power, vast memory, and fast interconnects, are ideal candidates for the implementation of multimedia applications. Initially, multiprocessors were designed to execute scientific programs, and their architecture was therefore optimized to provide low message latency and to support regular communication patterns efficiently. Hence, they have a regular network topology and most use wormhole routing. This design offers the benefits of a simple router, small buffer size, and network latency that is almost independent of path length. Among the various multimedia applications, a video on demand (VOD) server is well suited for implementation on parallel multiprocessors. Logical models for VOD servers are then mapped onto multiprocessors. Our paper provides a framework for calculating bounds on the utilization of system resources with which QoS parameters for each isochronous stream can be guaranteed. The effects of multiprocessor architecture, and the efficiency of various logical models and mappings on particular architectures, can be investigated within our framework. Our framework is based on rigorous proofs and provides tight bounds. The results obtained may be used as the basis for admission control tests. To illustrate the versatility of our framework, we provide bounds on utilization for various logical models applied to mesh-connected architectures for a video on demand server. Our results show that wormhole routing can lead to packets waiting for the transmission of other packets that apparently share no common resources. This situation is analogous to head-of-the-line blocking. We find that the provision of multiple VCs per link and multiple flit buffers improves utilization (even under guaranteed QoS parameters). This is analogous to parallel iterative matching.

  1. "MedTRIS" (Medical Triage and Registration Informatics System): A Web-based Client Server System for the Registration of Patients Being Treated in First Aid Posts at Public Events and Mass Gatherings.

    PubMed

    Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe

    2016-10-01

    First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients being treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information with regard to FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention, when necessary; allows more accurate prediction of resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.

  2. Interconnecting smartphone, image analysis server, and case report forms for automatic skin lesion tracking in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.

    2016-03-01

    Today, subject's medical data in controlled clinical trials is captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subject's image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.
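    As an illustration of the upload step described above (a hedged sketch only: the endpoint URL, field names, and token header are assumptions, not the published interface of the communication and image analysis server), a client-side transfer in Python might look like this:

        # Hypothetical sketch: post one lesion photograph plus the subject code read
        # from the reference card to a communication/analysis server over HTTPS.
        # URL, field names, and auth header are illustrative assumptions.
        import requests

        SERVER_URL = "https://example-trial-server.org/api/images"  # assumed endpoint

        def upload_lesion_photo(image_path: str, subject_code: str, api_token: str) -> dict:
            """Send one photograph and its card-derived subject code to the server."""
            with open(image_path, "rb") as fh:
                response = requests.post(
                    SERVER_URL,
                    headers={"Authorization": f"Bearer {api_token}"},
                    data={"subject_code": subject_code},
                    files={"image": ("lesion.jpg", fh, "image/jpeg")},
                    timeout=30,
                )
            response.raise_for_status()
            return response.json()  # e.g. server-side image identifier and status

        if __name__ == "__main__":
            print(upload_lesion_photo("lesion.jpg", "SUBJ-042", "demo-token"))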

  3. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as MySQL database, Java Web Server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with Java Applet on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for

  4. DXplain on the Internet.

    PubMed

    Barnett, G O; Famiglietti, K T; Kim, R J; Hoffer, E P; Feldman, M J

    1998-01-01

    DXplain, a computer-based medical education, reference, and decision support system, has been used by thousands of physicians and medical students on stand-alone systems and over communications networks. For the past two years, we have made DXplain available over the Internet in order to provide DXplain's knowledge and analytical capabilities as a resource to other applications within Massachusetts General Hospital (MGH) and at outside institutions. We describe the user experience with two different protocols through which users can access DXplain over the World Wide Web (WWW). The first allows the user to have direct interaction with all the functionality of DXplain, where the MGH server controls the interaction and the mode of presentation. In the second mode, the MGH server provides the DXplain functionality as a series of services, which can be called independently by the user application program.

  5. Optimal routing of IP packets to multi-homed servers

    NASA Astrophysics Data System (ADS)

    Swartz, K. L.

    1992-08-01

    Multi-homing, or direct attachment to multiple networks, offers both performance and availability benefits for important servers on busy networks. Exploiting these benefits to their fullest requires a modicum of routing knowledge in the clients. Careful policy control must also be reflected in the routing used within the network to make best use of specialized and often scarce resources. While relatively straightforward in theory, this problem becomes much more difficult to solve in a real network containing often intractable implementations from a variety of vendors. This paper presents an analysis of the problem and proposes a useful solution for a typical campus network. Application of this solution at the Stanford Linear Accelerator Center is studied and the problems and pitfalls encountered are discussed, as are the workarounds used to make the system work in the real world.

  6. Integrating Wireless Networking for Radiation Detection

    NASA Astrophysics Data System (ADS)

    Board, Jeremy; Barzilov, Alexander; Womble, Phillip; Paschal, Jon

    2006-10-01

    As wireless networking becomes more available, new applications are being developed for this technology. Our group has been studying the advantages of wireless networks of radiation detectors. With the prevalence of the IEEE 802.11 standard (``WiFi''), we have developed a wireless detector unit which is comprised of a 5 cm x 5 cm NaI(Tl) detector, amplifier and data acquisition electronics, and a WiFi transceiver. A server may communicate with the detector unit using a TCP/IP network connected to a WiFi access point. Special software on the server will perform radioactive isotope determination and estimate dose-rates. We are developing an enhanced version of the software which utilizes the receiver signal strength index (RSSI) to estimate source strengths and to create maps of radiation intensity.
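    A minimal sketch of the kind of TCP/IP link described above, in which the server accepts a connection from a WiFi detector unit and reads newline-delimited count messages; the port number and message format are assumptions for illustration only:

        # Hypothetical sketch of the server side of the detector link: accept one
        # WiFi detector unit over TCP and read newline-delimited count messages.
        import socket

        HOST, PORT = "0.0.0.0", 5050  # assumed listening address for detector units

        def serve_one_detector() -> None:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((HOST, PORT))
                srv.listen(1)
                conn, addr = srv.accept()
                with conn:
                    print(f"detector connected from {addr}")
                    buffer = b""
                    while True:
                        chunk = conn.recv(4096)
                        if not chunk:
                            break
                        buffer += chunk
                        while b"\n" in buffer:
                            line, buffer = buffer.split(b"\n", 1)
                            # e.g. "counts=1234 rssi=-61"; parsing kept trivial here
                            print("reading:", line.decode(errors="replace"))

        if __name__ == "__main__":
            serve_one_detector()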

  7. Resource for structure related information on transmembrane proteins

    NASA Astrophysics Data System (ADS)

    Tusnády, Gábor E.; Simon, István

    Transmembrane proteins are involved in a wide variety of vital biological processes including transport of water-soluble molecules, flow of information and energy production. Despite significant efforts to determine the structures of these proteins, only a few thousand solved structures are known so far. Here, we review the various resources for structure-related information on these types of proteins ranging from the 3D structure to the topology and from the up-to-date databases to the various Internet sites and servers dealing with structure prediction and structure analysis. Abbreviations: 3D, three dimensional; PDB, Protein Data Bank; TMP, transmembrane protein.

  8. Data Center Consolidation: A Step towards Infrastructure Clouds

    NASA Astrophysics Data System (ADS)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  9. System comprising interchangeable electronic controllers and corresponding methods

    NASA Technical Reports Server (NTRS)

    Steele, Glen F. (Inventor); Salazar, George A. (Inventor)

    2009-01-01

    A system comprising an interchangeable electronic controller is provided with programming that allows the controller to adapt a behavior that is dependent upon the particular type of function performed by a system or subsystem component. The system reconfigures the controller when the controller is moved from one group of subsystem components to another. A plurality of application programs are provided by a server from which the application program for a particular electronic controller is selected. The selection is based on criteria such as a subsystem component group identifier that identifies the particular type of function associated with the system or subsystem group of components.

  10. High-speed network for delivery of education-on-demand

    NASA Astrophysics Data System (ADS)

    Cordero, Carlos; Harris, Dale; Hsieh, Jeff

    1996-03-01

    A project to investigate the feasibility of delivering on-demand distance education to the desktop, known as the Asynchronous Distance Education ProjecT (ADEPT), is presently being carried out. A set of Stanford engineering classes is digitized on PC, Macintosh, and UNIX platforms, and is made available on servers. Students on campus and in industry may then access class material on these servers via local and metropolitan area networks. Students can download class video and audio, encoded in QuickTime and Show-Me TV formats, via file-transfer protocol or the World Wide Web. Alternatively, they may stream a vector-quantized version of the class directly from a server for real-time playback. Students may also download PostScript and Adobe Acrobat versions of class notes. Off-campus students may connect to ADEPT servers via the internet, the Silicon Valley Test Track (SVTT), or the Bay-Area Gigabit Network (BAGNet). The SVTT and BAGNet are high-speed metropolitan-area networks, spanning the Bay Area, which provide IP access over asynchronous transfer mode (ATM). Student interaction is encouraged through news groups, electronic mailing lists, and an ADEPT home page. Issues related to having multiple platforms and interoperability are examined in this paper. The ramifications of providing a reliable service are discussed. System performance and the parameters that affect it are then described. Finally, future work on expanding ATM access, real-time delivery of classes, and enhanced student interaction is described.

  11. 37 CFR 360.13 - Compliance with statutory dates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... shall be considered timely filed only if: (1) They are received online in the Board's server no later than 5 p.m. e.d.t. on July 31. Online claims must be filed through the Copyright Royalty Board Web site... online through the Board's Web site or the electronic mail message from the Board confirming receipt of...

  12. 37 CFR 360.4 - Compliance with statutory dates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... shall be considered timely filed only if: (1) They are received online in the Board's server no later than 5 p.m. E.D.T. on July 31. Online claims must be filed through the Copyright Royalty Board Web site... online through the Board's Web site or the electronic mail message from the Board confirming receipt of...

  13. www.teld.net: Online Courseware Engine for Teaching by Examples and Learning by Doing.

    ERIC Educational Resources Information Center

    Huang, G. Q.; Shen, B.; Mak, K. L.

    2001-01-01

    Describes TELD (Teaching by Examples and Learning by Doing), a Web-based online courseware engine for higher education. Topics include problem-based learning; project-based learning; case methods; TELD as a Web server; course materials; TELD as a search engine; and TELD as an online virtual classroom for electronic delivery of electronic…

  14. EarthExplorer

    USGS Publications Warehouse

    Houska, Treva

    2012-01-01

    The EarthExplorer trifold provides basic information for on-line access to remotely-sensed data from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center archive. The EarthExplorer (http://earthexplorer.usgs.gov/) client/server interface allows users to search and download aerial photography, satellite data, elevation data, land-cover products, and digitized maps. Minimum computer system requirements and customer service contact information also are included in the brochure.

  15. Analysis of Cloud-Based Database Systems

    DTIC Science & Technology

    2015-06-01

    EU) citizens under the Patriot Act [3]. Unforeseen virtualization bugs have caused wide-reaching outages [4], leaving customers helpless to assist... collected from SQL Server Profiler traces. We analyze the trace results captured from our test bed both before and after increasing system resources... cloud test-bed. A. DATA COLLECTION, PARSING, AND ORGANIZATION: Once we finished collecting the trace data, we knew we needed to have as close a

  16. A Laboratory for Characterizing the Efficacy of Moving Target Defense

    DTIC Science & Technology

    2016-10-25

    of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization, emulation, and mutable network... goal with the resource constraints of a small number of servers, and making virtual nodes “real enough” from the view of attackers. Unfortunately, with... we at College of William and Mary are developing a scalable, dynamic, adaptive security system that combines virtualization, emulation, and mutable

  17. HASP server: a database and structural visualization platform for comparative models of influenza A hemagglutinin proteins.

    PubMed

    Ambroggio, Xavier I; Dommer, Jennifer; Gopalan, Vivek; Dunham, Eleca J; Taubenberger, Jeffery K; Hurt, Darrell E

    2013-06-18

    Influenza A viruses possess RNA genomes that mutate frequently in response to immune pressures. The mutations in the hemagglutinin genes are particularly significant, as the hemagglutinin proteins mediate attachment and fusion to host cells, thereby influencing viral pathogenicity and species specificity. Large-scale influenza A genome sequencing efforts have been ongoing to understand past epidemics and pandemics and anticipate future outbreaks. Sequencing efforts thus far have generated nearly 9,000 distinct hemagglutinin amino acid sequences. Comparative models for all publicly available influenza A hemagglutinin protein sequences (8,769 to date) were generated using the Rosetta modeling suite. The C-alpha root mean square deviations between a randomly chosen test set of models and their crystallographic templates were less than 2 Å, suggesting that the modeling protocols yielded high-quality results. The models were compiled into an online resource, the Hemagglutinin Structure Prediction (HASP) server. The HASP server was designed as a scientific tool for researchers to visualize hemagglutinin protein sequences of interest in a three-dimensional context. With a built-in molecular viewer, hemagglutinin models can be compared side-by-side and navigated by a corresponding sequence alignment. The models and alignments can be downloaded for offline use and further analysis. The modeling protocols used in the HASP server scale well for large amounts of sequences and will keep pace with expanded sequencing efforts. The conservative approach to modeling and the intuitive search and visualization interfaces allow researchers to quickly analyze hemagglutinin sequences of interest in the context of the most highly related experimental structures, and allow them to directly compare hemagglutinin sequences to each other simultaneously in their two- and three-dimensional contexts. The models and methodology have shown utility in current research efforts and the ongoing aim of the HASP server is to continue to accelerate influenza A research and have a positive impact on global public health.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, K; Freeman, J; Zavalkovskiy, B

    Purpose: Situated 20 miles from 5 major fault lines in California’s Bay Area, Stanford University has a critical need for IT infrastructure planning to handle the high probability of devastating earthquakes. Recently, a multi-million dollar project has been underway to overhaul Stanford’s radiation oncology information systems, maximizing planning system performance and providing true disaster recovery abilities. An overview of the project will be given with particular focus on lessons learned throughout the build. Methods: In this implementation, two isolated external datacenters provide geographical redundancy to Stanford’s main campus datacenter. Real-time mirroring is made of all data stored to our serial attached network (SAN) storage. In each datacenter, hardware/software virtualization was heavily implemented to maximize server efficiency and provide a robust mechanism to seamlessly migrate users in the event of an earthquake. System performance is routinely assessed through the use of virtualized data robots, able to log in to the system at scheduled times, perform routine planning tasks and report timing results to a performance dashboard. A substantial dose calculation framework (608 CPU cores) has been constructed as part of the implementation. Results: Migration to a virtualized server environment with a high performance SAN has resulted in up to a 45% speed up of common treatment planning tasks. Switching to a 608 core DCF has resulted in a 280% speed increase in dose calculations. Server tuning was found to further improve read/write performance by 20%. Disaster recovery tests are carried out quarterly and, although successful, remain time consuming to perform and verify functionality. Conclusion: Achieving true disaster recovery capabilities is possible through server virtualization, support from skilled IT staff and leadership. Substantial performance improvements are also achievable through careful tuning of server resources and disk read/write operations. Developing a streamlined method to comprehensively test failover is a key requirement for the system’s success.

  19. New implementation of OGC Web Processing Service in Python programming language. PyWPS-4 and issues we are facing with processing of large raster data using OGC WPS

    NASA Astrophysics Data System (ADS)

    Čepický, Jáchym; Moreira de Sousa, Luís

    2016-06-01

    The OGC® Web Processing Service (WPS) Interface Standard provides rules for standardizing inputs and outputs (requests and responses) for geospatial processing services, such as polygon overlay. The standard also defines how a client can request the execution of a process, and how the output from the process is handled. It defines an interface that facilitates publishing of geospatial processes, client discovery of those processes, and binding to them within workflows. Data required by a WPS can be delivered across a network or they can be available at a server. PyWPS was one of the first implementations of OGC WPS on the server side. It is written in the Python programming language and it tries to connect to all existing tools for geospatial data analysis available on the Python platform. During the last two years, the PyWPS development team has written a new version (called PyWPS-4) completely from scratch. The analysis of large raster datasets poses several technical issues in implementing the WPS standard. The data format has to be defined and validated on the server side and binary data have to be encoded using some numeric representation. Pulling raster data from remote servers introduces security risks; in addition, running several processes in parallel has to be possible, so that system resources are used efficiently while preserving security. Here we discuss these topics and illustrate some of the solutions adopted within the PyWPS implementation.
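    As a concrete illustration of what publishing a process through PyWPS involves, the following sketch follows the style of the PyWPS-4 examples (assuming the pywps Process/LiteralInput/LiteralOutput API); it is not code from the paper:

        # Minimal WPS process declaration in the style of the PyWPS-4 examples.
        from pywps import Process, LiteralInput, LiteralOutput

        class SayHello(Process):
            def __init__(self):
                inputs = [LiteralInput("name", "Input name", data_type="string")]
                outputs = [LiteralOutput("response", "Greeting", data_type="string")]
                super(SayHello, self).__init__(
                    self._handler,
                    identifier="say_hello",
                    title="Process Say Hello",
                    inputs=inputs,
                    outputs=outputs,
                    store_supported=True,
                    status_supported=True,
                )

            def _handler(self, request, response):
                # Inputs arrive already validated against the declared data types.
                response.outputs["response"].data = "Hello " + request.inputs["name"][0].data
                return response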

  20. Electronic Derivative Classifier/Reviewing Official

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Joshua C; McDuffie, Gregory P; Light, Ken L

    2017-02-17

    The electronic Derivative Classifier/Reviewing Official (eDC/RO) is a web-based document management and routing system that reduces security risks and increases workflow efficiency. The system automates document upload, review request notification, and document status tracking for classification review on a secure server. It supports a variety of document formats (i.e., pdf, doc, docx, xls, xlsx, xlsm, ppt, pptx, vsd, vsdx and txt), and allows for the dynamic placement of classification markings such as the classification level, category and caveats on the document, in addition to a document footer and digital signature.

  1. Next Generation Transport Phenomenology Model

    NASA Technical Reports Server (NTRS)

    Strickland, Douglas J.; Knight, Harold; Evans, J. Scott

    2004-01-01

    This report describes the progress made in Quarter 3 of Contract Year 3 on the development of Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.

  2. BIM-Based E-Procurement: An Innovative Approach to Construction E-Procurement

    PubMed Central

    2015-01-01

    This paper presents an innovative approach to e-procurement in construction, which uses building information models (BIM) to support the construction procurement process. The result is an integrated and electronic instrument connected to a rich knowledge base capable of advanced operations and able to strengthen transaction relationships and collaboration throughout the supply chain. The BIM-based e-procurement prototype has been developed using distinct existing electronic solutions and an IFC server and was tested in a pilot case study, which supported further discussions of the results of the research. PMID:26090518

  3. Home media server content management

    NASA Astrophysics Data System (ADS)

    Tokmakoff, Andrew A.; van Vliet, Harry

    2001-07-01

    With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut . Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.

  4. Nomadic migration : a service environment for autonomic computing on the Grid

    NASA Astrophysics Data System (ADS)

    Lanfermann, Gerd

    2003-06-01

    In recent years, there has been a dramatic increase in available compute capacities. However, these “Grid resources” are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications locate a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This “Grid Peer Services” infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements. It transfers the application's checkpoint and binary to the new host and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.
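    A hypothetical sketch of the server-side selection step described above: the migration server receives a relocation request carrying the application's resource requirements and picks a suitable host. Field names and the selection rule are illustrative assumptions, not the Grid Peer Services implementation:

        # Illustrative sketch: choose a target host for a relocation request.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Host:
            name: str
            free_cpus: int
            free_memory_gb: float

        @dataclass
        class RelocationRequest:
            application: str
            cpus: int
            memory_gb: float
            checkpoint_path: str   # transferred to the chosen host before resuming

        def select_host(request: RelocationRequest, hosts: list[Host]) -> Optional[Host]:
            """Return a host satisfying the request, preferring the most free memory."""
            candidates = [h for h in hosts
                          if h.free_cpus >= request.cpus and h.free_memory_gb >= request.memory_gb]
            return max(candidates, key=lambda h: h.free_memory_gb, default=None)

        if __name__ == "__main__":
            req = RelocationRequest("genome-analysis", cpus=8, memory_gb=16, checkpoint_path="/tmp/ckpt")
            pool = [Host("nodeA", 4, 32.0), Host("nodeB", 16, 64.0)]
            chosen = select_host(req, pool)
            print("relocate to:", chosen.name if chosen else "no suitable host")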

  5. Hardware Assisted Stealthy Diversity (CHECKMATE)

    DTIC Science & Technology

    2013-09-01

    applicable across multiple architectures. Figure 29 shows an example of an attack against an interpreted environment with a Java executable. CHECKMATE can... [Figure 29: a Java executable attack against Java VMs on ARM, PPC, and x86 architectures] ...a user executes “/usr/bin/wget... Server 1 - Administration; Server 2 - Database (MySQL); Server 3 - Web server (Mongoose); Server 4 - File server (SSH); Server 5 - Email server

  6. An infrastructure for ontology-based information systems in biomedicine: RICORDO case study.

    PubMed

    Wimalaratne, Sarala M; Grenon, Pierre; Hoehndorf, Robert; Gkoutos, Georgios V; de Bono, Bernard

    2012-02-01

    The article presents an infrastructure for supporting the semantic interoperability of biomedical resources based on the management (storing and inference-based querying) of their ontology-based annotations. This infrastructure consists of: (i) a repository to store and query ontology-based annotations; (ii) a knowledge base server with an inference engine to support the storage of and reasoning over ontologies used in the annotation of resources; (iii) a set of applications and services allowing interaction with the integrated repository and knowledge base. The infrastructure is being prototyped and developed and evaluated by the RICORDO project in support of the knowledge management of biomedical resources, including physiology and pharmacology models and associated clinical data. The RICORDO toolkit and its source code are freely available from http://ricordo.eu/relevant-resources. sarala@ebi.ac.uk.

  7. On the Selection of Models for Runtime Prediction of System Resources

    NASA Astrophysics Data System (ADS)

    Casolari, Sara; Colajanni, Michele

    Applications and services delivered through large Internet data centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources, and dynamic migration. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need for systems to evaluate their own global behavior and change it when the evaluation indicates that they are not accomplishing what they were intended to do, or that some relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on the basis of time series coming from samples of monitored system resources, such as disk, CPU, and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, there are no existing criteria that can help us choose a suitable prediction model and its parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat the input data and whether it is preferable to choose the parameters of a prediction model in a static or dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
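    As an illustrative example of the kind of short-term runtime predictor discussed above (not taken from the chapter), single exponential smoothing applied to noisy CPU-utilization samples shows where the model-parameter choice enters:

        # One-step-ahead prediction of a utilization time series by single
        # exponential smoothing; alpha is the parameter whose static vs. dynamic
        # choice is the kind of decision the chapter discusses.
        def exponential_smoothing(samples, alpha=0.3):
            """Return one-step-ahead predictions for a sequence of utilization samples."""
            predictions = []
            estimate = samples[0]
            for value in samples:
                predictions.append(estimate)       # prediction made before seeing `value`
                estimate = alpha * value + (1 - alpha) * estimate
            return predictions

        if __name__ == "__main__":
            cpu_trace = [0.42, 0.55, 0.51, 0.70, 0.66, 0.61, 0.80, 0.75]
            for observed, predicted in zip(cpu_trace, exponential_smoothing(cpu_trace)):
                print(f"observed={observed:.2f}  predicted={predicted:.2f}")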

  8. Astronomy: On the Bleeding Edge of Scholarly Infrastructure

    NASA Astrophysics Data System (ADS)

    Borgman, Christine; Sands, A.; Wynholds, L. A.

    2013-01-01

    The infrastructure for scholarship has moved online, making data, articles, papers, journals, catalogs, and other scholarly resources nodes in a deeply interconnected network. Astronomy has led the way on several fronts, developing tools such as ADS to provide unified access to astronomical publications and reaching agreement on common data file formats such as FITS. Astronomy also was among the first fields to establish open access to substantial amounts of observational data. We report on the first three years of a long-term research project to study knowledge infrastructures in astronomy, funded by the NSF and the Alfred P. Sloan Foundation. Early findings indicate that the availability and use of networked technologies for integrating scholarly resources varies widely within astronomy. Substantial differences arise in the management of data between ground-based and space-based missions and between subfields of astronomy, for example. While large databases such as SDSS and MAST are essential resources for many researchers, much pointed, ground-based observational data exist only on local servers, with minimal curation. Some astronomy data are easily discoverable and usable, but many are not. International coordination activities such as IVOA and distributed access to high-level data product servers such as SIMBAD and NED are enabling further integration of published data. Astronomers are tackling yet more challenges in new forms of publishing data, algorithms, visualizations, and in assuring interoperability with parallel infrastructure efforts in related fields. New issues include data citation, attribution, and provenance. Substantial concerns remain for the long term discoverability, accessibility, usability, and curation of astronomy data and other scholarly resources. The presentation will outline these challenges, how they are being addressed by astronomy and related fields, and identify concerns and accomplishments expressed by the astronomers we have interviewed and observed.

  9. Quantum computing on encrypted data

    NASA Astrophysics Data System (ADS)

    Fisher, K. A. G.; Broadbent, A.; Shalm, L. K.; Yan, Z.; Lavoie, J.; Prevedel, R.; Jennewein, T.; Resch, K. J.

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
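    As background for the construction sketched above, the standard quantum one-time pad (a well-known textbook ingredient of such schemes, not a statement of this paper's full protocol) illustrates both claims: an encrypted qubit carries no information for a server lacking the key bits, and Clifford gates applied by the server require only a classical key update on the client side:

        \[
          \frac{1}{4}\sum_{a,b\in\{0,1\}} X^{a}Z^{b}\,|\psi\rangle\!\langle\psi|\,Z^{b}X^{a} \;=\; \frac{I}{2},
        \]
        \[
          C\bigl(X^{a}Z^{b}|\psi\rangle\bigr) \;=\; \bigl(C X^{a} Z^{b} C^{\dagger}\bigr)\,C|\psi\rangle \;=\; X^{a'}Z^{b'}\,C|\psi\rangle \quad \text{(up to a global phase)},
        \]

    so the client simply updates its key from (a, b) to (a', b') and can still decrypt the result of the server's gate.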

  10. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason, these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, Email, SMS, Twitter, an Instant Message Server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
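    A hypothetical sketch of the prioritized, configurable alarming idea described above: a detected error condition is routed to one or more notification channels according to its severity. The channel classes and thresholds are illustrative, not the OSG operations tooling:

        # Illustrative alarm routing: each channel has a minimum severity threshold.
        from dataclasses import dataclass

        @dataclass
        class Alarm:
            source: str       # e.g. "server health", "service availability"
            severity: int     # higher means more urgent
            message: str

        class Channel:
            def __init__(self, name: str, min_severity: int):
                self.name, self.min_severity = name, min_severity
            def send(self, alarm: Alarm) -> None:
                # A real channel would call an e-mail/SMS/IM gateway here.
                print(f"[{self.name}] {alarm.source}: {alarm.message}")

        def dispatch(alarm: Alarm, channels: list[Channel]) -> None:
            for channel in channels:
                if alarm.severity >= channel.min_severity:
                    channel.send(alarm)

        if __name__ == "__main__":
            channels = [Channel("email", 1), Channel("im", 2), Channel("sms", 3)]
            dispatch(Alarm("service availability", 3, "CE endpoint not responding"), channels)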

  11. Dynamics of list-server discussion on genetically modified foods.

    PubMed

    Triunfol, Marcia L; Hines, Pamela J

    2004-04-01

    Computer-mediated discussion lists, or list-servers, are popular tools in settings ranging from professional to personal to educational. A discussion list on genetically modified food (GMF) was created in September 2000 as part of the Forum on Genetically Modified Food developed by Science Controversies: Online Partnerships in Education (SCOPE), an educational project that uses computer resources to aid research and learning around unresolved scientific questions. The discussion list "GMF-Science" was actively supported from January 2001 to May 2002. The GMF-Science list welcomed anyone interested in discussing the controversies surrounding GMF. Here, we analyze the dynamics of the discussions and how the GMF-Science list may contribute to learning. Activity on the GMF-Science discussion list reflected some but not all the controversies that were appearing in more traditional publication formats, broached other topics not well represented in the published literature, and tended to leave undiscussed the more technical research developments.

  12. Biotechnology software in the digital age: are you winning?

    PubMed

    Scheitz, Cornelia Johanna Franziska; Peck, Lawrence J; Groban, Eli S

    2018-01-16

    There is a digital revolution taking place and biotechnology companies are slow to adapt. Many pharmaceutical, biotechnology, and industrial bio-production companies believe that software must be developed and maintained in-house and that data are more secure on internal servers than on the cloud. In fact, most companies in this space continue to employ large IT and software teams and acquire computational infrastructure in the form of in-house servers. This is due to a fear of the cloud not sufficiently protecting in-house resources and the belief that their software is valuable IP. Over the next decade, the ability to quickly adapt to changing market conditions, with agile software teams, will quickly become a compelling competitive advantage. Biotechnology companies that do not adopt the new regime may lose on key business metrics such as return on invested capital, revenue, profitability, and eventually market share.

  13. Quantum computing on encrypted data.

    PubMed

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  14. The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS, and the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  15. Uniform Data Access Using GXD

    NASA Technical Reports Server (NTRS)

    Vanderbilt, Peter

    1999-01-01

    This paper gives an overview of GXD, a framework facilitating publication and use of data from diverse data sources. GXD defines an object-oriented data model designed to represent a wide range of things including data, its metadata, resources and query results. GXD also defines a data transport language, a dialect of XML, for representing instances of the data model. This language allows for a wide range of data source implementations by supporting both the direct incorporation of data and the specification of data by various rules. The GXD software library, prototyped in Java, includes client and server runtimes. The server runtime facilitates the generation of entities containing data encoded in the GXD transport language. The GXD client runtime interprets these entities (potentially from many data sources) to create an illusion of a globally interconnected data space, one that is independent of data source location and implementation.

  16. Empowering radiologic education on the Internet: a new virtual website technology for hosting interactive educational content on the World Wide Web.

    PubMed

    Frank, M S; Dreyer, K

    2001-06-01

    We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.

  17. DiseaseConnect: a comprehensive web server for mechanism-based disease–disease connections

    PubMed Central

    Liu, Chun-Chi; Tseng, Yu-Ting; Li, Wenyuan; Wu, Chia-Yu; Mayzus, Ilya; Rzhetsky, Andrey; Sun, Fengzhu; Waterman, Michael; Chen, Jeremy J. W.; Chaudhary, Preet M.; Loscalzo, Joseph; Crandall, Edward; Zhou, Xianghong Jasmine

    2014-01-01

    The DiseaseConnect (http://disease-connect.org) is a web server for analysis and visualization of a comprehensive knowledge on mechanism-based disease connectivity. The traditional disease classification system groups diseases with similar clinical symptoms and phenotypic traits. Thus, diseases with entirely different pathologies could be grouped together, leading to a similar treatment design. Such problems could be avoided if diseases were classified based on their molecular mechanisms. Connecting diseases with similar pathological mechanisms could inspire novel strategies on the effective repositioning of existing drugs and therapies. Although there have been several studies attempting to generate disease connectivity networks, they have not yet utilized the enormous and rapidly growing public repositories of disease-related omics data and literature, two primary resources capable of providing insights into disease connections at an unprecedented level of detail. Our DiseaseConnect, the first public web server, integrates comprehensive omics and literature data, including a large amount of gene expression data, Genome-Wide Association Studies catalog, and text-mined knowledge, to discover disease–disease connectivity via common molecular mechanisms. Moreover, the clinical comorbidity data and a comprehensive compilation of known drug–disease relationships are additionally utilized for advancing the understanding of the disease landscape and for facilitating the mechanism-based development of new drug treatments. PMID:24895436

  18. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. High-standard hardware and software are demanded, which can only be provided in large hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first built into the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and web services. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was thus successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  19. Dynamic mapping of EDDL device descriptions to OPC UA

    NASA Astrophysics Data System (ADS)

    Atta Nsiah, Kofi; Schappacher, Manuel; Sikora, Axel

    2017-07-01

    OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices such as sensors and actuators in an OPC UA server, allowing connected OPC UA clients to access device-specific information via a standardized information model. For the OPC UA server to represent field device data using its information model, it must have advance knowledge of the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe an approach to dynamically map and integrate EDDL-based field device descriptions into OPC UA.
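    A hypothetical sketch of the mapping target described above, using the community python-opcua package: a variable taken from an (assumed) EDDL device description is exposed in an OPC UA server's address space. The device and variable names are illustrative:

        # Expose one device variable in an OPC UA address space (python-opcua).
        import time
        from opcua import Server

        def run_server():
            server = Server()
            server.set_endpoint("opc.tcp://0.0.0.0:4840/eddl-demo/server/")
            idx = server.register_namespace("http://example.org/eddl-devices")  # assumed namespace URI

            objects = server.get_objects_node()
            device = objects.add_object(idx, "PressureTransmitter")   # name taken from the EDDL description
            primary_value = device.add_variable(idx, "PrimaryValue", 0.0)
            primary_value.set_writable()

            server.start()
            try:
                while True:
                    # In a real gateway this value would be read from the field device.
                    primary_value.set_value(1.013)
                    time.sleep(1.0)
            finally:
                server.stop()

        if __name__ == "__main__":
            run_server()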

  20. Solid waste information and tracking system client-server conversion project management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, D.L.

    1998-04-15

    This Project Management Plan is the lead planning document governing the proposed conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. This plan presents the content specified by American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards for software development, with additional information categories deemed to be necessary to describe the conversion fully. This plan is a living document that will be reviewed on a periodic basis and revised when necessary to reflect changes in baseline design concepts and schedules. This PMP describes the background, planning and management of the SWITS conversion. It does not constitute a statement of product requirements. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.

  1. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    NASA Astrophysics Data System (ADS)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

    The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program, gathering water quality data for local waterways. The data had been compiled manually using pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, this project pursued the development of an electronic method of entering and manipulating the water quality data. The project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project then researched applying this data to the Google Maps site to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
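
    The project's entry form was written in PHP; the equivalent insert step is sketched here in Python with mysql-connector-python, under the assumption of a hypothetical table layout and placeholder credentials.

        # Sketch of the insert step behind a water-quality entry form. The project
        # used PHP; this equivalent is Python, and table/column names are invented.
        import mysql.connector

        conn = mysql.connector.connect(
            host="db.example.edu", user="reu_user", password="secret",  # placeholders
            database="water_quality",
        )
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO samples (site, sample_date, temperature_c, ph, dissolved_o2)"
            " VALUES (%s, %s, %s, %s, %s)",
            ("Pasquotank River", "2015-07-14", 27.3, 7.8, 6.4),
        )
        conn.commit()
        cur.close()
        conn.close()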

  2. Web-based DAQ systems: connecting the user and electronics front-ends

    NASA Astrophysics Data System (ADS)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
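
    As a hedged sketch of the WebSocket pattern the paper relies on, the following Python server pushes monitoring values to any connected browser once per second. It uses the third-party websockets package and invented register names; it is not the actual firmware interface of the described system.

        # Sketch: pushing DAQ monitoring data to web clients over WebSockets
        # (websockets >= 10; older releases pass an extra "path" argument to the
        # handler). The monitored values are invented placeholders.
        import asyncio
        import json
        import random

        import websockets

        async def monitor(websocket):
            # Stream one status frame per second until the browser disconnects.
            while True:
                frame = {"trigger_rate_hz": random.randint(900, 1100),
                         "fifo_occupancy": random.random()}
                await websocket.send(json.dumps(frame))
                await asyncio.sleep(1.0)

        async def main():
            async with websockets.serve(monitor, "0.0.0.0", 8765):
                await asyncio.Future()  # run forever

        if __name__ == "__main__":
            asyncio.run(main())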

  3. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks or server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not detected at the beginning of the login phase. Third, the password change mechanism is inefficient, requiring unnecessary communication with the server to change a user's password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency.
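
    The second weakness above (no early password check) is easiest to see with a generic example: a smart card or client can store a salted verifier and reject a mistyped password before any message is sent to the server. The sketch below is a plain PBKDF2 illustration of that idea, not the scheme criticized or proposed in the paper.

        # Generic illustration of verifying a password locally before starting the
        # login protocol. This is not the paper's authentication scheme.
        import hashlib
        import hmac
        import os

        def make_verifier(password: str) -> tuple[bytes, bytes]:
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest  # stored at registration time

        def check_password(password: str, salt: bytes, digest: bytes) -> bool:
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)

        salt, digest = make_verifier("correct horse battery staple")
        assert check_password("correct horse battery staple", salt, digest)
        assert not check_password("wrong password", salt, digest)  # rejected locally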

  4. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System

    PubMed Central

    Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks or server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not detected at the beginning of the login phase. Third, the password change mechanism is inefficient, requiring unnecessary communication with the server to change a user's password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency. PMID:28046075

  5. Network characteristics for server selection in online games

    NASA Astrophysics Data System (ADS)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well-understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or racing games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
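
    A simplified version of the latency-and-fairness criterion discussed above can be sketched as follows: for each candidate server, take each group member's measured latency and prefer the server whose worst member latency is lowest. The probe below uses TCP connect time as a rough stand-in for the game-specific query protocols used in the paper, and the hosts and numbers are illustrative.

        # Sketch: pick a game server for a group by minimizing the worst latency
        # any member sees. TCP connect time is a rough proxy for game queries.
        import socket
        import time

        def connect_latency_ms(host: str, port: int, samples: int = 3) -> float:
            best = float("inf")
            for _ in range(samples):
                start = time.perf_counter()
                try:
                    with socket.create_connection((host, port), timeout=2):
                        pass
                except OSError:
                    continue  # unreachable this round
                best = min(best, (time.perf_counter() - start) * 1000.0)
            return best

        def pick_server(latency_by_server: dict[str, list[float]]) -> str:
            # latency_by_server: server -> latencies measured by each group member
            return min(latency_by_server, key=lambda s: max(latency_by_server[s]))

        # Example with pre-measured (illustrative) group latencies in milliseconds:
        group_latencies = {"server-a": [35, 60, 180], "server-b": [70, 80, 95]}
        print(pick_server(group_latencies))  # "server-b": fairest worst case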

  6. The NIF DISCO Framework: facilitating automated integration of neuroscience content on the web.

    PubMed

    Marenco, Luis; Wang, Rixin; Shepherd, Gordon M; Miller, Perry L

    2010-06-01

    This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are "harvested" on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource's content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) "LinkOut" to a resource's data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource's lexicon and ontology, 5) sharing a resource's database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research.

  7. A streaming-based solution for remote visualization of 3D graphics on mobile devices.

    PubMed

    Lamberti, Fabrizio; Sanna, Andrea

    2007-01-01

    Mobile devices such as Personal Digital Assistants, Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications are now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, Personal Digital Assistants (PDAs), and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients computing a video stream for each one; resolution and quality of each stream is tailored according to screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.

  8. The BioExtract Server: a web-based bioinformatic workflow platform

    PubMed Central

    Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.

    2011-01-01

    The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552

  9. Tertiary structure prediction and identification of druggable pocket in the cancer biomarker – Osteopontin-c

    PubMed Central

    2014-01-01

    Background Osteopontin (Eta, secreted sialoprotein 1, opn) is secreted from different cell types including cancer cells. Three splice variant forms, namely osteopontin-a, osteopontin-b and osteopontin-c, have been identified. The most striking feature is that osteopontin-c is found to be elevated in almost all types of cancer cells. This motivated its selection for sequence analysis and structure prediction, which provide ample opportunities for prognostic, therapeutic and preventive cancer research. Methods The osteopontin-c gene sequence was determined from a breast cancer sample and was translated to a protein sequence. It was then analyzed using various software packages and web tools for binding pockets, docking and druggability analysis. Due to the lack of homologous templates, the tertiary structure was predicted using the ab initio method server I-TASSER and was evaluated after refinement using web tools. The refined structure was compared with the known bone sialoprotein electron microscopic structure and docked with CD44 for binding analysis, and binding pockets were identified for drug design. Results A signal sequence of about sixteen amino acid residues was identified using signal sequence prediction servers. Due to the absence of known structures of similar proteins, the three-dimensional structure of osteopontin-c was predicted using the I-TASSER server. The predicted structure was refined with the help of the SUMMA server and was validated using the SAVES server. Molecular dynamics analysis was carried out using the GROMACS software. The final model was built and used for docking with CD44. Druggable pockets were identified using pocket energies. Conclusions The tertiary structure of osteopontin-c was predicted successfully using the ab initio method, and the predictions showed that osteopontin-c is of a fibrous nature comparable to fibronectin. Docking studies showed significant similarities of the QSAET motif in the interaction of CD44 and osteopontins between the normal and splice variant forms of osteopontins, and binding pocket analyses revealed several pockets which paved the way to the identification of a druggable pocket. PMID:24401206

  10. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  11. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  12. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  13. Enhancing Access to Drought Information Using the CUAHSI Hydrologic Information System

    NASA Astrophysics Data System (ADS)

    Schreuders, K. A.; Tarboton, D. G.; Horsburgh, J. S.; Sen Gupta, A.; Reeder, S.

    2011-12-01

    The National Drought Information System (NIDIS) Upper Colorado River Basin pilot study is investigating and establishing capabilities for better dissemination of drought information for early warning and management. As part of this study we are using and extending functionality from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) to provide better access to drought-related data in the Upper Colorado River Basin. The CUAHSI HIS is a federated system for sharing hydrologic data. It is composed of multiple data servers, referred to as HydroServers, that publish data in a standard XML format called Water Markup Language (WaterML), using web services referred to as WaterOneFlow web services. HydroServers can also publish geospatial data using Open Geospatial Consortium (OGC) web map, feature and coverage services and are capable of hosting web and map applications that combine geospatial datasets with observational data served via web services. HIS also includes a centralized metadata catalog that indexes data from registered HydroServers and a data access client referred to as HydroDesktop. For NIDIS, we have established a HydroServer to publish drought index values as well as the input data used in drought index calculations. Primary input data required for drought index calculation include streamflow, precipitation, reservoir storages, snow water equivalent, and soil moisture. We have developed procedures to redistribute the input data to the time and space scales chosen for drought index calculation, namely half-monthly time intervals for HUC 10 subwatersheds. The spatial redistribution approaches used for each input parameter are dependent on the spatial linkages for that parameter, i.e., the redistribution procedure for streamflow is dependent on the upstream/downstream connectivity of the stream network, and the precipitation redistribution procedure is dependent on elevation to account for orographic effects. A set of drought indices are then calculated from the redistributed data. We have created automated data and metadata harvesters that periodically scan and harvest new data from each of the input databases, and calculate extensions to the resulting derived data sets, ensuring that the data available on the drought server is kept up to date. This paper will describe this system, showing how it facilitates the integration of data from multiple sources to inform the planning and management of water resources during drought. The system may be accessed at http://drought.usu.edu.

  14. Development of Internet algorithms and some calculations of power plant COP

    NASA Astrophysics Data System (ADS)

    Ustjuzhanin, E. E.; Ochkov, V. F.; Znamensky, V. E.

    2017-11-01

    The authors have analyzed Internet resources containing information on the thermodynamic properties of technically important substances (water, air, etc.). We consider databases that hold such resources and are hosted by organizations including the Joint Institute for High Temperatures (Russian Academy of Sciences), Standartinform (Russia), the National Institute of Standards and Technology (USA), and the Institute for Thermal Physics (Siberian Branch of the Russian Academy of Sciences). Currently, a typical Internet resource of this kind includes a text file, for example a file containing tabulated properties R = (ρ, s, h, …), where ρ is the density, s the entropy and h the enthalpy of a substance. Only a small number of Internet resources allow a customer to perform a number of operations, for example: i) enter the input data Y = (p, T), where p is the pressure and T the temperature, ii) calculate the property R using an executable program, and iii) copy the result X = (p, T, ρ, h, s, …). Recently, some researchers (including the authors of this report) have requested software (SW) designed for calculating R properties in the form of an open interactive (OI) Internet resource ("a client function", "template"). The computing part of an OI resource is linked: 1) with a formula that is applied to calculate the property R, and 2) with a Mathcad program, Code_1(R, Y). The interactive part of an OI resource is based on informatics and Internet technologies. We have proposed methods and tools related to this part that allow us: a) to post the OI resource on a remote server, b) to link a client PC with the remote server, and c) to offer a number of options to clients. Among these options are: i) calculating the property R for given arguments Y, ii) copying the mathematical formulas, and iii) copying Code_1(R, Y) as a whole. We have developed OI resources focused on sharing: a) SW that is used to design power plants, for example Code - GTP_1(Z, R, Y), and b) client functions aimed at determining the properties R of the working fluid at fixed points of the thermodynamic cycle. The program lets us calculate energy criteria Z, including the internal coefficient of performance (COP), for a power plant. We discuss OI resources, among them an OI resource that includes Code - GTP_1(Z, R, Y) and is connected with a complex power plant comprising several gas turbines, several compressors, etc.
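
    A minimal sketch of such a "client function" is shown below: it sends the arguments Y = (p, T) to a remote property server over HTTP and receives R = (ρ, s, h). The URL and JSON field names are invented for illustration; the actual OI resources described above are Mathcad-based.

        # Sketch of a "client function": send Y = (p, T) to a remote property
        # server and receive R = (rho, s, h). Endpoint and field names are
        # hypothetical; the OI resources in the paper are Mathcad-based.
        import requests

        PROPERTY_URL = "https://thermo-server.example.org/water/properties"  # placeholder

        def water_properties(p_mpa: float, t_kelvin: float) -> dict:
            reply = requests.get(PROPERTY_URL, params={"p": p_mpa, "T": t_kelvin}, timeout=30)
            reply.raise_for_status()
            return reply.json()  # e.g. {"rho": ..., "s": ..., "h": ...}

        if __name__ == "__main__":
            R = water_properties(p_mpa=10.0, t_kelvin=573.15)
            print("rho =", R["rho"], "s =", R["s"], "h =", R["h"])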

  15. Survey Software Evaluation

    DTIC Science & Technology

    2009-01-01

    Fragments of a feature-comparison table of Web-based survey packages: databases supported (Oracle 9i, 10g; MySQL; MS SQL Server), operating systems supported (Windows 2003 Server; Windows 2000 Server, 32 bit), web servers supported (WebStar on Mac OS X; SunOne; Internet Information Services (IIS)), and database servers supported (MS SQL Server; Oracle 9i, 10g). ... challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular

  16. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers increases rapidly along with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, including voluntary label programs or mandatory energy performance standards, have been adopted or are being prepared in the US, EU and China. However, the energy performance of servers and the testing methods for servers are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various server components on energy performance are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the energy performance testing methods. The findings of the tests are discussed in the paper.

  17. Mergeomics: a web server for identifying pathological pathways, networks, and key regulators via multidimensional data integration.

    PubMed

    Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia

    2016-09-09

    Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpour of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent needs for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed real-time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.

  18. CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients

    PubMed Central

    Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H

    1999-01-01

    Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only few working EPR-systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields with patients who are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on "off-the-shelf" software and Internet technology in the field of organ transplantation. Methods The EPR-system consists of three main elements: Data-storage facilities, a Web-server and a user-interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e.g. Word documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web-server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC-interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR-system was tested with 50 repeated measurements of the duration of two key-functions: 1) display of all lab results for a given day and patient and 2) automatic composition of a letter containing diagnoses, medication, notes and lab results. For the test a 233 MHz Pentium II Processor with 10 Mbit/s Ethernet connection (ping-time below 10 ms) over 2 hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR-system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instruction to enable new users to perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds, the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR-system required less than two person-years. Conclusion Implementation of an Electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness and affordability using a combination of "off-the-shelf" software products can be feasible, if the current state-of-the-art Internet technology is applied.

  19. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
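
    The modelling approach described above - a demand distribution per request type plus a constrained pool of computing resources - can be miniaturized in a few lines of discrete event simulation. The sketch below uses the SimPy package with invented arrival and service parameters; it illustrates the modelling style rather than reproducing the authors' model.

        # Miniature discrete-event model of service requests contending for a
        # fixed pool of servers (SimPy). Parameters are invented placeholders.
        import random
        import simpy

        ARRIVAL_RATE = 0.8      # requests per second (hypothetical)
        MEAN_SERVICE = 1.0      # seconds per request (hypothetical)
        waits = []

        def request(env, pool):
            arrived = env.now
            with pool.request() as slot:
                yield slot                      # wait for a free server slot
                waits.append(env.now - arrived)
                yield env.timeout(random.expovariate(1.0 / MEAN_SERVICE))

        def workload(env, pool):
            while True:
                yield env.timeout(random.expovariate(ARRIVAL_RATE))
                env.process(request(env, pool))

        env = simpy.Environment()
        pool = simpy.Resource(env, capacity=2)  # two virtual servers
        env.process(workload(env, pool))
        env.run(until=10_000)
        print(f"mean queueing delay: {sum(waits) / len(waits):.2f} s")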

  20. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

    The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the Civetweb embeddable HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented with HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor the objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
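
    A minimal PyROOT example of the class described above might look like the following, assuming a ROOT build with HTTP (Civetweb) support; the served histogram is just a placeholder object.

        # Minimal PyROOT sketch of THttpServer: register an object and let any web
        # browser monitor it at http://localhost:8080. Assumes ROOT was built with
        # HTTP support; the histogram is a placeholder.
        import time
        import ROOT

        server = ROOT.THttpServer("http:8080")   # embedded Civetweb engine
        hist = ROOT.TH1F("demo", "Gaussian demo", 100, -4, 4)
        server.Register("/monitoring", hist)     # object appears in the browser

        while True:                              # keep filling so clients see updates
            hist.Fill(ROOT.gRandom.Gaus(0, 1))
            ROOT.gSystem.ProcessEvents()         # let the server handle requests
            time.sleep(0.01)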

  1. Magnetic Thin Films for Perpendicular Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya

    In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDD), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDD extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistants.

  2. Characteristics and Energy Use of Volume Servers in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchs, H.; Shehabi, A.; Ganeshalingam, M.

    Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.

  3. Resource Isolation Method for Program's Performance on CMP

    NASA Astrophysics Data System (ADS)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, bringing benefits to both customers and providers. However, in data centers or clusters there is commonly more than one program running on a server, and programs may interfere with each other. The interference may have only a small effect, but it may also cause a serious drop in performance. To avoid this performance interference problem, isolating resources for different programs is a better choice. In this paper we propose a low-cost resource isolation method to improve a program's performance. The method uses cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and transfers this information to the Resource Assignment Engine; the Resource Assignment Engine calculates how much CPU and memory should be assigned to the program; and the Cgroups Control Engine partitions resources with the Linux cgroups facility and places the program in a control group for execution. The experimental results show that the proposed resource isolation method improves program performance.
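
    A stripped-down version of what such a Cgroups Control Engine does can be sketched directly against the cgroup filesystem. The example below assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup, root privileges, and an already-running target PID; group names, limits, and the PID are placeholders.

        # Stripped-down cgroup v1 sketch of the "Cgroups Control Engine" step:
        # create cpu and memory groups, write limits, move a running PID into them.
        # cgroup v2 uses different file names (cpu.max, memory.max, cgroup.procs).
        import os

        CGROOT = "/sys/fs/cgroup"
        GROUP = "isolated_app"        # hypothetical group name
        TARGET_PID = 12345            # placeholder for the monitored program's PID

        def write(path: str, value) -> None:
            with open(path, "w") as f:
                f.write(str(value))

        for controller in ("cpu", "memory"):
            os.makedirs(f"{CGROOT}/{controller}/{GROUP}", exist_ok=True)

        # Allow 2 CPUs' worth of time: quota / period = 200000 / 100000.
        write(f"{CGROOT}/cpu/{GROUP}/cpu.cfs_period_us", 100_000)
        write(f"{CGROOT}/cpu/{GROUP}/cpu.cfs_quota_us", 200_000)
        # Cap memory at 4 GiB.
        write(f"{CGROOT}/memory/{GROUP}/memory.limit_in_bytes", 4 * 1024 ** 3)

        # Attach the target process to both controllers.
        write(f"{CGROOT}/cpu/{GROUP}/tasks", TARGET_PID)
        write(f"{CGROOT}/memory/{GROUP}/tasks", TARGET_PID)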

  4. Mobile Assisted Security in Wireless Sensor Networks

    DTIC Science & Technology

    2015-08-03

    ...server from Google's DNS, the Chromecast and the content server perform the 3-way TCP handshake, which is followed by the Client Hello and Server Hello TLS messages ... utilized TLS v1.2, except the NTP servers and Google's DNS server. In TLS v1.2, after the handshake, the client and server send the Client Hello and Server Hello ... messages in order. In the Client Hello message, the client offers a list of Cipher Suites that it supports. Each Cipher Suite defines the key exchange algorithm

  5. Resource Public Key Infrastructure Extension

    DTIC Science & Technology

    2012-01-01

    tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OpenSSL ... rsync, OpenSSL, Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server ... NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P. Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and

  6. The Next Information Revolution in Astronomy

    NASA Astrophysics Data System (ADS)

    Kennicutt, R. C.

    2006-08-01

    The information revolution has truly revolutionized our profession, through such innovations as the astronomical data centres, electronic journals and preprint servers, and bibliographic interfaces that link these resources through instantaneous and freely available web interfaces. For most of us the effects of these innovations have been profound, changing forever the way we access the research literature, disseminate results to our colleagues, and even in the ways we carry out our research and write papers. Astronomy's efforts in this area have attracted the attention and admiration of other scientific professions as well as the information technology community. We now stand at the threshold of a second revolution, in which enormous and rich collections of astronomical observations, models, software, and tools will be accessible through a common Virtual Observatory interface. The next logical step beyond that is an integration of these VO resources with the web of astronomical literature, to provide mechanisms for quality certification of those resources, and to provide a seamless mechanism by which authors can make the results of their research available to other scientists in their most useful form. If this is done successfully its impacts on the way we conduct and disseminate our research may be as profound as those of the past decade. However this success will require cooperative approaches to information archiving and publication involving the data centres, journals, and library communities, and which incorporate or at least emulate the features of curation, provenance, quality assurance, and intellectual property protections that underlie the traditional publishing system. This talk will highlight some of the efforts being made in the VO and journal communities to make this vision a reality, and identify some of the key challenges that remain.

  7. [Development of a System to Use Patient's Information Which is Required at the Radiological Department].

    PubMed

    Satoh, Akihiro

    2016-04-01

    The purpose of this study is to develop a new system to obtain and share the patient data required for a radiological examination without using an electronic medical chart or a radiological information system (RIS), and also to demonstrate that this system can be operated on cloud technology. I used Java Enterprise Edition (Java EE) as the programming platform and MySQL as the database server software, and two laptops served as the client and server hardware. For cloud computing, I used Google App Engine for Java (GAE). As a result, I could instantly obtain the patient data required for an examination using this system. The system also helps to improve the efficiency of examinations; for example, it has been useful for deciding radiographic conditions or for creating CT images such as multi-planar reconstruction (MPR) or volume rendering (VR). As for cloud computing, GAE was used experimentally due to some legal restrictions. From the above points it is clear that this system can play an important role in radiological examinations, but a few issues remain to be resolved for cloud computing.

  8. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component that provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open source DCME4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.

  9. Federated Tensor Factorization for Computational Phenotyping

    PubMed Central

    Kim, Yejin; Sun, Jimeng; Yu, Hwanjo; Jiang, Xiaoqian

    2017-01-01

    Tensor factorization models offer an effective approach to convert massive electronic health records into meaningful clinical concepts (phenotypes) for data analysis. These models need a large number of diverse samples to avoid population bias. An open challenge is how to derive phenotypes jointly across multiple hospitals, in which direct patient-level data sharing is not possible (e.g., due to institutional policies). In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data. We developed secure data harmonization and federated computation procedures based on the alternating direction method of multipliers (ADMM). Using this method, the multiple hospitals iteratively update tensors and transfer secure summarized information to a central server, and the server aggregates the information to generate phenotypes. We demonstrated with real medical datasets that our method resembles the centralized training model (based on combined datasets) in terms of accuracy and phenotype discovery while respecting privacy. PMID:29071165
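
    The aggregation pattern described above - sites send only summarized factor updates, never patient-level records, and the server combines them - can be illustrated very roughly with NumPy. This is a plain federated-averaging toy with invented dimensions and update rules, not the ADMM-based harmonization used in the paper.

        # Toy illustration of the aggregation pattern: each hospital sends only a
        # summarized factor matrix (never patient-level data) and the server
        # averages them. Not the paper's ADMM procedure.
        import numpy as np

        N_PHENOTYPES, N_FEATURES = 5, 20
        rng = np.random.default_rng(0)

        def local_update(global_factor: np.ndarray) -> np.ndarray:
            # Stand-in for a hospital's local factorization step on its own records.
            return np.clip(global_factor + 0.1 * rng.standard_normal(global_factor.shape), 0, None)

        def server_aggregate(updates: list) -> np.ndarray:
            return np.mean(updates, axis=0)  # only summaries cross institutional boundaries

        phenotype_factor = rng.random((N_PHENOTYPES, N_FEATURES))
        for round_ in range(10):
            local_factors = [local_update(phenotype_factor) for _ in range(3)]  # 3 hospitals
            phenotype_factor = server_aggregate(local_factors)

        print("final factor matrix shape:", phenotype_factor.shape)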

  10. Omokage search: shape similarity search service for biomolecular structures in both the PDB and EMDB.

    PubMed

    Suzuki, Hirofumi; Kawabata, Takeshi; Nakamura, Haruki

    2016-02-15

    Omokage search is a service to search the global shape similarity of biological macromolecules and their assemblies, in both the Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB). The server compares global shapes of assemblies independent of sequence order and number of subunits. As a search query, the user inputs a structure ID (PDB ID or EMDB ID) or uploads an atomic model or 3D density map to the server. The search is performed usually within 1 min, using one-dimensional profiles (incremental distance rank profiles) to characterize the shapes. Using the gmfit (Gaussian mixture model fitting) program, the found structures are fitted onto the query structure and their superimposed structures are displayed on the Web browser. Our service provides new structural perspectives to life science researchers. Omokage search is freely accessible at http://pdbj.org/omokage/. © The Author 2015. Published by Oxford University Press.

  11. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: A 'Back-End' data server which reads information from various types of data sources and packages the results in DAP objects; and A 'Front-End' which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: The Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs, new Front-End software can be written to support other network data access protocols and local applications can interact directly with the Back-End data server. This new server's Back-End component will use the server infrastructure developed by HAO for use with the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run-time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that can interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New front-end modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that's currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of the advanced THREDDS features such as attribute specification and inheritance, custom catalogs which segue into automatically generated catalogs, as well as providing a default behavior which requires almost no catalog configuration.

  12. An extensible and lightweight architecture for adaptive server applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2008-07-10

    Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.

  13. Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture

    NASA Astrophysics Data System (ADS)

    Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-08-01

    The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).

  14. [From planning to realization of an electronic patient record].

    PubMed

    Krämer, T; Rapp, R; Krämer, K L

    1999-03-01

    The highly complex requirements on information and information flow in today's hospitals can only be met by the use of modern Information Systems (IS). To achieve this, the Stiftung Orthopädische Universitätsklinik first carried out the project "Strategic Information System Planning" in 1993, then realized the necessary infrastructure (network; client-server) from 1993 to 1997, and finally started the introduction of modern IS (SAP R/3 and IXOS-Archive) in the clinical area. One of the approved goals was the replacement of the paper medical record with an up-to-date electronic medical record. In this article the following three topics are discussed: the difference between the up-to-date electronic medical record and the electronically archived finished cases, the steps performed by our clinic to realize the up-to-date electronic medical record, and the problems that occurred during this process.

  15. From planning to realisation of an electronic patient record.

    PubMed

    Krämer, T; Rapp, R; Krämer, K-L

    1999-03-01

    The highly complex requirements on information and information flow in today's hospitals can only be met by the use of modern Information Systems (IS). To achieve this, the Stiftung Orthopädische Universitätsklinik first carried out the project "Strategic Information System Planning" in 1993, then realized the necessary infrastructure (network; client-server) from 1993 to 1997, and finally started the introduction of modern IS (SAP R/3 and IXOS-Archive) in the clinical area. One of the approved goals was the replacement of the paper medical record with an up-to-date electronic medical record. In this article the following three topics are discussed: the difference between the up-to-date electronic medical record and the electronically archived finished cases, the steps performed by our clinic to realize the up-to-date electronic medical record, and the problems that occurred during this process.

  16. FELIX: a PCIe based high-throughput approach for interfacing front-end and trigger electronics in the ATLAS Upgrade framework

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Bauer, K.; Borga, A.; Boterenbrood, H.; Chen, H.; Chen, K.; Drake, G.; Dönszelmann, M.; Francis, D.; Guest, D.; Gorini, B.; Joos, M.; Lanni, F.; Lehmann Miotto, G.; Levinson, L.; Narevicius, J.; Panduro Vazquez, W.; Roich, A.; Ryu, S.; Schreuder, F.; Schumacher, J.; Vandelli, W.; Vermeulen, J.; Whiteson, D.; Wu, W.; Zhang, J.

    2016-12-01

    The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. The FELIX system, the design of the PCIe prototype card and the integration test results are presented in this paper.

  17. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    NASA Astrophysics Data System (ADS)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have progressed rapidly, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client; the user can then select some or all of the views according to display capabilities. However, this kind of system requires high processing power at the server as well as the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is an important aspect. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, the server sends the associated view sequences. We also present a method that can reduce the visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To address the former, IVR (intermediate view reconstruction) is employed for smooth transition between two stereoscopic view sequences; a disparity adjustment scheme is used for the latter. Finally, the implementation of a testbed and the associated experiments demonstrate the value and feasibility of our system.

  18. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
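
    The key parallelization idea in the abstract is to split the list-mode events across many threads and accumulate partial images. The authors used pthreads and OpenMP in C++; purely as an illustration of the same work decomposition under simplified assumptions (the "back-projection" below is a placeholder, and the image size is made up), a Python multiprocessing sketch:

        import numpy as np
        from multiprocessing import Pool

        N_VOXELS = 128 * 128 * 47  # hypothetical image size

        def backproject_chunk(events):
            """Back-project one chunk of list-mode events into a partial image.

            'events' holds one voxel index per event; the real reconstruction traces
            each ray through the 3-D array and applies motion correction, which is
            far more expensive than this placeholder accumulation.
            """
            partial = np.zeros(N_VOXELS)
            np.add.at(partial, events, 1.0)
            return partial

        def parallel_backprojection(all_events, n_workers=8):
            chunks = np.array_split(all_events, n_workers)
            with Pool(n_workers) as pool:
                partials = pool.map(backproject_chunk, chunks)
            # Partial images are summed after the parallel section, mirroring the
            # per-thread accumulation used in multi-threaded reconstructions.
            return np.sum(partials, axis=0)

        if __name__ == "__main__":
            events = np.random.randint(0, N_VOXELS, size=1_000_000)
            image = parallel_backprojection(events)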

  19. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    PubMed

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data, using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.

  20. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses the metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class; the identifier of one of these servers identifies its subject class. Location-independent call numbers, based on standard vocabulary codes, are assigned to information sources. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems: one based on the Open Software Foundation/Distributed Computing Environment and the other on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for the continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
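
    The abstract describes a two-level resolution: a location-independent call number maps to a virtual shelf identifier, which a location directory then maps to an actual network location at access time. A minimal sketch of that lookup, with wholly hypothetical call numbers, shelf identifiers and URLs:

        # Two-level resolution: call number -> virtual shelf (subject server) -> network location.
        # All identifiers below are illustrative examples, not the authors' actual codes.

        CALL_NUMBER_TO_SHELF = {
            "C14.280.647": "shelf:cardiology",   # vocabulary-based call number
            "C04.557.337": "shelf:oncology",
        }

        LOCATION_DIRECTORY = {
            "shelf:cardiology": "https://cardio-shelf.example.org/",
            "shelf:oncology":   "https://onco-shelf.example.org/",
        }

        def resolve(call_number: str) -> str:
            shelf = CALL_NUMBER_TO_SHELF[call_number]   # location-independent mapping
            return LOCATION_DIRECTORY[shelf]            # mapped to an actual location only at access time

        print(resolve("C14.280.647"))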

  1. BitTorious volunteer: server-side extensions for centrally-managed volunteer storage in BitTorrent swarms.

    PubMed

    Lee, Preston V; Dinu, Valentin

    2015-11-04

    Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations by the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.

  2. I/O load balancing for big data HPC applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Arnab K.; Goyal, Arpit; Wang, Feiyi

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O, and the complex I/O path that lacks centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
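
    As a rough illustration of the placement step, the sketch below assigns data chunks to I/O servers with a minimum-cost maximum-flow formulation in which busier servers are more expensive to use. It is a much simpler stand-in for the paper's Markov-chain-informed global mapper; server names, load values and quotas are invented for the example:

        import networkx as nx

        chunks  = ["chunk0", "chunk1", "chunk2", "chunk3"]
        servers = {"oss0": 10, "oss1": 3, "oss2": 7}   # hypothetical current load per I/O server

        G = nx.DiGraph()
        for c in chunks:
            G.add_edge("src", c, capacity=1, weight=0)
            for s, load in servers.items():
                # Sending a chunk to a busier server costs more.
                G.add_edge(c, s, capacity=1, weight=load)
        for s in servers:
            G.add_edge(s, "sink", capacity=2, weight=0)   # per-server placement quota

        flow = nx.max_flow_min_cost(G, "src", "sink")
        placement = {c: next(s for s, f in flow[c].items() if f == 1) for c in chunks}
        print(placement)   # e.g. {'chunk0': 'oss1', ...}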

  3. Enhanced delegated computing using coherence

    NASA Astrophysics Data System (ADS)

    Barz, Stefanie; Dunjko, Vedran; Schlederer, Florian; Moore, Merritt; Kashefi, Elham; Walmsley, Ian A.

    2016-03-01

    A longstanding question is whether it is possible to delegate computational tasks securely—such that neither the computation nor the data is revealed to the server. Recently, both a classical and a quantum solution to this problem were found [C. Gentry, in Proceedings of the 41st Annual ACM Symposium on the Theory of Computing (Association for Computing Machinery, New York, 2009), pp. 167-178; A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, 2009), pp. 517-526]. Here, we study the first step towards the interplay between classical and quantum approaches and show how coherence can be used as a tool for secure delegated classical computation. We show that a client with limited computational capacity—restricted to an XOR gate—can perform universal classical computation by manipulating information carriers that may occupy superpositions of two states. Using single photonic qubits or coherent light, we experimentally implement secure delegated classical computations between an independent client and a server, which are installed in two different laboratories and separated by 50 m . The server has access to the light sources and measurement devices, whereas the client may use only a restricted set of passive optical devices to manipulate the information-carrying light beams. Thus, our work highlights how minimal quantum and classical resources can be combined and exploited for classical computing.

  4. Design and development of a mobile system for supporting emergency triage.

    PubMed

    Michalowski, W; Slowinski, R; Wilk, S; Farion, K J; Pike, J; Rubin, S

    2005-01-01

    Our objective was to design and develop a mobile clinical decision support system for emergency triage of different acute pain presentations. The system should interact with existing hospital information systems, run on mobile computing devices (handheld computers) and be suitable for operation in weak-connectivity conditions (with unstable connections between mobile clients and a server). The MET (Mobile Emergency Triage) system was designed following an extended client-server architecture. The client component, responsible for triage decision support, is built as a knowledge-based system, with domain ontology separated from generic problem solving methods and used for the automatic creation of a user interface. The MET system is well suited for operation in the Emergency Department of a hospital. The system's external interactions are managed by the server, while the MET clients, running on handheld computers are used by clinicians for collecting clinical data and supporting triage at the bedside. The functionality of the MET client is distributed into specialized modules, responsible for triaging specific types of acute pain presentations. The modules are stored on the server, and on request they can be transferred and executed on the mobile clients. The modular design provides for easy extension of the system's functionality. A clinical trial of the MET system validated the appropriateness of the system's design, and proved the usefulness and acceptance of the system in clinical practice. The MET system captures the necessary hospital data, allows for entry of patient information, and provides triage support. By operating on handheld computers, it fits into the regular emergency department workflow without introducing any hindrances or disruptions. It supports triage anytime and anywhere, directly at the point of care, and also can be used as an electronic patient chart, facilitating structured data collection.

  5. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    NASA Astrophysics Data System (ADS)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data-fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
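
    The abstract notes that the server programs can be driven from user software rather than a browser. As a hedged sketch of such automated access over HTTP, with an entirely hypothetical program path and parameter names (the real program URLs and form fields must be taken from the X-ray Server documentation):

        import requests

        # Hypothetical endpoint and parameters, purely for illustration.
        BASE_URL = "https://x-server.gmca.aps.anl.gov/cgi/some_program.exe"

        params = {
            "xway": 2,           # e.g. a wavelength/energy switch (assumed)
            "wave": 12.398,      # X-ray energy in keV (assumed)
            "code": "Silicon",   # crystal code (assumed)
        }

        response = requests.get(BASE_URL, params=params, timeout=60)
        response.raise_for_status()
        print(response.text[:500])   # the server returns the calculated profile as text/HTML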

  6. Access to Inter-Organization Computer Networks.

    DTIC Science & Technology

    1985-08-01

    Recovered fragments from the report documentation page: the abstract mentions necessary control mechanisms and message-based gateways that support non-real-time invocation of services (e.g., file and print servers). Indexed subject terms include system management, network operations (C.2.3), electronic mail (H.4.3), public policy issues (K.4.1), organizational impacts (K.4.3), and management of computing and information systems (K.6).

  7. Opening the Duke electronic health record to apps: Implementing SMART on FHIR.

    PubMed

    Bloomfield, Richard A; Polo-Wood, Felipe; Mandel, Joshua C; Mandl, Kenneth D

    2017-03-01

    Recognizing a need for our EHR to be highly interoperable, our team at Duke Health enabled our Epic-based electronic health record to be compatible with the Boston Children's project called Substitutable Medical Apps and Reusable Technologies (SMART), which employed Health Level Seven International's (HL7) Fast Healthcare Interoperability Resources (FHIR), commonly known as SMART on FHIR. We created a custom SMART on FHIR-compatible server infrastructure written in Node.js that served two primary functions. First, it handled API management activities such as rate-limiting, authorization, auditing, logging, and analytics. Second, it retrieved the EHR data and made it available in a FHIR-compatible format. Finally, we made the required changes to the EHR user interface to allow us to integrate several compatible apps into the provider- and patient-facing EHR workflows. After integrating SMART on FHIR into our Epic-based EHR, we demonstrated several types of apps running on the infrastructure. These included both provider- and patient-facing apps as well as apps that are closed source, open source and internally developed. We integrated the apps into the testing environment of our desktop EHR as well as our patient portal. We also demonstrated the integration of a native iOS app. In this paper, we demonstrate the successful implementation of the SMART and FHIR technologies on our Epic-based EHR and the subsequent integration of several compatible provider- and patient-facing apps. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
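
    The Duke gateway is implemented in Node.js; purely as a language-agnostic illustration of the two roles described for it (API management, here reduced to naive per-client rate limiting, and forwarding requests to a FHIR-compatible data service), a Python sketch with hypothetical URLs and limits:

        import time
        import requests
        from flask import Flask, request, jsonify

        app = Flask(__name__)
        FHIR_BACKEND = "https://ehr.example.org/fhir"   # hypothetical EHR FHIR endpoint
        RATE_LIMIT = 10                                 # requests per minute per client (assumed)
        _hits: dict[str, list[float]] = {}

        @app.route("/fhir/<path:resource>")
        def proxy(resource):
            client = request.headers.get("Authorization", request.remote_addr)
            window = [t for t in _hits.get(client, []) if t > time.time() - 60]
            if len(window) >= RATE_LIMIT:
                return jsonify({"error": "rate limit exceeded"}), 429
            _hits[client] = window + [time.time()]
            # Forward the request and return the FHIR resource unchanged.
            upstream = requests.get(f"{FHIR_BACKEND}/{resource}", params=request.args, timeout=30)
            return upstream.content, upstream.status_code, {"Content-Type": "application/fhir+json"}

        if __name__ == "__main__":
            app.run(port=8080)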

  8. On-line classification of pollutants in water using wireless portable electronic noses.

    PubMed

    Herrero, José Luis; Lozano, Jesús; Santos, José Pedro; Suárez, José Ignacio

    2016-06-01

    A portable electronic nose with database connection for on-line classification of pollutants in water is presented in this paper. It is a hand-held, lightweight and powered instrument with wireless communications capable of standalone operation. A network of similar devices can be configured for distributed measurements. It uses four resistive microsensors and headspace sampling for extracting the volatile compounds from glass vials. The measurement and control program has been developed in LabVIEW using the database connection toolkit to send the sensor data to a server for training and classification with Artificial Neural Networks (ANNs). The use of a server instead of the microprocessor of the e-nose increases the memory capacity and the computing power of the classifier and allows external users to perform data classification. To address this challenge, this paper also proposes a web-based framework (based on RESTful web services, Asynchronous JavaScript and XML, and JavaScript Object Notation) that allows remote users to train ANNs and request classification values regardless of the user's location and the type of device used. Results show that the proposed prototype can discriminate the samples measured (blank water, acetone, toluene, ammonia, formaldehyde, hydrogen peroxide, ethanol, benzene, dichloromethane, acetic acid, xylene and dimethylacetamide) with a 94% classification success rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
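
    A minimal sketch of the server-side idea described in the abstract: sensor readings arrive as JSON over a REST interface and an ANN returns the predicted pollutant class. The network shape, feature count, class labels and training data below are assumptions so the example runs end to end, not the authors' trained model:

        import numpy as np
        from flask import Flask, request, jsonify
        from sklearn.neural_network import MLPClassifier

        CLASSES = ["blank water", "acetone", "toluene", "ammonia"]   # illustrative subset

        # Train a toy ANN on random data purely so the example is self-contained.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 4))        # 4 resistive sensor features
        y_train = rng.integers(0, len(CLASSES), size=200)
        model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

        app = Flask(__name__)

        @app.route("/classify", methods=["POST"])
        def classify():
            features = np.asarray(request.get_json()["sensors"], dtype=float).reshape(1, -1)
            label = CLASSES[int(model.predict(features)[0])]
            return jsonify({"pollutant": label})

        if __name__ == "__main__":
            app.run(port=5000)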

  9. [Local communalization of clinical records between the municipal community hospital and local medical institutes by using information technology].

    PubMed

    Iijima, Shohei; Shinoki, Keiji; Ibata, Takeshi; Nakashita, Chisako; Doi, Seiko; Hidaka, Kumi; Hata, Akiko; Matsuoka, Mio; Waguchi, Hideko; Mito, Saori; Komuro, Ryutaro

    2012-12-01

    We introduced the electronic health record system in 2002. We produced a community medical network system to consolidate all medical treatment information from the local institutes in 2010. Here, we report on the present status of this system, which has been in use for the previous 2 years. We obtained a private server, set up a virtual private network (VPN) in our hospital, and installed dedicated terminals that issue an electronic certificate in 50 local institutions. The local institute applies for patient agreement through the community hospital (hospital designation style). They are then entitled to access the information of the designated patient via this local network server for one year. They can access each original medical record, sorted on the basis of the medical attendant and the chief physician; a summary of the hospital stay; records of medication prescriptions; and the results of clinical examinations. Currently, there are approximately 80 new registrations and accesses per month. Information is provided in real time, allowing up-to-date information to help guide medical treatment at the local institute. However, this information-sharing system is read-only, and there is no cooperative clinical path system. Therefore, the system is limited in meeting the demand for cooperation with the local clinics.

  10. An Integrated Intranet and Dynamic Database Application for the Security Manager at Naval Postgraduate School

    DTIC Science & Technology

    2002-09-01

    Recovered fragments from the report: the application uses Microsoft Access 2000 with Visual Basic for Applications (VBA) 6.0, noting that macros may not be supported in future versions of Access; Access 2000 offers Internet-related features and can use security features from Microsoft's SQL Server [1]; Access 2000 is a resource-intensive application, as are all Office 2000 products; and modules are functions and procedures written in the VBA programming language.

  11. MERRA/AS: The MERRA Analytic Services Project Interim Report

    NASA Technical Reports Server (NTRS)

    Schnase, John; Duffy, Dan; Tamkin, Glenn; Nadeau, Denis; Thompson, Hoot; Grieg, Cristina; Luczak, Ed; McInerney, Mark

    2013-01-01

    MERRA/AS is a cyberinfrastructure resource that will combine iRODS-based Climate Data Server (CDS) capabilities with Cloudera MapReduce to serve MERRA analytic products, store the MERRA reanalysis data collection in an HDFS to enable parallel, high-performance, storage-side data reductions, manage storage-side driver, mapper, and reducer code sets and realized objects for users, and provide a library of commonly used spatiotemporal operations that can be composed to enable higher-order analyses.
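
    Not the MERRA/AS code, but a toy map/reduce pair showing the general shape of a storage-side reduction such as a monthly mean of one variable; the record layout and variable name are hypothetical:

        from functools import reduce

        records = [
            # (year, month, lat, lon, t2m) -- hypothetical MERRA-like tuples
            (1985, 1, 40.0, -105.0, 271.3),
            (1985, 1, 40.5, -105.0, 270.8),
            (1985, 2, 40.0, -105.0, 272.1),
        ]

        def mapper(rec):
            year, month, lat, lon, t2m = rec
            return ((year, month), (t2m, 1))          # key -> (partial sum, count)

        def reducer(acc, kv):
            key, (s, n) = kv
            total, count = acc.get(key, (0.0, 0))
            acc[key] = (total + s, count + n)
            return acc

        partials = reduce(reducer, map(mapper, records), {})
        monthly_mean = {k: s / n for k, (s, n) in partials.items()}
        print(monthly_mean)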

  12. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  13. dbWGFP: a database and web server of human whole-genome single nucleotide variants and their functional predictions.

    PubMed

    Wu, Jiaxin; Wu, Mengmeng; Li, Lianshuo; Liu, Zhuo; Zeng, Wanwen; Jiang, Rui

    2016-01-01

    The recent advancement of the next generation sequencing technology has enabled the fast and low-cost detection of all genetic variants spreading across the entire human genome, making the application of whole-genome sequencing a tendency in the study of disease-causing genetic variants. Nevertheless, there still lacks a repository that collects predictions of functionally damaging effects of human genetic variants, though it has been well recognized that such predictions play a central role in the analysis of whole-genome sequencing data. To fill this gap, we developed a database named dbWGFP (a database and web server of human whole-genome single nucleotide variants and their functional predictions) that contains functional predictions and annotations of nearly 8.58 billion possible human whole-genome single nucleotide variants. Specifically, this database integrates 48 functional predictions calculated by 17 popular computational methods and 44 valuable annotations obtained from various data sources. Standalone software, user-friendly query services and free downloads of this database are available at http://bioinfo.au.tsinghua.edu.cn/dbwgfp. dbWGFP provides a valuable resource for the analysis of whole-genome sequencing, exome sequencing and SNP array data, thereby complementing existing data sources and computational resources in deciphering genetic bases of human inherited diseases. © The Author(s) 2016. Published by Oxford University Press.

  14. Telecardiology through ubiquitous internet services.

    PubMed

    Costa, Carlos; Oliveira, José Luís

    2012-09-01

    Implementation of telemedicine in many clinical scenarios improves the quality of care and patient safety. However, its use is hindered by operational, infrastructural and financial limitations. This paper describes the design and deployment of a plug-and-play telemedicine platform for cardiologic applications. The novelty of this work is that, instead of complex middleware, it uses a common electronic mailbox and its protocols to support the core of the telemedicine information system and associated data (ECG and medical images). A security model was also developed to ensure data privacy and confidentiality. The solution was validated in several real environments, in terms of performance, robustness, scalability and work efficiency. During the past three years it has been used on a daily basis by several small and medium-sized laboratories. The advantage of using an Internet service, as opposed to a server-based infrastructure, is that it does not require IT resources to set up the telemedicine centre. A doctor can configure and operate the centre with the same simplicity as any other Internet browser application. The solution is currently in use to support remote diagnosis and reporting of ECG and Echocardiography in Portugal and Angola. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
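
    A minimal sketch of the core idea of using an ordinary electronic mailbox as the transport for telemedicine data. The addresses, mail host and attachment are hypothetical, and the real system layers encryption and its own security model on top of this:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "lab@example.org"
        msg["To"] = "cardiology-centre@example.org"
        msg["Subject"] = "ECG exam 12345 for remote reporting"
        msg.set_content("Encrypted ECG study attached.")

        with open("ecg_12345.xml.enc", "rb") as f:           # pre-encrypted payload (assumed)
            msg.add_attachment(f.read(), maintype="application",
                               subtype="octet-stream", filename="ecg_12345.xml.enc")

        with smtplib.SMTP("smtp.example.org", 587) as smtp:  # standard mail submission
            smtp.starttls()
            smtp.login("lab@example.org", "app-password")
            smtp.send_message(msg)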

  15. Investing in youth tobacco control: a review of smoking prevention and control strategies.

    PubMed

    Lantz, P M; Jacobson, P D; Warner, K E; Wasserman, J; Pollack, H A; Berson, J; Ahlstrom, A

    2000-03-01

    To provide a comprehensive review of interventions and policies aimed at reducing youth cigarette smoking in the United States, including strategies that have undergone evaluation and emerging innovations that have not yet been assessed for efficacy. Medline literature searches, books, reports, electronic list servers, and interviews with tobacco control advocates. Interventions and policy approaches that have been assessed or evaluated were categorised using a typology with seven categories (school based, community interventions, mass media/public education, advertising restrictions, youth access restrictions, tobacco excise taxes, and direct restrictions on smoking). Novel and largely untested interventions were described using nine categories. Youth smoking prevention and control efforts have had mixed results. However, this review suggests a number of prevention strategies that are promising, especially if conducted in a coordinated way to take advantage of potential synergies across interventions. Several types of strategies warrant additional attention and evaluation, including aggressive media campaigns, teen smoking cessation programmes, social environment changes, community interventions, and increasing cigarette prices. A significant proportion of the resources obtained from the recent settlement between 46 US states and the tobacco industry should be devoted to expanding, improving and evaluating "youth centred" tobacco prevention and control activities.

  16. Topographic mapping data semantics through data conversion and enhancement: Chapter 7

    USGS Publications Warehouse

    Varanka, Dalia; Carter, Jonathan; Usery, E. Lynn; Shoberg, Thomas; Edited by Ashish, Naveen; Sheth, Amit P.

    2011-01-01

    This paper presents research on the semantics of topographic data for triples and ontologies to blend the capabilities of the Semantic Web and The National Map of the U.S. Geological Survey. Automated conversion of relational topographic data of several geographic sample areas to the triple data model standard resulted in relatively poor semantic associations. Further research employed vocabularies of feature type and spatial relation terms. A user interface was designed to model the capture of non-standard terms relevant to public users and to map those terms to existing data models of The National Map through the use of ontology. Server access for the study area triple stores was made publicly available, illustrating how the development of linked data may transform institutional policies to open government data resources to the public. This paper presents these data conversion and research techniques that were tested as open linked data concepts leveraged through a user-centered interface and open USGS server access to the public.
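
    A small illustration (not the USGS code) of expressing one relational topographic record as triples of the kind the chapter describes; the namespace, feature URIs and terms are hypothetical:

        from rdflib import Graph, Literal, Namespace, RDF

        TNM = Namespace("http://example.org/thenationalmap/")
        g = Graph()

        feature = TNM["feature/12345"]
        g.add((feature, RDF.type, TNM.Stream))
        g.add((feature, TNM.name, Literal("Clear Creek")))
        g.add((feature, TNM.flowsInto, TNM["feature/67890"]))

        print(g.serialize(format="turtle"))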

  17. The Standard Autonomous File Server, a Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS, and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  18. Research on realization scheme of interactive voice response (IVR) system

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Zhu, Guangxi

    2003-12-01

    In this paper, a novel interactive voice response (IVR) system is proposed that differs substantially from traditional designs. Using software operation and network control, the IVR system relies only on software in the server hosting the system and on the hardware of network terminals on the user side, such as a gateway (GW), personal gateway (PG) or PC. The system transmits audio using the Real-time Transport Protocol (RTP) over the Internet to the network terminals and controls the call flow using a finite state machine (FSM) driven by H.245 messages sent from the user side and by system control factors. Compared with other existing schemes, this IVR system offers several advantages: it greatly reduces system cost, fully utilizes existing network resources and enhances flexibility. The system can be deployed on any service server anywhere on the Internet and is even suitable for wireless applications based on packet-switched communication. The IVR system has been implemented and has passed system testing.
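
    A toy finite state machine for an IVR dialogue, driven by events such as the H.245 user-input messages mentioned in the abstract. The states, events and prompts are illustrative only, not the paper's call-flow:

        TRANSITIONS = {
            ("welcome", "digit_1"): ("balance", "Playing account balance prompt"),
            ("welcome", "digit_2"): ("agent",   "Transferring to an operator"),
            ("balance", "digit_0"): ("welcome", "Returning to main menu"),
        }

        def step(state, event):
            next_state, action = TRANSITIONS.get((state, event), (state, "Replaying current prompt"))
            print(f"{state} --{event}--> {next_state}: {action}")
            return next_state

        state = "welcome"
        for event in ["digit_1", "digit_0", "digit_9"]:
            state = step(state, event)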

  19. MyDas, an Extensible Java DAS Server

    PubMed Central

    Jimenez, Rafael C.; Quinn, Antony F.; Jenkinson, Andrew M.; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details. PMID:23028496

  20. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. The system end-user access depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network based training offers new means of improved access and sharing of learning resources and expertise, as promising supplements in training.

  1. Patome: a database server for biological sequence annotation and analysis in issued patents and published patent applications.

    PubMed

    Lee, Byungwook; Kim, Taehyung; Kim, Seon-Kyu; Lee, Kwang H; Lee, Doheon

    2007-01-01

    With the advent of automated and high-throughput techniques, the number of patent applications containing biological sequences has been increasing rapidly. However, they have attracted relatively little attention compared to other sequence resources. We have built a database server called Patome, which contains biological sequence data disclosed in patents and published applications, as well as their analysis information. The analysis is divided into two steps. The first is an annotation step in which the disclosed sequences were annotated with RefSeq database. The second is an association step where the sequences were linked to Entrez Gene, OMIM and GO databases, and their results were saved as a gene-patent table. From the analysis, we found that 55% of human genes were associated with patenting. The gene-patent table can be used to identify whether a particular gene or disease is related to patenting. Patome is available at http://www.patome.org/; the information is updated bimonthly.

  2. Patome: a database server for biological sequence annotation and analysis in issued patents and published patent applications

    PubMed Central

    Lee, Byungwook; Kim, Taehyung; Kim, Seon-Kyu; Lee, Kwang H.; Lee, Doheon

    2007-01-01

    With the advent of automated and high-throughput techniques, the number of patent applications containing biological sequences has been increasing rapidly. However, they have attracted relatively little attention compared to other sequence resources. We have built a database server called Patome, which contains biological sequence data disclosed in patents and published applications, as well as their analysis information. The analysis is divided into two steps. The first is an annotation step in which the disclosed sequences were annotated with RefSeq database. The second is an association step where the sequences were linked to Entrez Gene, OMIM and GO databases, and their results were saved as a gene–patent table. From the analysis, we found that 55% of human genes were associated with patenting. The gene–patent table can be used to identify whether a particular gene or disease is related to patenting. Patome is available at http://www.patome.org/; the information is updated bimonthly. PMID:17085479

  3. m2-ABKS: Attribute-Based Multi-Keyword Search over Encrypted Personal Health Records in Multi-Owner Setting.

    PubMed

    Miao, Yinbin; Ma, Jianfeng; Liu, Ximeng; Wei, Fushan; Liu, Zhiquan; Wang, Xu An

    2016-11-01

    Online personal health record (PHR) systems are increasingly inclined to shift data storage and search operations to the cloud server so as to enjoy elastic resources and lessen the computational burden in cloud storage. As multiple patients' data is always stored in the cloud server simultaneously, it is a challenge to guarantee the confidentiality of PHR data and allow data users to search encrypted data in an efficient and privacy-preserving way. To this end, we design a secure cryptographic primitive, called attribute-based multi-keyword search over encrypted personal health records in a multi-owner setting, to support both fine-grained access control and multi-keyword search via Ciphertext-Policy Attribute-Based Encryption. Formal security analysis proves our scheme is selectively secure against chosen-keyword attack. As a further contribution, we conduct empirical experiments over a real-world dataset to show its feasibility and practicality in a broad range of actual scenarios without incurring additional computational burden.

  4. MyDas, an extensible Java DAS server.

    PubMed

    Salazar, Gustavo A; García, Leyla J; Jones, Philip; Jimenez, Rafael C; Quinn, Antony F; Jenkinson, Andrew M; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.

  5. CMS distributed data analysis with CRAB3

    NASA Astrophysics Data System (ADS)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  6. BUSCA: an integrative web server to predict subcellular localization of proteins.

    PubMed

    Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Profiti, Giuseppe; Casadio, Rita

    2018-04-30

    Here, we present BUSCA (http://busca.biocomp.unibo.it), a novel web server that integrates different computational tools for predicting protein subcellular localization. BUSCA combines methods for identifying signal and transit peptides (DeepSig and TPpred3), GPI-anchors (PredGPI) and transmembrane domains (ENSEMBLE3.0 and BetAware) with tools for discriminating subcellular localization of both globular and membrane proteins (BaCelLo, MemLoci and SChloro). Outcomes from the different tools are processed and integrated for annotating subcellular localization of both eukaryotic and bacterial protein sequences. We benchmark BUSCA against protein targets derived from recent CAFA experiments and other specific data sets, reporting performance at the state of the art. BUSCA scores better than all other evaluated methods on 2732 targets from CAFA2, with an F1 value equal to 0.49, and is among the best methods when predicting targets from CAFA3. We propose BUSCA as an integrated and accurate resource for the annotation of protein subcellular localization.

  7. The NIF DISCO Framework: Facilitating Automated Integration of Neuroscience Content on the Web

    PubMed Central

    Marenco, Luis; Wang, Rixin; Shepherd, Gordon M.; Miller, Perry L.

    2013-01-01

    This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are “harvested” on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource’s content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) “LinkOut” to a resource’s data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource’s lexicon and ontology, 5) sharing a resource’s database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research. PMID:20387131

  8. Secure Dynamic access control scheme of PHR in cloud computing.

    PubMed

    Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching

    2012-12-01

    With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely applied. A new style of medical information exchange, the personal health record (PHR), is gradually being developed. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media, under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records, which is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud, and a secure protection scheme is required to encrypt each patient's medical records before storing PHRs on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while maintaining flexibility and efficiency. A new PHR access control scheme for Cloud computing environments is proposed in this study. Using Lagrange interpolation polynomials to establish a secure and effective PHR information access scheme, it allows accurate and secure access to PHRs and is suitable for large numbers of users. Moreover, the scheme dynamically supports multiple users in Cloud computing environments while preserving personal privacy, and grants legal authorities access to PHRs. Security and effectiveness analyses show that the proposed PHR access scheme for Cloud computing environments is flexible and secure and can effectively handle the real-time appending and deletion of user access authorizations and the appending and revision of PHR records.
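
    The scheme is built on Lagrange interpolation polynomials. As a minimal illustration of that mathematical building block only (a Shamir-style threshold reconstruction over a prime field, not the authors' full access-control scheme), with an arbitrary example polynomial:

        PRIME = 2**127 - 1   # a Mersenne prime, used as the field modulus

        def lagrange_at_zero(shares, p=PRIME):
            """Reconstruct the secret f(0) from (x, f(x)) shares via Lagrange interpolation."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = (num * -xj) % p
                        den = (den * (xi - xj)) % p
                secret = (secret + yi * num * pow(den, -1, p)) % p
            return secret

        # f(x) = 42 + 7x + 3x^2 (degree 2, so any 3 shares recover the secret 42)
        shares = [(x, (42 + 7 * x + 3 * x * x) % PRIME) for x in (1, 2, 3)]
        print(lagrange_at_zero(shares))   # -> 42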

  9. Design and implementation of streaming media server cluster based on FFMpeg.

    PubMed

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
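
    A sketch of the redirection idea in the abstract: the portal picks the least-loaded streaming server, based on load values the servers feed back, and redirects the client to it. Server names, load values and the URL scheme are hypothetical, and the paper's algorithm also weighs user location:

        SERVER_LOAD = {          # periodically refreshed from the servers' active feedback
            "stream-bj.example.com": 0.82,
            "stream-sh.example.com": 0.35,
            "stream-gz.example.com": 0.57,
        }

        def redirect_url(video_id: str) -> str:
            # Choose the server currently reporting the lowest load.
            best = min(SERVER_LOAD, key=SERVER_LOAD.get)
            return f"rtmp://{best}/live/{video_id}"

        print(redirect_url("channel1"))   # -> rtmp://stream-sh.example.com/live/channel1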

  10. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    PubMed Central

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187

  11. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life, and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources; this paper concentrates on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. The paper addresses the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, and the importance of beginning with POSIX-compliant code. It focuses on the development approach, explaining the software lifecycle, and covers other aspects of development including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper also addresses the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing; test results, including performance numbers compared with Unix servers, are included. The deployment approach is also addressed, including user involvement in testing and the need for a smooth transition while maintaining real-time support. An important aspect of the paper involves challenges and lessons learned. These include COTS product compatibility, implications of phasing decisions, and tracking of dependencies, particularly non-software dependencies. The paper also discusses the scheduling challenges of providing real-time flight support during the migration and the requirement to incorporate into the migration changes being made simultaneously for flight support.

  12. Environmental Monitoring Using Sensor Networks

    NASA Astrophysics Data System (ADS)

    Yang, J.; Zhang, C.; Li, X.; Huang, Y.; Fu, S.; Acevedo, M. F.

    2008-12-01

    Environmental observatories, consisting of a variety of sensor systems, computational resources and informatics, are important for us to observe, model, predict, and ultimately help preserve the health of the nature. The commoditization and proliferation of coin-to-palm sized wireless sensors will allow environmental monitoring with unprecedented fine spatial and temporal resolution. Once scattered around, these sensors can identify themselves, locate their positions, describe their functions, and self-organize into a network. They communicate through wireless channel with nearby sensors and transmit data through multi-hop protocols to a gateway, which can forward information to a remote data server. In this project, we describe an environmental observatory called Texas Environmental Observatory (TEO) that incorporates a sensor network system with intertwined wired and wireless sensors. We are enhancing and expanding the existing wired weather stations to include wireless sensor networks (WSNs) and telemetry using solar-powered cellular modems. The new WSNs will monitor soil moisture and support long-term hydrologic modeling. Hydrologic models are helpful in predicting how changes in land cover translate into changes in the stream flow regime. These models require inputs that are difficult to measure over large areas, especially variables related to storm events, such as soil moisture antecedent conditions and rainfall amount and intensity. This will also contribute to improve rainfall estimations from meteorological radar data and enhance hydrological forecasts. Sensor data are transmitted from monitoring site to a Central Data Collection (CDC) Server. We incorporate a GPRS modem for wireless telemetry, a single-board computer (SBC) as Remote Field Gateway (RFG) Server, and a WSN for distributed soil moisture monitoring. The RFG provides effective control, management, and coordination of two independent sensor systems, i.e., a traditional datalogger-based wired sensor system and the WSN-based wireless sensor system. The RFG also supports remote manipulation of the devices in the field such as the SBC, datalogger, and WSN. Sensor data collected from the distributed monitoring stations are stored in a database (DB) Server. The CDC Server acts as an intermediate component to hide the heterogeneity of different devices and support data validation required by the DB Server. Daemon programs running on the CDC Server pre-process the data before it is inserted into the database, and periodically perform synchronization tasks. A SWE-compliant data repository is installed to enable data exchange, accepting data from both internal DB Server and external sources through the OGC web services. The web portal, i.e. TEO Online, serves as a user-friendly interface for data visualization, analysis, synthesis, modeling, and K-12 educational outreach activities. It also provides useful capabilities for system developers and operators to remotely monitor system status and remotely update software and system configuration, which greatly simplifies the system debugging and maintenance tasks. We also implement Sensor Observation Services (SOS) at this layer, conforming to the SWE standard to facilitate data exchange. The standard SensorML/O&M data representation makes it easy to integrate our sensor data into the existing Geographic Information Systems (GIS) web services and exchange the data with other organizations.

  13. NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF

    PubMed Central

    Halavi, Maryam; Polavaram, Sridevi; Donohue, Duncan E.; Hamilton, Gail; Hoyt, Jeffrey; Smith, Kenneth P.; Ascoli, Giorgio A.

    2009-01-01

    Neuronal morphology affects network connectivity, plasticity, and information processing. Uncovering the design principles and functional consequences of dendritic and axonal shape necessitates quantitative analysis and computational modeling of detailed experimental data. Digital reconstructions provide the required neuromorphological descriptions in a parsimonious, comprehensive, and reliable numerical format. NeuroMorpho.Org is the largest web-accessible repository service for digitally reconstructed neurons and one of the integrated resources in the Neuroscience Information Framework (NIF). Here we describe the NeuroMorpho.Org approach as an exemplary experience in designing, creating, populating, and curating a neuroscience digital resource. The simple three-tier architecture of NeuroMorpho.Org (web client, web server, and relational database) encompasses all necessary elements to support a large-scale, integrate-able repository. The data content, while heterogeneous in scientific scope and experimental origin, is unified in format and presentation by an in-house standardization protocol. The server application (MRALD) is secure, customizable, and developer-friendly. Centralized processing and expert annotation yield a comprehensive set of metadata that enriches and complements the raw data. The thoroughly tested interface design allows for optimal and effective data search and retrieval. Availability of data in both original and standardized formats ensures compatibility with existing resources and fosters further tool development. Other key functions enable extensive exploration and discovery, including 3D and interactive visualization of branching, frequently measured morphometrics, and reciprocal links to the original PubMed publications. The integration of NeuroMorpho.Org with version 1 of the NIF (NIFv1) provides the opportunity to access morphological data in the context of other relevant resources and diverse subdomains of neuroscience, opening exciting new possibilities in data mining and knowledge discovery. The outcome of such coordination is the rapid and powerful advancement of neuroscience research at both the conceptual and technological level. PMID:18949582
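
    As an illustration of the kind of morphometric analysis that standardized digital reconstructions enable, the sketch below computes one frequently measured morphometric, total cable length, from a reconstruction file in the seven-column SWC format in which repositories such as NeuroMorpho.Org distribute neurons. It is a generic, hypothetical example rather than NeuroMorpho.Org code, and the file path is an assumption.

```python
# Compute total cable length (sum of parent-child segment lengths) from an
# SWC reconstruction: each data line holds id, type, x, y, z, radius, parent.
import math

def total_cable_length(swc_path):
    nodes = {}  # node id -> (x, y, z, parent id)
    with open(swc_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and header lines
            nid, _type, x, y, z, _radius, parent = line.split()[:7]
            nodes[int(nid)] = (float(x), float(y), float(z), int(parent))
    length = 0.0
    for x, y, z, parent in nodes.values():
        if parent == -1 or parent not in nodes:
            continue  # the root node has no incoming segment
        px, py, pz, _ = nodes[parent]
        length += math.dist((x, y, z), (px, py, pz))
    return length

# Example usage (the file name is a placeholder):
# print(total_cable_length("neuron.swc"))
```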

  14. NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF.

    PubMed

    Halavi, Maryam; Polavaram, Sridevi; Donohue, Duncan E; Hamilton, Gail; Hoyt, Jeffrey; Smith, Kenneth P; Ascoli, Giorgio A

    2008-09-01

    Neuronal morphology affects network connectivity, plasticity, and information processing. Uncovering the design principles and functional consequences of dendritic and axonal shape necessitates quantitative analysis and computational modeling of detailed experimental data. Digital reconstructions provide the required neuromorphological descriptions in a parsimonious, comprehensive, and reliable numerical format. NeuroMorpho.Org is the largest web-accessible repository service for digitally reconstructed neurons and one of the integrated resources in the Neuroscience Information Framework (NIF). Here we describe the NeuroMorpho.Org approach as an exemplary experience in designing, creating, populating, and curating a neuroscience digital resource. The simple three-tier architecture of NeuroMorpho.Org (web client, web server, and relational database) encompasses all necessary elements to support a large-scale, integrate-able repository. The data content, while heterogeneous in scientific scope and experimental origin, is unified in format and presentation by an in-house standardization protocol. The server application (MRALD) is secure, customizable, and developer-friendly. Centralized processing and expert annotation yield a comprehensive set of metadata that enriches and complements the raw data. The thoroughly tested interface design allows for optimal and effective data search and retrieval. Availability of data in both original and standardized formats ensures compatibility with existing resources and fosters further tool development. Other key functions enable extensive exploration and discovery, including 3D and interactive visualization of branching, frequently measured morphometrics, and reciprocal links to the original PubMed publications. The integration of NeuroMorpho.Org with version 1 of the NIF (NIFv1) provides the opportunity to access morphological data in the context of other relevant resources and diverse subdomains of neuroscience, opening exciting new possibilities in data mining and knowledge discovery. The outcome of such coordination is the rapid and powerful advancement of neuroscience research at both the conceptual and technological level.

  15. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.

    PubMed

    Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while maintaining quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-based statistical system. The buffered data are checked to determine whether they follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization from an IaaS cloud of one hundred and twenty servers.
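
    A minimal sketch of the selection logic described above follows, written in Python rather than R for illustration. It tests the buffered series for normality, fits candidate ARIMA models and keeps the one with minimum AIC when the data look Gaussian, and otherwise trains a small autoregressive neural network on lagged values. The library choices (scipy, statsmodels, scikit-learn), the candidate orders, and the use of a fixed network size in place of NIC-based selection are assumptions for illustration, not the authors' implementation.

```python
# Choose between ARIMA (Gaussian data, minimum AIC) and a small autoregressive
# neural network (non-Gaussian data), then return a forecasting function.
import numpy as np
from scipy.stats import shapiro
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def lagged_matrix(series, lags):
    """Build (X, y) pairs where each row of X holds the previous `lags` values."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    return X, series[lags:]

def fit_predictor(series, lags=4):
    series = np.asarray(series, dtype=float)
    _, p_value = shapiro(series)
    if p_value > 0.05:  # data plausibly Gaussian: pick the ARIMA order with minimum AIC
        candidates = [(p, d, q) for p in range(3) for d in range(2) for q in range(3)]
        best = min((ARIMA(series, order=o).fit() for o in candidates), key=lambda m: m.aic)
        return lambda steps: np.asarray(best.forecast(steps))
    # otherwise: a small autoregressive neural network on lagged values
    X, y = lagged_matrix(series, lags)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    def forecast(steps):
        history = list(series[-lags:])
        out = []
        for _ in range(steps):
            nxt = float(net.predict([history[-lags:]])[0])
            out.append(nxt)
            history.append(nxt)
        return np.array(out)
    return forecast

# Example usage on a synthetic CPU-utilization trace (values in percent):
# cpu = 50 + 10 * np.sin(np.linspace(0, 20, 200)) + np.random.randn(200)
# print(fit_predictor(cpu)(5))
```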

  16. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud

    PubMed Central

    Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while maintaining quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-based statistical system. The buffered data are checked to determine whether they follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization from an IaaS cloud of one hundred and twenty servers. PMID:28811819

  17. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate change, the combination of Earth observation data and global climate model projections is crucial not only to scientists but also to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyberinfrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high-performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capability. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
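
    To illustrate the idea of server-side computation through a WPS interface, the sketch below issues a hypothetical Execute request asking a data node to average a variable over a spatial domain without downloading the underlying files. The endpoint URL, the process identifier, and the datainputs encoding are assumptions made for illustration and do not represent the actual ESGF CWT API specification.

```python
# Hypothetical WPS Execute request for a server-side "average" operation,
# in the spirit of the ESGF CWT API described above.
import json
import requests

WPS_ENDPOINT = "https://esgf-node.example.org/wps"  # placeholder data node URL

def request_average(dataset_uri, variable, lat_range, lon_range):
    """Ask the server to average a variable over a spatial domain, server-side."""
    datainputs = {
        "domain": {"lat": {"start": lat_range[0], "end": lat_range[1]},
                   "lon": {"start": lon_range[0], "end": lon_range[1]}},
        "variable": {"uri": dataset_uri, "id": variable},
        "operation": {"name": "average", "axes": "xy"},
    }
    params = {
        "service": "WPS",
        "request": "Execute",
        "identifier": "average",            # hypothetical process name
        "datainputs": json.dumps(datainputs),
    }
    response = requests.get(WPS_ENDPOINT, params=params, timeout=60)
    response.raise_for_status()
    return response.text  # WPS responses are XML status/result documents

# Example usage (the dataset URI is a placeholder):
# print(request_average("esgf:tas_Amon_historical.nc", "tas", (-90, 90), (0, 360)))
```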

  18. Real Students and Virtual Field Trips

    NASA Astrophysics Data System (ADS)

    de Paor, D. G.; Whitmeyer, S. J.; Bailey, J. E.; Schott, R. C.; Treves, R.; Scientific Team of www.DigitalPlanet.org

    2010-12-01

    Field trips have always been one of the major attractions of geoscience education, distinguishing courses in geology, geography, oceanography, etc., from laboratory-bound sciences such as nuclear physics or biochemistry. However, traditional field trips have been limited to regions with educationally useful exposures and to student populations with the necessary free time and financial resources. Two-year or commuter colleges serving worker-students cannot realistically insist on completion of field assignments, and even well-endowed universities cannot take students to more than a handful of the best available field localities. Many instructors have attempted to bring the field into the classroom with the aid of technology. So-called Virtual Field Trips (VFTs) cannot replace the real experience for those who can have it, but they are much better than nothing at all. We have been working to create transformative improvements in VFTs using four concepts: (i) self-drive virtual vehicles that students use to navigate the virtual globe under their own control; (ii) GigaPan outcrops that reveal successively more detailed views of key locations; (iii) virtual specimens scanned from real rocks, minerals, and fossils; and (iv) embedded assessment via logging of student actions. Students are represented by avatars of their own choosing and travel either together in a virtual field vehicle or separately. When they approach virtual outcrops, virtual specimens become collectable and can be examined using JavaScript controls that change magnification and orientation. These instructional resources are being made available via a new server under the domain name www.DigitalPlanet.org. The server will log student progress and provide immediate feedback. We aim to disseminate these resources widely and welcome feedback from instructors and students.

  19. Digital spatial soil and land information for agriculture development

    NASA Astrophysics Data System (ADS)

    Sharma, R. K.; Laghathe, Pankaj; Meena, Ranglal; Barman, Alok Kumar; Das, Satyendra Nath

    2006-12-01

    Natural resource management calls for study of the natural systems prevailing in the country. In India, floods and droughts occur regularly, causing extensive damage to natural wealth, including agriculture, which is crucial for sustaining economic growth. The Indian subcontinent is drained by many major rivers and their tributaries, where the watershed, as the hydrological unit, forms a natural system that allows management and development of land resources in harmony with nature. Acquisition of various kinds and levels of soil and land characteristics using both conventional and remote sensing techniques, and the subsequent development of a digital spatial database, are essential to evolve a strategy for planning watershed development programmes, their monitoring, and impact evaluation. The multi-temporal capability of remote sensing sensors helps to update the existing database, which is dynamic in nature. The paper outlines the concept of spatial database development: generation using remote sensing techniques, design of the data structure, standardization, and integration with watershed layers and various non-spatial attribute data for applications covering watershed development planning, alternative land use planning, soil and water conservation, diversified agricultural practices, generation of soil health cards, soil and land reclamation, etc. The soil and land characteristics are vital for deriving various interpretative groupings or a master table that helps to generate the desired level of information for various clients using the GIS platform. The digital spatial database on soils and watersheds generated by the All India Soil and Land Use Survey will act as a sub-server of the main GIS-based Web Server being hosted by the Planning Commission for application of spatial data for planning purposes under the G2G domain. It will facilitate e-governance for natural resource management using modern technology.

  20. San Mateo County's Server Information Program (S.I.P.): A Community-Based Alcohol Server Training Program.

    ERIC Educational Resources Information Center

    de Miranda, John

    The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…
