Sample records for n-tier client-server

  1. Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems

    NASA Astrophysics Data System (ADS)

    Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki

    Recent developments in network technology have made multi-tier server systems practical, in which several tiers perform functionally different processing requested by clients. An important issue is how to allocate the systems' resources to clients dynamically based on their current requests. Q-RAM has been proposed for resource allocation in real-time systems. In server systems, it is important that the execution results of all applications requested by clients attain the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation that is fair with respect to the QoS levels of clients' requests. We also consider the problem of assigning physical machines in each tier to a sleep state so that energy consumption is minimized.
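    The fairness objective in this abstract can be illustrated with a toy calculation. Assuming, for illustration only, that client i's QoS grows linearly at rate r_i per unit of allocated resource (the paper's Q-RAM extension handles general utility curves), equal QoS under a fixed budget is reached by allocating inversely to r_i:

```python
def equal_qos_allocation(budget, rates):
    """Split a resource budget so every client reaches the same QoS level,
    assuming client i's QoS equals rates[i] * allocation[i].
    Toy linear model; rates and budget below are illustrative values."""
    weights = [1.0 / r for r in rates]      # slower clients need more resource
    total = sum(weights)
    allocation = [budget * w / total for w in weights]
    common_qos = rates[0] * allocation[0]   # identical for every client
    return allocation, common_qos

alloc, qos = equal_qos_allocation(budget=12.0, rates=[1.0, 2.0, 3.0])
# every product rates[i] * alloc[i] equals the same common QoS level
```

    The closed form follows because equalizing r_i * a_i under a fixed sum forces a_i proportional to 1/r_i.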

  2. Client-server programs analysis in the EPOCA environment

    NASA Astrophysics Data System (ADS)

    Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano

    1996-09-01

    Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer must first ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, the designer has to guarantee that the program meets predefined performance requirements. This paper addresses these issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.

  3. UNIX based client/server hospital information system.

    PubMed

    Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N

    1995-01-01

    SMILE (St. Luke's Medical Center Information Linkage Environment) is an HIS built as a client/server system using UNIX workstations on an open network LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g., a TP monitor) as the mainframe. So, our system problems and the steps used to solve them were reviewed. Several points that will be necessary for a client/server system with a UNIX workstation in the future are presented.

  4. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.

  5. SQLGEN: a framework for rapid client-server database application development.

    PubMed

    Nadkarni, P M; Cheung, K H

    1995-12-01

    SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
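    The dictionary-driven dynamic SQL generation described above can be sketched in miniature: a generic routine reads client-side table metadata, emits a parameterized SELECT, and refuses columns the dictionary does not know about. The table and column names are hypothetical, and SQLGEN's real dictionary also carries access rights and server information:

```python
def build_select(data_dict, table, columns=None, where=None):
    """Generate a parameterized SELECT from a client-side data dictionary.
    data_dict maps table name -> {"columns": [...]}; hypothetical metadata."""
    meta = data_dict[table]
    cols = columns or meta["columns"]
    unknown = set(cols) - set(meta["columns"])
    if unknown:
        raise ValueError(f"unknown columns: {sorted(unknown)}")
    sql = f"SELECT {', '.join(cols)} FROM {table}"
    params = []
    if where:
        clauses = []
        for col, value in sorted(where.items()):
            clauses.append(f"{col} = ?")
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

data_dict = {"patient": {"columns": ["id", "name", "dob"]}}
sql, params = build_select(data_dict, "patient", ["name"], {"id": 7})
# sql == "SELECT name FROM patient WHERE id = ?", params == [7]
```

    Binding values through parameters rather than string interpolation is what lets the generated SQL run safely against different servers.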

  6. Implementation of Sensor Twitter Feed Web Service Server and Client

    DTIC Science & Technology

    2016-12-01

    ARL-TN-0807 ● DEC 2016 ● US Army Research Laboratory. Implementation of Sensor Twitter Feed Web Service Server and Client, by Bhagyashree V Kulkarni (University of Maryland) and Michael H Lee (Computational...

  7. Usage of Thin-Client/Server Architecture in Computer Aided Education

    ERIC Educational Resources Information Center

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  8. Realizing the Potential of Information Resources: Information, Technology, and Services. Track 3: Serving Clients with Client/Server.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Eight papers are presented from the 1995 CAUSE conference track on client/server issues faced by managers of information technology at colleges and universities. The papers include: (1) "The Realities of Client/Server Development and Implementation" (Mary Ann Carr and Alan Hartwig), which examines Carnegie Mellon University's transition…

  9. Client/Server Architecture Promises Radical Changes.

    ERIC Educational Resources Information Center

    Freeman, Grey; York, Jerry

    1991-01-01

    This article discusses the emergence of the client/server paradigm for the delivery of computer applications, its emergence in response to the proliferation of microcomputers and local area networks, the applicability of the model in academic institutions, and its implications for college campus information technology organizations. (Author/DB)

  10. Client-Server Connection Status Monitoring Using Ajax Push Technology

    NASA Technical Reports Server (NTRS)

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.

  11. Client-Server: What Is It and Are We There Yet?

    ERIC Educational Resources Information Center

    Gershenfeld, Nancy

    1995-01-01

    Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…

  12. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    ERIC Educational Resources Information Center

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  13. From Server to Desktop: Capital and Institutional Planning for Client/Server Technology.

    ERIC Educational Resources Information Center

    Mullig, Richard M.; Frey, Keith W.

    1994-01-01

    Beginning with a request for an enhanced system for decision/strategic planning support, the University of Chicago's biological sciences division has developed a range of administrative client/server tools, instituted a capital replacement plan for desktop technology, and created a planning and staffing approach enabling rapid introduction of new…

  14. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photomultiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  15. Volume serving and media management in a networked, distributed client/server environment

    NASA Technical Reports Server (NTRS)

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  16. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  17. A Configurable Internet Telemetry Server / Remote Client System

    NASA Astrophysics Data System (ADS)

    Boyd, W. T.; Hopkins, A.; Abbott, M. J.; Girouard, F. R.

    2000-05-01

    We have created a general, object-oriented software framework in Java for remote viewing of telemetry over the Internet. The general system consists of a data server and a remote client that can be extended by any project that uses telemetry to implement a remote telemetry viewer. We have implemented a system that serves live telemetry from NASA's Extreme Ultraviolet Explorer satellite and a client that can display the telemetry at a remote location. An authenticated user may run a standalone graphical or text-based client, or an applet on a web page, to view EUVE telemetry. In the case of the GUI client, a user can build displays to his/her own specifications using a GUI view-building tool. This work was supported by grants NCC2-947 and NCC2-966 from NASA Ames Research Center and grant JPL-960684 from NASA Jet Propulsion Laboratory.

  18. GrayStarServer: Server-side Spectrum Synthesis with a Browser-based Client-side User Interface

    NASA Astrophysics Data System (ADS)

    Short, C. Ian

    2016-10-01

    We present GrayStarServer (GSS), a stellar atmospheric modeling and spectrum synthesis code of pedagogical accuracy that is accessible in any web browser on commonplace computational devices and that runs on a timescale of a few seconds. The addition of spectrum synthesis annotated with line identifications extends the functionality and pedagogical applicability of GSS beyond that of its predecessor, GrayStar3 (GS3). The spectrum synthesis is based on a line list acquired from the NIST atomic spectra database, and the GSS post-processing and user interface client allows the user to inspect the plain text ASCII version of the line list, as well as to apply macroscopic broadening. Unlike GS3, GSS carries out the physical modeling on the server side in Java, and communicates with the JavaScript and HTML client via an asynchronous HTTP request. We also describe other improvements beyond GS3 such as a more physical treatment of background opacity and atmospheric physics, the comparison of key results with those of the Phoenix code, and the use of the HTML <canvas> element for higher quality plotting and rendering of results. We also present LineListServer, a Java code for converting custom ASCII line lists in NIST format to the byte data type file format required by GSS so that users can prepare their own custom line lists. We propose a standard for marking up and packaging model atmosphere and spectrum synthesis output for data transmission and storage that will facilitate a web-based approach to stellar atmospheric modeling and spectrum synthesis. We describe some pedagogical demonstrations and exercises enabled by easily accessible, on-demand, responsive spectrum synthesis. GSS may serve as a research support tool by providing quick spectroscopic reconnaissance.
GSS may be found at www.ap.smu.ca/~ishort/OpenStars/GrayStarServer/grayStarServer.html, and source tarballs for local installations of both GSS and LineListServer may be found at www.ap.smu.ca/~ishort/OpenStars/.

  19. An Efficient Authenticated Key Transfer Scheme in Client-Server Networks

    NASA Astrophysics Data System (ADS)

    Shi, Runhua; Zhang, Shun

    2017-10-01

    In this paper, we present a novel authenticated key transfer scheme for client-server networks, which achieves two security goals: remote user authentication and session key establishment between the remote user and the server. In particular, the proposed scheme provides two fully different modes of authentication, identity-based authentication and anonymous authentication, while the remote user holds only a single private key. Furthermore, our scheme only needs to transmit one round of messages from the remote user to the server, so it is very efficient in communication complexity. In addition, the most time-consuming computation in our scheme is elliptic curve scalar point multiplication, so it is feasible even for mobile devices.
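    The scalar point multiplication the abstract identifies as the dominant cost can be illustrated with a double-and-add sketch over a deliberately tiny textbook curve, y² = x³ + x + 1 (mod 23) with generator G = (0, 1). These parameters are illustrative only, far too small for real security, and the unauthenticated Diffie-Hellman-style agreement shown here is not the paper's scheme (which adds the authentication layer):

```python
# Toy elliptic-curve arithmetic; None represents the point at infinity.
P, A = 23, 1                             # modulus and curve coefficient a

def ec_add(p1, p2):
    """Add two curve points (textbook chord-and-tangent rule)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                      # p1 == -p2
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication k * point by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (0, 1)
alice_secret, bob_secret = 5, 7
shared_a = ec_mul(alice_secret, ec_mul(bob_secret, G))
shared_b = ec_mul(bob_secret, ec_mul(alice_secret, G))
# shared_a == shared_b: both parties derive the same curve point
```

    Both sides compute (alice_secret * bob_secret) * G, which is why the derived points agree; the double-and-add loop costs O(log k) point additions.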

  20. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valassi, A. (CERN); Bartoldus, R.

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer of 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  1. An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-02-01

    In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice: physicians can directly order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc., and read the results of these examinations, except for medical signal waves, schemata and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, as a first step toward handling digitized signal, schema and image data and showing waves, graphics, and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODBMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the result of its implementation in the HIS.

  2. Concept locator: a client-server application for retrieval of UMLS metathesaurus concepts through complex boolean query.

    PubMed

    Nadkarni, P M

    1997-08-01

    Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
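    The translation of Boolean query expressions into SQL that CL performs can be sketched with a tiny recursive-descent translator. This is a simplification under stated assumptions: a single hypothetical search column, equal-precedence left-associative AND/OR, and `*` mapped to SQL's `%` wildcard; CL's actual grammar and generated SQL are richer:

```python
import re

def to_sql(query, column="term"):
    """Translate e.g. 'heart* AND (attack OR failure)' into a parameterized
    SQL WHERE clause. Toy grammar: terms, AND/OR, parentheses, '*' wildcard."""
    tokens = re.findall(r"\(|\)|\bAND\b|\bOR\b|[\w*]+", query)
    pos = 0
    params = []

    def expr():
        nonlocal pos
        sql = term()
        while pos < len(tokens) and tokens[pos] in ("AND", "OR"):
            op = tokens[pos]
            pos += 1
            sql = f"({sql} {op} {term()})"   # left-associative, no precedence
        return sql

    def term():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            sql = expr()
            pos += 1                         # consume the closing ")"
            return sql
        if "*" in tok:                       # wildcard pattern -> LIKE
            params.append(tok.replace("*", "%"))
            return f"{column} LIKE ?"
        params.append(tok)
        return f"{column} = ?"

    return expr(), params

sql, params = to_sql("heart* AND (attack OR failure)")
# sql == "(term LIKE ? AND (term = ? OR term = ?))"
# params == ["heart%", "attack", "failure"]
```

    Emitting placeholders plus a parameter list mirrors the abstract's point that the client generates SQL the server's query optimizer can execute efficiently, without interpolating user text into the statement.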

  3. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between receivers depending on their network access, service provider, and nationality; the QoS also varies over time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams 'to order' and 'just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed so that commonly used components of media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to a resource-optimal presentation that is especially useful for the presentation of complex multimedia documents on a variety of output devices.

  4. TogoDoc server/client system: smart recommendation and efficient management of life science literature.

    PubMed

    Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa

    2010-12-13

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.

  5. TogoDoc Server/Client System: Smart Recommendation and Efficient Management of Life Science Literature

    PubMed Central

    Takagi, Toshihisa

    2010-01-01

    In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the “tsunami” of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom. PMID:21179453

  6. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    NASA Astrophysics Data System (ADS)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
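    The bandwidth saving from client-side caching described above can be sketched as a cache keyed by precinct identifier, so that navigating to an adjacent oblique slice re-fetches only precincts not already held. The integer identifiers and the `fetch` callable are stand-ins for the JPIP request logic, not the paper's implementation:

```python
class PrecinctCache:
    """Client-side cache for compressed precinct data fetched from a server.
    `fetch` stands in for a JPIP request returning compressed bytes."""

    def __init__(self, fetch):
        self.fetch = fetch
        self.store = {}

    def get_slice(self, precinct_ids):
        """Return data for every precinct the slice needs; only precincts
        missing from the cache cost a server round trip."""
        missing = [p for p in precinct_ids if p not in self.store]
        for p in missing:
            self.store[p] = self.fetch(p)
        data = {p: self.store[p] for p in precinct_ids}
        return data, len(missing)

requests = []
cache = PrecinctCache(lambda p: (requests.append(p), b"data")[1])
_, n1 = cache.get_slice({1, 2, 3})   # first slice: all 3 precincts fetched
_, n2 = cache.get_slice({2, 3, 4})   # adjacent slice: only precinct 4 fetched
```

    Because consecutive oblique slices intersect largely overlapping sets of precincts, the second request here transfers a third of the data the first one did.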

  7. Solid Waste Information and Tracking System Client Server Conversion Project Management Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GLASSCOCK, J.A.

    2000-02-10

    The Project Management Plan governing the conversion of SWITS to a client-server architecture. The PMP describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion

  8. PONDEROSA-C/S: client-server based software package for automated protein 3D structure determination.

    PubMed

    Lee, Woonghee; Stark, Jaime L; Markley, John L

    2014-11-01

    Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727-1728. doi: 10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nuclear Overhauser data sets ((13)C- and/or (15)N-NOESY). The output is a set of assigned NOEs and 3D structural models for the protein. Ponderosa Analyzer supports the visualization, validation, and refinement of the results from Ponderosa Server. These tools enable semi-automated NMR-based structure determination of proteins in a rapid and robust fashion. We present examples showing the use of PONDEROSA-C/S in solving structures of four proteins: two that enable comparison with the original PONDEROSA package, and two from the Critical Assessment of automated Structure Determination by NMR (Rosato et al. in Nat Methods 6:625-626. doi: 10.1038/nmeth0909-625, 2009) competition. The software package can be downloaded freely in binary format from http://pine.nmrfam.wisc.edu/download_packages.html. Registered users of the National Magnetic Resonance Facility at Madison can submit jobs to the PONDEROSA-C/S server at http://ponderosa.nmrfam.wisc.edu, where instructions and tutorials can be found. Structures are normally returned within 1-2 days.

  9. A Rich Client-Server Based Framework for Convenient Security and Management of Mobile Applications

    NASA Astrophysics Data System (ADS)

    Badan, Stephen; Probst, Julien; Jaton, Markus; Vionnet, Damien; Wagen, Jean-Frédéric; Litzistorf, Gérald

    Contact lists, Emails, SMS or custom applications on a professional smartphone could hold very confidential or sensitive information. What could happen in case of theft or accidental loss of such devices? Such events could be detected by the separation between the smartphone and a Bluetooth companion device. This event should typically block the applications and delete personal and sensitive data. Here, a solution is proposed based on a secured framework application running on the mobile phone as a rich client connected to a security server. The framework offers strong and customizable authentication and secured connectivity. A security server manages all security issues. User applications are then loaded via the framework. User data can be secured, synchronized, pushed or pulled via the framework. This contribution proposes a convenient although secured environment based on a client-server architecture using external authentications. Several features of the proposed system are exposed and a practical demonstrator is described.

  10. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

    The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability, and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
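The architectural contrast described above (client-server request/reply versus direct fan-out of updates) can be sketched with a toy in-memory bus. NDDS itself is commercial middleware with a very different API, so the topic names and callback interface here are purely illustrative assumptions:

```python
# Toy in-memory publish-and-subscribe bus illustrating the fan-out pattern:
# publishers push updates directly to every subscriber of a topic, with no
# central server mediating a request/reply exchange.

from collections import defaultdict

class PubSubBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber receives the update as soon as it is published.
        for callback in self._subscribers[topic]:
            callback(message)

bus = PubSubBus()
readings = []
bus.subscribe("incinerator/temperature", readings.append)  # node 1
bus.subscribe("incinerator/temperature", lambda m: None)   # node 2
bus.publish("incinerator/temperature", 451.0)
print(readings)  # [451.0]
```

In a client-server arrangement, by contrast, node 1 would have to poll a central server for the latest value; here the update reaches all subscribed nodes in the same call.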

  11. Reference-frame-independent quantum-key-distribution server with a telecom tether for an on-chip client.

    PubMed

    Zhang, P; Aungskunsiri, K; Martín-López, E; Wabnig, J; Lobino, M; Nock, R W; Munns, J; Bonneau, D; Jiang, P; Li, H W; Laing, A; Rarity, J G; Niskanen, A O; Thompson, M G; O'Brien, J L

    2014-04-04

    We demonstrate a client-server quantum key distribution (QKD) scheme. Large resources such as laser and detectors are situated at the server side, which is accessible via telecom fiber to a client requiring only an on-chip polarization rotator, which may be integrated into a handheld device. The detrimental effects of unstable fiber birefringence are overcome by employing the reference-frame-independent QKD protocol for polarization qubits in polarization maintaining fiber, where standard QKD protocols fail, as we show for comparison. This opens the way for quantum enhanced secure communications between companies and members of the general public equipped with handheld mobile devices, via telecom-fiber tethering.

  12. An Application Server for Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Luetkemeyer, Kelly G.

    1998-11-01

    Tech-X Corporation has developed SciChat, an application server for scientific collaboration. Connections are made to the server through a Java client, which can be either an application or an applet served in a web page. Once connected, the client may choose to start or join a session. A session includes not only other clients but also an application. Any client can send a command to the application; the command is executed on the server and echoed to all clients. The results of the command, whether numerical or graphical, are then distributed to all of the clients; thus, multiple clients can interact collaboratively with a single application. The client is developed in Java, the server in C++, and the middleware is the Common Object Request Broker Architecture (CORBA). In this system, the graphical user interface processing runs on the client machine, so one avoids the bandwidth problems that occur when running X over the Internet. Because the server, client, and middleware are object oriented, new types of servers and clients specialized to particular scientific applications are more easily developed.

  13. D-Light on promoters: a client-server system for the analysis and visualization of cis-regulatory elements

    PubMed Central

    2013-01-01

    Background The binding of transcription factors to DNA plays an essential role in the regulation of gene expression. Numerous experiments have elucidated binding sequences, which subsequently have been used to derive statistical models for predicting potential transcription factor binding sites (TFBS). The rapidly increasing amount of genome sequence data requires sophisticated computational approaches to manage and query experimental and predicted TFBS data in the context of other epigenetic factors and across different organisms. Results We have developed D-Light, a novel client-server software package to store and query large amounts of TFBS data for any number of genomes. Users can add small-scale data to the server database and query them in a large-scale, genome-wide promoter context. The client is implemented in Java and provides simple graphical user interfaces and data visualization. We also perform a statistical analysis showing what a user can expect for certain parameter settings, and we illustrate the usage of D-Light with the help of a microarray data set. Conclusions D-Light is an easy-to-use software tool to integrate, store, and query annotation data for promoters. A public D-Light server, the client and server software for local installation, and the source code under the GNU GPL license are available at http://biwww.che.sbg.ac.at/dlight. PMID:23617301

  14. Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems

    NASA Astrophysics Data System (ADS)

    Demir, I.; Sermet, M. Y.; Sit, M. A.

    2016-12-01

    Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The collection of environmental data has increased demand for applications capable of managing and processing large-scale, high-resolution data sets. Given the amount and resolution of the data provided, one of the challenging tasks in organizing and customizing hydrological data sets is on-demand watershed delineation. Watershed delineation is the process of creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analyzing portions of a study area. Although many GIS tools and software packages for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques to create a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL and on the server side using Python and C++. We also developed a client-side GPGPU (General-Purpose Graphics Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. Web-based real-time watershed segmentation can help decision-makers and interested stakeholders while eliminating the need to install complex software packages and deal with large-scale data sets. Utilizing client-side hardware resources also reduces the need for servers, owing to the approach's crowdsourced nature. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
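The delineation step described above can be sketched as a graph walk: each cell of a flow-direction grid records the neighbour it drains into, and the watershed of an outlet is every cell that (transitively) reaches it. This serial Python version only illustrates the idea; the paper's client-side implementation runs a parallel variant on the GPU, and the tiny grid here is a made-up example:

```python
# Watershed delineation on a flow-direction grid: collect all cells that
# drain (directly or indirectly) into a chosen outlet, by breadth-first
# search over the reversed drainage graph.

from collections import defaultdict, deque

def delineate(flow_to, outlet):
    """flow_to maps cell -> downstream cell (None at an outlet or edge)."""
    upstream = defaultdict(list)
    for cell, down in flow_to.items():
        if down is not None:
            upstream[down].append(cell)
    watershed, queue = {outlet}, deque([outlet])
    while queue:
        for cell in upstream[queue.popleft()]:
            if cell not in watershed:
                watershed.add(cell)
                queue.append(cell)
    return watershed

# A tiny example: cells 1-3 drain toward the outlet at cell 0,
# while cell 4 drains off the edge of the grid instead.
flow = {0: None, 1: 0, 2: 1, 3: 1, 4: None}
print(sorted(delineate(flow, 0)))  # [0, 1, 2, 3]
```

The same walk is what a GPU version parallelizes: each pass marks, in parallel, every cell whose downstream neighbour is already inside the watershed, repeating until no cell changes.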

  15. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  16. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  17. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  18. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  19. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  20. Analysis of Java Client/Server and Web Programming Tools for Development of Educational Systems.

    ERIC Educational Resources Information Center

    Muldner, Tomasz

    This paper provides an analysis of old and new programming tools for development of client/server programs, particularly World Wide Web-based programs. The focus is on development of educational systems that use interactive shared workspaces to provide portable and expandable solutions. The paper begins with a short description of relevant terms.…

  1. Development of a Personal Digital Assistant (PDA) based client/server NICU patient data and charting system.

    PubMed

    Carroll, A E; Saluja, S; Tarczy-Hornoch, P

    2001-01-01

    Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.

  2. Secure entanglement distillation for double-server blind quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-07-12

    Blind quantum computation is a new secure quantum computing protocol in which a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, or program. If the client interacts with only a single server, the client must have some minimum quantum power, such as the ability to emit randomly rotated single-qubit states or to measure states. If the client interacts with two servers that share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, the two servers must share clean Bell pairs, and therefore entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.

  3. Triple-server blind quantum computation using entanglement swapping

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by means of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since she requires no quantum computational power, no quantum memory, and no ability to prepare quantum states, and only needs access to quantum channels.

  4. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  5. Design of information systems for population data collection based on client-server at Bagolo village

    NASA Astrophysics Data System (ADS)

    Nugraha, Ucu

    2017-06-01

    A village is the administrative level below the sub-district in the regional governmental system, and its population data services are still mostly provided manually. Such manual systems frequently produce invalid data, and the available data often does not correspond to the facts, owing to frequent errors in the population data collection process (including data on the elderly) and in data transfer. Similarly, document services such as death certificates, birth certificates, and certificates of change of domicile have their own problems. Data archives are frequently unsystematic because they are not organized properly or stored in a database. An information service system for the population census at this level can assist government agencies, especially in managing the census at the village level. The designed system makes the census process easier: it begins with the submission of a population letter by each citizen who comes to the village administrative office. The client-server-based population census information system at Bagolo Village was designed with an effective, uncomplicated workflow and interface design. With a client-server basis, data are stored centrally on the server, reducing data duplication and data loss. Therefore, when local governments require information related to the population data of a village, they can obtain it easily without needing to collect the data directly in the respective village.

  6. Caching Servers for ATLAS

    NASA Astrophysics Data System (ADS)

    Gardner, R. W.; Hanushevsky, A.; Vukotic, I.; Yang, W.

    2017-10-01

    As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.
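A minimal sketch of the demand-driven, whole-file mode described above: on a miss the file is fetched from the origin and kept, and the least-recently-used file is evicted when the cache is full. Only the core policy is shown, under assumed names; XRootD's actual caching proxy additionally supports block-level caching and centrally managed (e.g. Rucio-driven) operation:

```python
# Whole-file LRU cache: OrderedDict keeps files in recency order, so the
# first entry is always the least-recently-used candidate for eviction.

from collections import OrderedDict

class FileCache:
    def __init__(self, fetch, capacity):
        self.fetch = fetch          # callable: path -> bytes (origin read)
        self.capacity = capacity
        self.store = OrderedDict()  # path -> bytes, oldest first
        self.misses = 0

    def read(self, path):
        if path in self.store:
            self.store.move_to_end(path)        # hit: mark recently used
        else:
            self.misses += 1
            self.store[path] = self.fetch(path)  # miss: pull from origin
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict the LRU file
        return self.store[path]

cache = FileCache(fetch=lambda p: b"data:" + p.encode(), capacity=2)
cache.read("/atlas/a"); cache.read("/atlas/b")
cache.read("/atlas/a")  # hit: refreshes /atlas/a
cache.read("/atlas/c")  # miss: evicts /atlas/b, the LRU entry
assert "/atlas/b" not in cache.store
```

Whether such a cache is effective depends on exactly the practical requirements the paper discusses: a working set that fits the capacity and enough re-reads for hits to outnumber the extra origin fetches.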

  7. A Smartphone Client-Server Teleradiology System for Primary Diagnosis of Acute Stroke

    PubMed Central

    2011-01-01

    Background Recent advances in the treatment of acute ischemic stroke have made rapid acquisition, visualization, and interpretation of images a key factor for positive patient outcomes. We have developed a new teleradiology system based on a client-server architecture that enables rapid access to interactive advanced 2-D and 3-D visualization on a current generation smartphone device (Apple iPhone or iPod Touch, or an Android phone) without requiring patient image data to be stored on the device. Instead, a server loads and renders the patient images, then transmits a rendered frame to the remote device. Objective Our objective was to determine if a new smartphone client-server teleradiology system is capable of providing accuracies and interpretation times sufficient for diagnosis of acute stroke. Methods This was a retrospective study. We obtained 120 recent consecutive noncontrast computed tomography (NCCT) brain scans and 70 computed tomography angiogram (CTA) head scans from the Calgary Stroke Program database. Scans were read by two neuroradiologists, one on a medical diagnostic workstation and an iPod or iPhone (hereafter referred to as an iOS device) and the other only on an iOS device. NCCT brain scans were evaluated for early signs of infarction, which includes early parenchymal ischemic changes and dense vessel sign, and to exclude acute intraparenchymal hemorrhage and stroke mimics. CTA brain scans were evaluated for any intracranial vessel occlusion. The interpretations made on an iOS device were compared with those made at a workstation. The total interpretation times were recorded for both platforms. Interrater agreement was assessed. True positives, true negatives, false positives, and false negatives were obtained, and sensitivity, specificity, and accuracy of detecting the abnormalities on the iOS device were computed. 
Results The sensitivity, specificity, and accuracy of detecting intraparenchymal hemorrhage were 100% using the iOS device with a

  8. A smartphone client-server teleradiology system for primary diagnosis of acute stroke.

    PubMed

    Mitchell, J Ross; Sharma, Pranshu; Modi, Jayesh; Simpson, Mark; Thomas, Monroe; Hill, Michael D; Goyal, Mayank

    2011-05-06

    Recent advances in the treatment of acute ischemic stroke have made rapid acquisition, visualization, and interpretation of images a key factor for positive patient outcomes. We have developed a new teleradiology system based on a client-server architecture that enables rapid access to interactive advanced 2-D and 3-D visualization on a current generation smartphone device (Apple iPhone or iPod Touch, or an Android phone) without requiring patient image data to be stored on the device. Instead, a server loads and renders the patient images, then transmits a rendered frame to the remote device. Our objective was to determine if a new smartphone client-server teleradiology system is capable of providing accuracies and interpretation times sufficient for diagnosis of acute stroke. This was a retrospective study. We obtained 120 recent consecutive noncontrast computed tomography (NCCT) brain scans and 70 computed tomography angiogram (CTA) head scans from the Calgary Stroke Program database. Scans were read by two neuroradiologists, one on a medical diagnostic workstation and an iPod or iPhone (hereafter referred to as an iOS device) and the other only on an iOS device. NCCT brain scans were evaluated for early signs of infarction, which includes early parenchymal ischemic changes and dense vessel sign, and to exclude acute intraparenchymal hemorrhage and stroke mimics. CTA brain scans were evaluated for any intracranial vessel occlusion. The interpretations made on an iOS device were compared with those made at a workstation. The total interpretation times were recorded for both platforms. Interrater agreement was assessed. True positives, true negatives, false positives, and false negatives were obtained, and sensitivity, specificity, and accuracy of detecting the abnormalities on the iOS device were computed. 
The sensitivity, specificity, and accuracy of detecting intraparenchymal hemorrhage were 100% using the iOS device with a perfect interrater agreement (kappa=1
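The central design choice of the system above, that the client holds only rendered frames and never the patient data, can be sketched as a request/reply pair. The "rendering" below is a stand-in (a crop of a 2-D array) and all names are hypothetical; the real server performs advanced 2-D/3-D rendering and transmits a compressed frame:

```python
# Remote-rendering sketch: the server renders a view of the data set on
# request, and the client stores only the returned frame.

def render_frame(volume, row, col, size):
    """Server side: render (here: crop) a view of the data set."""
    return [r[col:col + size] for r in volume[row:row + size]]

class Client:
    """Client side: keeps only the last rendered frame, never the volume."""
    def __init__(self, server):
        self.server = server
        self.frame = None

    def pan_to(self, row, col, size=2):
        self.frame = self.server(row, col, size)
        return self.frame

volume = [[i * 4 + j for j in range(4)] for i in range(4)]
client = Client(lambda r, c, s: render_frame(volume, r, c, s))
print(client.pan_to(1, 1))  # [[5, 6], [9, 10]]
```

Because every interaction round-trips through the server, the device needs no storage for patient images, which is what makes loss or theft of the handheld less of a data-security concern.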

  9. The EarthServer project: Exploiting Identity Federations, Science Gateways and Social and Mobile Clients for Big Earth Data Analysis

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Messina, Antonio; Pappalardo, Marco; Passaro, Gianluca

    2013-04-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. Six Lighthouse Applications are being established in EarthServer, each of which poses distinct challenges on Earth Data Analytics: Cryospheric Science, Airborne Science, Atmospheric Science, Geology, Oceanography, and Planetary Science. Altogether, they cover all Earth Science domains; the Planetary Science use case has been added to challenge concepts and standards in non-standard environments. In addition, EarthLook (maintained by Jacobs University) showcases use of OGC standards in 1D through 5D use cases. In this contribution we will report on the first applications integrated in the EarthServer Science Gateway and on the clients for mobile appliances developed to access them. We will also show how federated and social identity services can allow Big Earth Data Providers to expose their data in a distributed environment keeping a strict and fine-grained control on user authentication and authorisation. 
The degree of fulfilment of the EarthServer implementation with the recommendations made in the recent TERENA Study on

  10. Verifying the secure setup of Unix client/servers and detection of network intrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feingold, R.; Bruestle, H.R.; Bartoletti, T.

    1995-07-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global "Infosphere" presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like stores, transmits, and processes information. This information is now recognized as an asset to governments, corporations, and individuals alike. This information must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check on their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks to their systems and networks.

  11. Verifying the secure setup of UNIX client/servers and detection of network intrusion

    NASA Astrophysics Data System (ADS)

    Feingold, Richard; Bruestle, Harry R.; Bartoletti, Tony; Saroyan, R. A.; Fisher, John M.

    1996-03-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global `Infosphere' presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like stores, transmits, and processes information. This information is now recognized as an asset to governments, corporations, and individuals alike. This information must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check on their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on the Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks to their systems and networks.
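A static configuration check of the kind SPI performs can be illustrated with a single rule: flag world-writable files among a set of security-sensitive paths. The path list and the single rule are assumptions made for this sketch; SPI's real test suite is far broader:

```python
# One SPI-style static test: report files whose permission bits allow
# writing by "other" users, a classic Unix misconfiguration.

import os
import stat

SENSITIVE = ["/etc/passwd", "/etc/hosts"]  # illustrative path list

def world_writable(paths):
    findings = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue  # skip paths absent on this system
        if mode & stat.S_IWOTH:  # the "other"-write permission bit
            findings.append(path)
    return findings

print(world_writable(SENSITIVE))  # [] on a sanely configured host
```

A real inspector would run hundreds of such rules (setuid binaries, weak password hashes, trust files, and so on) and roll the findings into a report, but each rule reduces to the same pattern: read state, compare against policy, record a finding.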

  12. Asynchronous data change notification between database server and accelerator controls system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-10-10

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between a database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV, or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
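The trigger-plus-reflection-server idea can be sketched with SQLite, which also provides trigger functionality: a trigger appends every change to a log table, and a small server loop drains that log and pushes each change to its clients. The schema and the plain Python callback are stand-ins for the EPICS/CDEV/ADO machinery mentioned above:

```python
# ADCN sketch: a database trigger records changes in a log table; a
# "reflection server" polls the log and forwards each change to clients.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
    CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT,
                             name TEXT, value REAL);
    CREATE TRIGGER notify_update AFTER UPDATE ON settings
    BEGIN
        INSERT INTO change_log (name, value) VALUES (NEW.name, NEW.value);
    END;
""")

def drain_log(conn, push):
    """Server loop body: forward logged changes to clients, then clear."""
    rows = conn.execute(
        "SELECT name, value FROM change_log ORDER BY id").fetchall()
    conn.execute("DELETE FROM change_log")
    for name, value in rows:
        push(name, value)

db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
db.execute("UPDATE settings SET value = 2.5 WHERE name = 'magnet_current'")

received = []
drain_log(db, lambda n, v: received.append((n, v)))
print(received)  # [('magnet_current', 2.5)]
```

The point of the pattern is that no client ever polls the `settings` table itself: all change detection happens once, inside the DBMS, and the reflection server fans the changes out asynchronously.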

  13. Analysis of practical backoff protocols for contention resolution with multiple servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; MacKenzie, P.D.

    Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates, such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. It is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers at such high request rates. Two special cases of our result are of interest. Hastad, Leighton, and Rogoff have shown that for a single-server system with a sub-unit client-server request rate, any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton, and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
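The stability claim can be illustrated with a toy slotted-time simulation of polynomial backoff over multiple Ethernet-like servers. This is a simplified sketch, not the paper's model in full rigor: the window-size rule, rates, and collision handling are assumptions chosen for illustration.

```python
import random

def simulate(num_clients=8, num_servers=2, rate=0.05, alpha=2.0,
             slots=5000, seed=1):
    """Toy slotted-time simulation: Bernoulli arrivals per (client, server)
    pair; a slot succeeds on a server iff exactly one eligible client
    transmits; after a collision each colliding client backs off within a
    window growing polynomially (~failures**alpha)."""
    rng = random.Random(seed)
    backlog = [[0] * num_servers for _ in range(num_clients)]
    failures = [[0] * num_servers for _ in range(num_clients)]
    next_try = [[0] * num_servers for _ in range(num_clients)]
    delivered = generated = 0
    for t in range(slots):
        for c in range(num_clients):          # Bernoulli request arrivals
            for s in range(num_servers):
                if rng.random() < rate:
                    backlog[c][s] += 1
                    generated += 1
        for s in range(num_servers):
            senders = [c for c in range(num_clients)
                       if backlog[c][s] > 0 and next_try[c][s] <= t]
            if len(senders) == 1:             # uncontended: request served
                c = senders[0]
                backlog[c][s] -= 1
                failures[c][s] = 0
                delivered += 1
            else:                             # collision: all back off
                for c in senders:
                    failures[c][s] += 1
                    window = max(1, int(failures[c][s] ** alpha))
                    next_try[c][s] = t + rng.randint(1, window)
    return delivered, generated

delivered, generated = simulate()
```

With these parameters the client-server request rate is 0.4 (sub-unit), and nearly all generated requests are eventually delivered, consistent with stability.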

  14. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  15. Single-server blind quantum computation with quantum circuit model

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has few quantum technologies, to delegate her quantum computation to a server, who has strong quantum computational power but learns nothing about the client's quantum inputs, outputs, and algorithms. In this article, we propose a single-server BQC protocol in the quantum circuit model by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, so that the server learns nothing about the quantum algorithms. The client only needs to perform the operations X and Z, while the server honestly performs the rotation operators.
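The idea of replacing gates by combinations of rotation operators rests on the standard fact that Pauli gates equal rotations up to a global phase. A minimal numeric check of that fact (not the paper's actual construction) is:

```python
import cmath
import math

def rx(theta):
    """Rotation about the x-axis: Rx(theta) = exp(-i * theta * X / 2)."""
    c = math.cos(theta / 2)
    s = math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def rz(theta):
    """Rotation about the z-axis: Rz(theta) = exp(-i * theta * Z / 2)."""
    return [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Up to a global phase of -i, Rx(pi) is Pauli-X and Rz(pi) is Pauli-Z;
# composing two quarter-turns gives the same result as one half-turn.
gate_x = rx(math.pi)                            # -i * X
gate_z = rz(math.pi)                            # -i * Z
composed = matmul(rx(math.pi / 2), rx(math.pi / 2))  # also -i * X
```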

  16. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual-reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is presented with an image in front of their eyes by the head-mounted display. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty of this work is that augmented reality is realized with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are unsuitable for many environmental conditions, and in particular cannot work when the marker is partially occluded. Further, 3D stereoscopic perception of a virtual animation model is achieved. A high-speed, stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.

  17. FRIEND Engine Framework: a real time neurofeedback client-server system for neuroimaging studies

    PubMed Central

    Basilio, Rodrigo; Garrido, Griselda J.; Sato, João R.; Hoefle, Sebastian; Melo, Bruno R. P.; Pamplona, Fabricio A.; Zahn, Roland; Moll, Jorge

    2015-01-01

    In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of “Functional Real-time Interactive Endogenous Neuromodulation and Decoding” (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real-time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices, and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes. PMID:25688193

  18. Experimental Blind Quantum Computing for a Classical Client.

    PubMed

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-04

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  19. Experimental Blind Quantum Computing for a Classical Client

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C.; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-01

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  20. The ISMARA client

    PubMed Central

    Ioannidis, Vassilios; van Nimwegen, Erik; Stockinger, Heinz

    2016-01-01

    ISMARA (ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer for performing ISMARA analysis in a completely automated manner, including uploading of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client. PMID:28232860

  1. Java RMI Software Technology for the Payload Planning System of the International Space Station

    NASA Technical Reports Server (NTRS)

    Bryant, Barrett R.

    1999-01-01

    The Payload Planning System supports experiment planning on the International Space Station. The planning process has a number of different aspects that need to be stored in a database, which is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture comprising a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. The system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers, and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both system performance and software development time could be improved. We found that, for the hardware and software platforms on which PPS is required to run, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.
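The RMI + SQLJ arrangement is Java-specific, but the 3-tier call pattern (front end invokes a remote method on a middle tier, which queries the database tier) can be sketched with the Python standard library, using XML-RPC as a stand-in for RMI and SQLite as a stand-in for Oracle. The schema and method name below are invented for illustration and are not from PPS.

```python
import sqlite3
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Third tier: database (SQLite standing in for Oracle; schema is invented).
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE activities (id INTEGER, name TEXT)")
db.executemany("INSERT INTO activities VALUES (?, ?)",
               [(1, "crystal growth"), (2, "plant biology")])

def get_activities():
    """Middle-tier remote method: query the database tier, return rows."""
    return db.execute("SELECT id, name FROM activities ORDER BY id").fetchall()

# Middle tier: remote-invocation server (XML-RPC standing in for RMI).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False,
                            allow_none=True)
server.register_function(get_activities)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Front tier: the remote invocation reads like a local call, as with RMI.
client = ServerProxy(f"http://127.0.0.1:{port}")
rows = client.get_activities()
server.shutdown()
```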

  2. Earthquake Early Warning Management based on Client-Server using Primary Wave data from Vibrating Sensor

    NASA Astrophysics Data System (ADS)

    Laumal, F. E.; Nope, K. B. N.; Peli, Y. S.

    2018-01-01

    Early warning is a warning mechanism that operates before an actual incident occurs, and it can be applied to natural events such as tsunamis or earthquakes. Earthquakes are classified as tectonic or volcanic, depending on their source and nature. The tremor propagates energy in all directions as Primary and Secondary waves. The Primary wave, the initial earthquake vibration, propagates longitudinally, while the Secondary wave follows it as a sinusoidal-like, destructive wave that constitutes the earthquake proper. To process the primary vibration data captured by the earthquake sensors, the network requires a client computer that receives primary data from the sensors, authenticates it, and forwards it to a server computer that implements the early warning system. Using a wave propagation concept, an early warning method has been defined in which several sensors located on the same line send their initial vibrations as primary data on the same scale, and the server triggers an alarm sound as the early warning.
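The server-side alarm decision described above might look like the following sketch. The amplitude threshold, the number of confirming sensors, and the record fields are assumptions for illustration, not values from the paper.

```python
# Hypothetical alarm rule: raise the early warning when several sensors on
# the same line report P-wave amplitudes above a threshold on the same scale.
P_WAVE_THRESHOLD = 3.0        # hypothetical amplitude units
MIN_CONFIRMING_SENSORS = 2

def should_alarm(readings, line):
    """readings: list of dicts {'sensor', 'line', 'amplitude'} forwarded
    by the client computer after authentication."""
    confirming = [r for r in readings
                  if r["line"] == line and r["amplitude"] >= P_WAVE_THRESHOLD]
    return len(confirming) >= MIN_CONFIRMING_SENSORS

readings = [
    {"sensor": "S1", "line": "A", "amplitude": 4.2},
    {"sensor": "S2", "line": "A", "amplitude": 3.6},
    {"sensor": "S3", "line": "B", "amplitude": 0.4},
]
```

Requiring agreement among co-linear sensors is what distinguishes a propagating P-wave from a single noisy sensor.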

  3. Solid waste information and tracking system client-server conversion project management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, D.L.

    1998-04-15

    This Project Management Plan is the lead planning document governing the proposed conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. This plan presents the content specified by American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards for software development, with additional information categories deemed to be necessary to describe the conversion fully. This plan is a living document that will be reviewed on a periodic basis and revised when necessary to reflect changes in baseline design concepts and schedules. This PMP describes the background, planning and management of the SWITS conversion. It does not constitute a statement of product requirements. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.

  4. Autoplot and the HAPI Server

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2016-12-01

    Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and from a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients or publish a specification document in hopes that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, removing these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use and how data servers might implement the HAPI specification to provide access to their data. This will also include instructions on how Autoplot is installed on desktop computers and used to view data from the RBSP, Juno, and other missions.

  5. CommServer: A Communications Manager For Remote Data Sites

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D. L.

    2012-12-01

    CommServer is a software system that manages connections to remote data-gathering stations, providing a simple network interface to client applications. A client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and it has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash shell, Python, Perl, and AWK, under a set of conventions we refer to as ThinObject.
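A toy version of the CommServer pattern (site name in, bidirectional channel out) can be written with stdlib sockets. The site name, the one-shot line protocol, and the reply text below are invented for illustration; the real system multiplexes radio links rather than local lambdas.

```python
import socket
import threading

class CommServer:
    """Toy connection manager: maps a requested site name to a backend
    'station' connection factory and relays its reply to the client."""

    def __init__(self, sites):
        self.sites = sites                     # name -> connect() factory
        self.listener = socket.create_server(("127.0.0.1", 0))
        self.port = self.listener.getsockname()[1]
        threading.Thread(target=self._serve_one, daemon=True).start()

    def _serve_one(self):
        conn, _addr = self.listener.accept()
        with conn:
            name = conn.recv(64).decode().strip()
            backend = self.sites.get(name)
            reply = backend() if backend else "ERROR: unknown site"
            conn.sendall(reply.encode())

# Hypothetical site registry; a factory would normally open a radio link.
sites = {"ridge_station": lambda: "OK: connected to ridge_station"}
server = CommServer(sites)

# Client side: request a connection to a site by name.
with socket.create_connection(("127.0.0.1", server.port)) as c:
    c.sendall(b"ridge_station\n")
    response = c.recv(256).decode()
```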

  6. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    PubMed

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  7. Deterministic entanglement distillation for secure double-server blind quantum computation.

    PubMed

    Sheng, Yu-Bo; Zhou, Lan

    2015-01-15

    Blind quantum computation (BQC) provides an efficient method for a client who does not have sufficiently sophisticated technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimal quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to a pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state, and the success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for future BQC protocols.

  8. Deterministic entanglement distillation for secure double-server blind quantum computation

    PubMed Central

    Sheng, Yu-Bo; Zhou, Lan

    2015-01-01

    Blind quantum computation (BQC) provides an efficient method for a client who does not have sufficiently sophisticated technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimal quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to a pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state, and the success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for future BQC protocols. PMID:25588565

  9. OPeNDAP servers like Hyrax and TDS can easily support common single-sign-on authentication protocols using the Apache httpd and related software; adding support for these protocols to clients can be more challenging

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Potter, N.; Evans, B. J. K.

    2016-12-01

    OPeNDAP, in conjunction with the Australian National University, documented the installation process needed to add authentication to OPeNDAP-enabled data servers (Hyrax, TDS, etc.) and examined 13 OPeNDAP clients to determine how best to add authentication using LDAP, Shibboleth, and OAuth2 (we used NASA's URS). We settled on a server configuration (architecture) that uses the Apache web server and a collection of open-source modules to perform the authentication and authorization actions. This is not the only way to accomplish those goals, but using Apache represents a good balance between functionality and leveraging existing work that has been well vetted, and it includes support for a wide variety of web services, including those that depend on a servlet engine such as Tomcat (which both Hyrax and TDS do). Our work shows how LDAP, OAuth2, and Shibboleth can all be accommodated using this readily available software stack. Also important is that the Apache software is very widely used and fairly robust, which is extremely important for security software components. In order to make use of a server requiring authentication, clients must support the authentication process. Because HTTP has included authentication for well over a decade, and because HTTP/HTTPS can be used by simply linking programs with a library, both the LDAP and OAuth2/URS authentication schemes have almost universal support within the OPeNDAP client base. The clients, i.e., the HTTP client libraries they employ, understand how to submit credentials to the correct server when confronted with an HTTP/S Unauthorized (401) response. Interestingly, OAuth2 can achieve its SSO objectives while relying entirely on normative HTTP transport. All 13 of the clients examined worked. The situation with Shibboleth is different. While Shibboleth does use HTTP, it also requires the client to either scrape a web page or support the SAML 2.0 ECP profile, which, for programmatic clients, means using SOAP messages.
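The 401 challenge/response flow that lets HTTP client libraries handle LDAP and OAuth2/URS authentication "for free" can be demonstrated end to end with the standard library. The realm, user name, and password below are illustrative, and the tiny server stands in for an Apache-fronted Hyrax/TDS endpoint.

```python
import base64
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProtectedHandler(BaseHTTPRequestHandler):
    """Answers 401 Unauthorized until the client resubmits credentials."""

    def do_GET(self):
        expected = "Basic " + base64.b64encode(b"user:secret").decode()
        if self.headers.get("Authorization") == expected:
            body = b"dataset granted"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="opendap"')
            self.send_header("Content-Length", "0")
            self.end_headers()

    def log_message(self, *args):       # keep the sketch quiet
        pass

httpd = HTTPServer(("127.0.0.1", 0), ProtectedHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The client only configures a password manager; urllib answers the 401
# challenge automatically, just as the surveyed client libraries do.
mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, f"http://127.0.0.1:{port}/", "user", "secret")
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
content = opener.open(f"http://127.0.0.1:{port}/data").read().decode()
httpd.shutdown()
```

Shibboleth breaks this pattern precisely because its challenge is not a plain 401 but a web page or SAML ECP exchange.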

  10. A satellite-driven, client-server hydro-economic model prototype for agricultural water management

    NASA Astrophysics Data System (ADS)

    Maneta, Marco; Kimball, John; He, Mingzhu; Payton Gardner, W.

    2017-04-01

    Anticipating agricultural water demand, land reallocation, and the impact on farm revenues associated with different policy or climate constraints is a challenge for water managers and policy makers. While current integrated decision support systems based on programming methods provide estimates of farmer reaction to external constraints, they have important shortcomings, such as the high cost of the data collection surveys necessary to calibrate the model, biases associated with inadequate farm sampling, infrequent model updates and recalibration, model overfitting, and their deterministic nature, among other problems. In addition, the administration of water supplies and the generation of policies that promote sustainable agricultural regions depend on more than one bureau or office. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. To overcome these limitations, we present a client-server, integrated hydro-economic modeling and observation framework driven by satellite remote sensing and other ancillary information from regional monitoring networks. The core of the framework is a stochastic data assimilation system that sequentially ingests remote sensing observations and corrects the parameters of the hydro-economic model at unprecedented spatial and temporal resolutions. An economic model of agricultural production, based on mathematical programming, requires information on crop type and extent, crop yield, crop transpiration, and irrigation technology. A regional hydro-climatologic model provides biophysical constraints to the economic model of agricultural production with a level of detail that permits the study of the spatial impact of large- and small-scale water use decisions. Crop type and extent are obtained from the Cropland Data Layer (CDL), a multi-sensor operational classification of crops maintained by the United States Department of Agriculture.

  11. Understanding Customer Dissatisfaction with Underutilized Distributed File Servers

    NASA Technical Reports Server (NTRS)

    Riedel, Erik; Gibson, Garth

    1996-01-01

    An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew File System (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
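The headline figure above (50% of response-time variance explained by CPU use) is an R² statistic, i.e. a squared Pearson correlation. The sketch below recomputes an R² from made-up measurements to show the calculation only; the numbers are not the paper's data.

```python
def r_squared(xs, ys):
    """Fraction of variance in ys explained by a linear fit on xs
    (squared Pearson correlation coefficient)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Hypothetical samples: bursty CPU utilization drives the response-time tail.
cpu_util = [0.02, 0.05, 0.40, 0.80, 0.95]      # fraction of CPU busy
response = [10.0, 12.0, 45.0, 160.0, 300.0]    # client response time, ms

explained = r_squared(cpu_util, response)
```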

  12. Multi-server blind quantum computation over collective-noise channels

    NASA Astrophysics Data System (ADS)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation tasks to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also keep clients as classical as possible and tolerate faults from nonideal quantum channels. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely classical or almost completely classical client, meet the correctness and blindness requirements of a BQC protocol, and are practical BQC protocols.

  13. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen for data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that depends on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application continually issues data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets (in particular, massively sized datasets) has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent, dynamically generated KML code that directs the client application to make follow-on requests for higher level-of-detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism.
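A cascading-KML response can be sketched as a server-side generator that embeds a Region-gated NetworkLink pointing at the next level of detail: the client fetches the linked KML only once the region occupies enough screen pixels. The tile naming scheme, URL, and minLodPixels value below are illustrative, not the authors' scheme.

```python
import xml.etree.ElementTree as ET

def lod_link(tile, north, south, east, west, min_pixels=128):
    """Build a KML fragment whose NetworkLink is fetched by the client only
    when the Region is displayed at >= min_pixels (the LOD trigger)."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    link = ET.SubElement(kml, "NetworkLink")
    region = ET.SubElement(link, "Region")
    box = ET.SubElement(region, "LatLonAltBox")
    for tag, value in [("north", north), ("south", south),
                       ("east", east), ("west", west)]:
        ET.SubElement(box, tag).text = str(value)
    lod = ET.SubElement(region, "Lod")
    ET.SubElement(lod, "minLodPixels").text = str(min_pixels)
    href = ET.SubElement(ET.SubElement(link, "Link"), "href")
    # Hypothetical next-LOD endpoint; a real server would generate this
    # dynamically for each child tile.
    href.text = f"https://example.invalid/tiles/{tile}.kml"
    return ET.tostring(kml, encoding="unicode")

doc = lod_link("0/0/0", 45.0, 40.0, -100.0, -105.0)
```

Each served tile would embed several such links for its children, producing the cascade as the user zooms in.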

  14. Network characteristics for server selection in online games

    NASA Astrophysics Data System (ADS)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents an analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports, and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or racing games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
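One way to turn the latency-and-fairness observation into a concrete group selection rule is sketched below: keep only servers whose latency spread across the group stays under a fairness cap, then pick the one minimizing the group's worst latency. The cap and the latency numbers are invented for illustration.

```python
def select_server(latencies, max_spread_ms=50):
    """Pick a server for a group.

    latencies: {server_name: [latency_ms for each player in the group]}
    A server is 'fair' if the gap between its best- and worst-connected
    player is within max_spread_ms; among fair servers, minimize the
    worst-case latency.
    """
    candidates = {
        server: max(pings)
        for server, pings in latencies.items()
        if max(pings) - min(pings) <= max_spread_ms
    }
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

# Hypothetical measurements for a three-player group.
group_pings = {
    "us-east": [30, 40, 45],
    "eu-west": [110, 20, 150],   # unfair: 130 ms spread across the group
    "us-west": [80, 90, 85],
}
best = select_server(group_pings)
```

As the group grows, both filters bite harder, which mirrors the paper's finding that group selection is increasingly limited.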

  15. LASP Time Series Server (LaTiS): Overcoming Data Access Barriers via a Common Data Model in the Middle Tier (Invited)

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Wilson, A.

    2010-12-01

    The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an open-source, OPeNDAP-compliant, Java-Servlet-based RESTful web service to serve time series data. In addition to handling OPeNDAP-style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM), which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from its native form, performs some processing on the server, and presents the results to the client in its preferred form. The service is currently being used operationally to serve time series data for the LASP Interactive Solar Irradiance Data Center (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts on evolving beyond the time series abstraction and providing a general-purpose data service that can be orchestrated into larger workflows.
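The pluggable Reader/Filter/Writer workflow can be mimicked with plain Python callables composed over a shared (time, value) sample representation. The component names and formats are illustrative and are not the LaTiS API.

```python
import json

def csv_reader(text):
    """Reader: parse a native CSV source into (time, value) samples."""
    for line in text.strip().splitlines():
        t, v = line.split(",")
        yield (float(t), float(v))

def time_range_filter(samples, start, end):
    """Filter: server-side processing on the common representation."""
    return (s for s in samples if start <= s[0] < end)

def json_writer(samples):
    """Writer: render samples in the client's preferred output format."""
    return json.dumps([{"time": t, "value": v} for t, v in samples])

# Components compose freely because they all speak the common data model.
raw = "0,1.5\n1,2.0\n2,3.5\n3,4.0"
output = json_writer(time_range_filter(csv_reader(raw), 1, 3))
```

Swapping the reader (say, for a binary format) or the writer (say, for CSV output) requires no change to the filter, which is the decoupling the abstract describes.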

  16. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access, and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or add-ons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware requirements, such as smartphones (e.g., iOS, Android) and various desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight, and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.

  17. Installation of the National Transport Code Collaboration Data Server at the ITPA International Multi-tokamak Confinement Profile Database

    NASA Astrophysics Data System (ADS)

    Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.

    2002-11-01

The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves users of the need to master the differences between data formats and lets them focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages, and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is performed using built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.

  18. Weight and body mass index among female contraceptive clients.

    PubMed

    Kohn, Julia E; Lopez, Priscilla M; Simons, Hannah R

    2015-06-01

As obesity may affect the efficacy of some contraceptives, we examined weight, body mass index (BMI) and prevalence of obesity among female contraceptive clients at 231 U.S. health centers. A secondary aim was to analyze differences in contraceptive method use by obesity status. Cross-sectional study using de-identified electronic health record data from family planning centers. We analyzed contraceptive visits made by 147,336 females aged 15-44 years in 2013. A total of 46.1% of clients had BMI ≥25. Mean body weight was 154.4 lb (S.D.=41.9); mean BMI was 26.1 (S.D.=6.6). A total of 40% had BMI ≥26, when levonorgestrel emergency contraception may become less effective. Obese clients had higher odds of using a tier 1 or tier 3 contraceptive method and lower odds of using a tier 2 or hormonal method than non-obese clients. About half of contraceptive clients in this study population were overweight or obese. Contraceptive method choices differed by obesity status. All women - regardless of body size - should receive unbiased, evidence-based counseling on the full range of contraceptive options so that they can make informed choices. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Sandia Text ANaLysis Extensible librarY Server

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-05-11

    This is a server wrapper for STANLEY (Sandia Text ANaLysis Extensible librarY). STANLEY provides capabilities for analyzing, indexing and searching through text. STANLEY Server exposes this capability through a TCP/IP interface allowing third party applications and remote clients to access it.
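    The abstract does not specify the STANLEY wire protocol, so the following sketch only illustrates the general pattern it describes: a third-party client querying a text-analysis service over a plain TCP/IP socket. The newline-delimited `SEARCH` request/response format is invented for illustration, as is the throwaway in-process server.

```python
# Sketch: a TCP client/server exchange in the style of a STANLEY-like
# text-analysis service. Protocol details are invented for illustration.
import socket
import socketserver
import threading

class MiniIndexHandler(socketserver.StreamRequestHandler):
    """Stand-in server: parse 'SEARCH <term>' and reply with a fake hit count."""
    def handle(self):
        line = self.rfile.readline().decode().strip()
        term = line.split(None, 1)[1] if " " in line else line
        self.wfile.write(f"RESULTS 3 for {term}\n".encode())

def query_server(host, port, term):
    """Open a TCP connection, send one request line, read one reply line."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(f"SEARCH {term}\n".encode())
        return sock.makefile().readline().strip()

# Run a throwaway server on an ephemeral port for demonstration.
server = socketserver.TCPServer(("127.0.0.1", 0), MiniIndexHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
reply = query_server("127.0.0.1", port, "plasma")
server.shutdown()
print(reply)  # RESULTS 3 for plasma
```

    Exposing a library behind a socket interface like this is what lets remote clients in any language use it without linking against the library itself.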

  20. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

architecture, there are four tiers: Client (Web Application Clients), Presentation (Web Server), Processing (Application Server), Data (Database)... organization in each period. This data will be collected for analysis... notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will run statistical tests on this data, Pareto analyses and confirmation

  1. Thin client (web browser)-based collaboration for medical imaging and web-enabled data.

    PubMed

    Le, Tuong Huu; Malhi, Nadeem

    2002-01-01

Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components such as DHTML (Dynamic HTML), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and JavaServer Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing the client to remain thin and more accessible.

  2. Breaking through with Thin-Client Technologies: A Cost Effective Approach for Academic Libraries.

    ERIC Educational Resources Information Center

    Elbaz, Sohair W.; Stewart, Christofer

    This paper provides an overview of thin-client/server computing in higher education. Thin-clients are like PCs in appearance, but they do not house hard drives or localized operating systems and cannot function without being connected to a server. Two types of thin-clients are described: the Network Computer (NC) and the Windows Terminal (WT).…

  3. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
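    As a rough sketch of the queueing idea described above (not the patented implementation), the following shows two proxy queues being drained into a single metadata queue under a simple policy; the 3:1 interactive-to-batch ratio is an invented stand-in for the virtual machine monitor's resource allocation.

```python
# Illustrative merge of an interactive-client queue and a batch-client queue
# into one metadata queue, favouring interactive requests by a fixed ratio.
from collections import deque

def merge_queues(interactive, batch, interactive_share=3):
    """Drain two proxy queues into a single metadata queue, admitting up to
    `interactive_share` interactive requests per batch request."""
    interactive, batch = deque(interactive), deque(batch)
    merged = []
    while interactive or batch:
        for _ in range(interactive_share):
            if interactive:
                merged.append(interactive.popleft())
        if batch:
            merged.append(batch.popleft())
    return merged

order = merge_queues(["i1", "i2", "i3", "i4"], ["b1", "b2"])
print(order)  # ['i1', 'i2', 'i3', 'b1', 'i4', 'b2']
```

    A real allocator would set the ratio dynamically; here it is a constant purely to make the interleaving visible.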

  4. JMS Proxy and C/C++ Client SDK

    NASA Technical Reports Server (NTRS)

    Wolgast, Paul; Pechkam, Paul

    2007-01-01

JMS Proxy and C/C++ Client SDK (JMS signifies "Java Message Service" and SDK signifies "software development kit") is a software package for developing interfaces that enable legacy programs (here denoted "clients") written in the C and C++ languages to communicate with each other via a JMS broker. This package consists of two main components: the JMS proxy server component and the client C library SDK component. The JMS proxy server component implements a native Java process that receives and responds to requests from clients. This component can run on any computer that supports Java and a JMS client. The client C library SDK component is used to develop a JMS client program running in each affected C or C++ environment, without the need to run a Java virtual machine on the affected computer. A C client program developed with this SDK has most of the quality-of-service characteristics of standard Java-based client programs, including the following: durable subscriptions; asynchronous message receipt; such standard JMS message qualities as "TimeToLive," "Message Properties," and "DeliveryMode" (as the quoted terms are defined in previously published JMS documentation); and automatic reconnection of a JMS proxy to a restarted JMS broker.

  5. A Satellite Data-Driven, Client-Server Decision Support Application for Agricultural Water Resources Management

    NASA Technical Reports Server (NTRS)

    Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.

    2016-01-01

Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource intensive remote sensing data, as well as hydrologic and other ancillary information occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a light weight 'app' that

  6. A satellite data-driven, client-server decision support application for agricultural water resources management

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Johnson, L.; Kimball, J. S.

    2016-12-01

Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state of the art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource intensive remote sensing data, as well as hydrologic and other ancillary information occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a light weight `app` that

  7. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.
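    The client-side backup-and-replay idea can be sketched as follows; all class and method names are invented for illustration, and the in-memory "MDS" merely stands in for a server that keeps no log or journal.

```python
# Sketch of log-less metadata management: the client caches each request it
# sends, so the server need not journal; after a crash, cached requests are
# replayed to rebuild the server's metadata state.
class LoglessClient:
    def __init__(self):
        self.backup = {}   # request-id -> request, kept until acknowledged durable
        self.next_id = 0

    def send(self, mds, request):
        rid = self.next_id
        self.next_id += 1
        self.backup[rid] = request   # back up before handing to the MDS
        mds.handle(rid, request)
        return rid

    def release(self, rid):
        self.backup.pop(rid, None)   # MDS reports the change is durable

    def replay(self, mds):
        for rid, request in sorted(self.backup.items()):
            mds.handle(rid, request)  # recovery: re-send cached requests in order

class VolatileMDS:
    """In-memory MDS that loses all state on 'crash' -- no journal, no log."""
    def __init__(self):
        self.state = {}

    def handle(self, rid, request):
        op, name = request
        self.state[name] = op

    def crash(self):
        self.state = {}

client, mds = LoglessClient(), VolatileMDS()
client.send(mds, ("create", "/a"))
client.send(mds, ("create", "/b"))
mds.crash()                  # MDS loses its in-memory metadata
client.replay(mds)           # client-cached requests rebuild the state
print(sorted(mds.state))     # ['/a', '/b']
```

    In the paper's scheme the clients would drop each cached request once it is known to be durable; here nothing is released so the replay restores everything.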

  8. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, while also improving metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational. PMID:24892093

  9. Analyzing CRISM hyperspectral imagery using PlanetServer.

    NASA Astrophysics Data System (ADS)

    Figuera, Ramiro Marco; Pham Huu, Bang; Minin, Mikhail; Flahaut, Jessica; Halder, Anik; Rossi, Angelo Pio

    2017-04-01

Mineral characterization of planetary surfaces bears great importance for space exploration. In order to perform it, orbital hyperspectral imagery is widely used. In our research we use Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) [1] TRDR L observations with a spectral range of 1 to 4 µm. PlanetServer comprises a server, a web client and a Python client/API. The server side uses the Array DataBase Management System (DBMS) Raster Data Manager (Rasdaman) Community Edition [2]. OGC standards such as the Web Coverage Processing Service (WCPS) [3], an SQL-like language capable of querying information along the image cube, are implemented in the PetaScope component [4]. The client side uses NASA's Web World Wind [5], allowing the user to access the data in an intuitive way. The client consists of a globe where all cubes are deployed, a main menu where projections, base maps and RGB combinations are provided, and a plot dock where the spectral information is shown. The RGB combinator tool allows band combinations, such as the CRISM products [6], to be computed using WCPS. The spectral information is retrieved using WCPS and shown in the plot dock/widget. The USGS splib06a library [7] is available to compare CRISM spectra against laboratory spectra. The Python API provides an environment to create RGB combinations that can be embedded into existing pipelines. All employed libraries and tools are open source and can be easily adapted to other datasets. PlanetServer stands as a promising tool for spectral analysis on planetary bodies. M3/Moon and OMEGA datasets will soon be available. [1] S. Murchie et al., "Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on Mars Reconnaissance Orbiter (MRO)," J. Geophys. Res. E Planets, 2007. [2] P. Baumann, A. Dehmel, P. Furtado, R. Ritsch, and N. Widmann, "The multidimensional database system RasDaMan," ACM SIGMOD Rec., vol. 27, no. 2, pp. 575-577, Jun. 1998. [3] P. Baumann, "The OGC web coverage processing service (WCPS) standard
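    For illustration, a WCPS band-combination request of the kind described can be assembled as a query string; the coverage identifier and band names below are hypothetical, not actual CRISM coverage names.

```python
# Sketch: build a WCPS 1.0 query that encodes three bands of a coverage as an
# RGB image. Coverage and band identifiers here are placeholders.
def rgb_wcps(coverage, r, g, b, fmt="png"):
    """Return a WCPS request selecting three bands as red/green/blue."""
    return (
        f"for c in ({coverage}) "
        f'return encode({{ red: c.{r}; green: c.{g}; blue: c.{b} }}, "{fmt}")'
    )

query = rgb_wcps("frt0000a09c", "band_233", "band_78", "band_13")
print(query)
```

    The server would evaluate such a query inside the image cube and stream back the encoded image, which is how both the RGB combinator and the spectral plots avoid shipping the full cube to the client.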

  10. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of OGC WMS 1.1.1, built as a FastCGI application that uses the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are handled by a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back end allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use and, depending on the storage format used, has better performance than other available implementations. WMS Server 2.0 is a high-performance WMS implementation thanks to its FastCGI architecture. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
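    A WMS 1.1.1 GetMap request of the kind this server answers can be assembled as follows; the endpoint and layer name are placeholders, while the parameter set follows the OGC WMS 1.1.1 specification.

```python
# Sketch: assemble a WMS 1.1.1 GetMap request URL.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width, height,
               srs="EPSG:4326", fmt="image/jpeg"):
    """Build a GetMap URL; bbox is (minx, miny, maxx, maxy) in SRS units."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "SRS": srs, "BBOX": ",".join(map(str, bbox)),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("http://example.org/wms", "global_mosaic",
                 (-180, -90, 180, 90), 512, 256)
print(url)
```

    A tiled-WMS front end gains its speed by recognizing when such a request aligns exactly with a pre-generated tile, which is also what makes the fast black/no-data response possible.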

  11. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  12. [The database server for the medical bibliography database at Charles University].

    PubMed

    Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M

    1998-01-01

In the medical community, bibliographic databases are widely accepted as a most important source of information for both theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by SilverPlatter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible over the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and web-based access that requires no specialized software (except the WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed to by consortia whose individual members would not subscribe to them by themselves.

  13. Assessing Server Fault Tolerance and Disaster Recovery Implementation in Thin Client Architectures

    DTIC Science & Technology

    2007-09-01

server • Windows 2003 server; Processor: AMD Geode GX; Memory: 512MB Flash / 256MB DDR RAM; I/O/Peripheral Support: VGA-type video output (DB-15)... 2000 Advanced Server; Processor: AMD Geode NX 1500; Memory: 256MB, 512MB or 1GB DDR SDRAM, 1GB or 512MB Flash; I/O/Peripheral Support: SiS741 GX

  14. GeneBee-net: Internet-based server for analyzing biopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, L.I.; Ivanov, V.V.; Nikolaev, V.K.

This work describes a network server for searching databanks of biopolymer structures and performing other biocomputing procedures; it is available via direct Internet connection. Basic server procedures are dedicated to homology (similarity) search of sequence and 3D structure of proteins. The homologies found can be used to build multiple alignments, predict protein and RNA secondary structure, and construct phylogenetic trees. In addition to traditional methods of sequence similarity search, the authors propose "non-matrix" (correlational) search. An analogous approach is used to identify regions of similar tertiary structure of proteins. Algorithm concepts and usage examples are presented for the new methods. Service logic is based upon interaction of a client program and server procedures. The client program allows the compilation of queries and the processing of results of an analysis.

  15. Declining Inconsistent Condom Use but Increasing HIV and Syphilis Prevalence Among Older Male Clients of Female Sex Workers

    PubMed Central

    Chen, Yi; Abraham Bussell, Scottie; Shen, Zhiyong; Tang, Zhenzhu; Lan, Guanghua; Zhu, Qiuying; Liu, Wei; Tang, Shuai; Li, Rongjian; Huang, Wenbo; Huang, Yuman; Liang, Fuxiong; Wang, Lu; Shao, Yiming; Ruan, Yuhua

    2016-01-01

Clients of female sex workers (CFSWs) are a bridge population for the spread of HIV and syphilis to low- or average-risk heterosexuals. Most studies have examined the point prevalence of these infections in CFSWs. Limited evidence suggests that older CFSWs are at a higher risk of acquiring sexually transmitted diseases compared with younger clients. Thus, we sought to describe long-term trends in HIV, syphilis, and hepatitis C (HCV) to better understand how these infections differ by sex worker classification and client age. We also examined trends in HIV, syphilis, and HCV among categories of female sex workers (FSWs). We conducted serial cross-sectional studies from 2010 to 2015 in the Guangxi autonomous region, China. We collected demographic and behavior variables. FSWs and their clients were tested for HIV, syphilis, and HCV antibodies. Positive HIV and syphilis serologies were confirmed by Western blot and rapid plasma reagin, respectively. Clients were categorized as middle age (40–49 years) and older clients (≥50 years). FSWs were categorized as high-tier, middle-tier, or low-tier based on the payment amount charged for sex and their work venue. The chi-square test for trends was used for testing changes in prevalence over time. By 2015, low-tier FSWs (LTFSWs) accounted for almost half of all FSWs, and they had the highest HIV prevalence at 1.4%. HIV prevalence declined significantly for FSWs (high-tier FSWs, P = 0.003; middle-tier FSWs, P = 0.021; LTFSWs, P < 0.001). Syphilis infections significantly declined for FSWs (P < 0.001) but only to 7.3% for LTFSWs. HCV and intravenous drug use were uncommon in FSWs. HIV prevalence increased for older age clients (1.3%–2.0%, P = 0.159) while syphilis prevalence remained stable. HCV infections were halved among older clients in 3 years (1.7%–0.8%, P < 0.001). Condom use during the last sexual encounter increased for FSWs and CFSWs. Few clients reported sex with men or intravenous

  16. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, was modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.

  17. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLs. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…
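    A minimal client-side image map with the accessibility features Section 508 asks for (alt text on the image and on every clickable area) can be generated like this; the image, map name and target URLs are made up for the example.

```python
# Sketch: emit an accessible client-side image map. Each <area> carries alt
# text, which is what assistive technology announces for the hyperlink.
def image_map(img_src, map_name, areas):
    """areas: list of (shape, coords, href, alt) tuples."""
    tags = [f'<img src="{img_src}" usemap="#{map_name}" alt="clickable map">',
            f'<map name="{map_name}">']
    for shape, coords, href, alt in areas:
        tags.append(f'  <area shape="{shape}" coords="{coords}" '
                    f'href="{href}" alt="{alt}">')
    tags.append("</map>")
    return "\n".join(tags)

html = image_map("campus.png", "campus",
                 [("rect", "0,0,100,80", "library.html", "Library"),
                  ("circle", "150,60,40", "lab.html", "Computer lab")])
print(html)
```

    Because the `<map>`/`<area>` markup is resolved entirely in the browser, no server-side CGI is involved, which is exactly why client-side maps are the accessible choice.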

  18. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries, significantly decreasing user access time by an average factor of 2.3, access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs), access to non-Wide Area Information Server (WAIS) databases, and compatibility with the Z39.50 protocol.

  19. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a Web browser alone cannot display medical images that require image processing such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look and feel of an imaging workstation, in both functionality and speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which might otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client; a small number of clients on a fast network; and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
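    The kind of server-side lookup-table (window/level) transform mentioned above can be sketched as a simple linear mapping from raw pixel values to 8-bit display values; the function below is an illustration, not the authors' implementation.

```python
# Sketch of a linear VOI (window/level) lookup-table transform, the sort of
# processing applied server-side to DICOM pixel data before encoding a
# browser-displayable image.
def window_level(pixels, center, width):
    """Map raw pixel values to 0..255 display values."""
    lo, hi = center - width / 2, center + width / 2
    out = []
    for p in pixels:
        if p <= lo:
            out.append(0)            # below the window: black
        elif p >= hi:
            out.append(255)          # above the window: white
        else:
            out.append(round((p - lo) / width * 255))  # linear ramp inside
    return out

# e.g. a soft-tissue-style window applied to CT values (Hounsfield units)
print(window_level([-1000, 40, 400], center=40, width=400))  # [0, 128, 255]
```

    As the user drags the mouse, the client would simply request a re-rendered image with new center/width values, keeping all pixel math on the server.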

  20. Performance of the High Sensitivity Open Source Multi-GNSS Assisted GNSS Reference Server.

    NASA Astrophysics Data System (ADS)

    Sarwar, Ali; Rizos, Chris; Glennon, Eamonn

    2015-06-01

and accuracy. Three different configurations of Multi-GNSS assistance servers were used, namely Cloud-Client-Server, Demilitarized Zone (DMZ) Client-Server and PC-Client-Server, distinguished by the connectivity location of client and server. The impact on performance of server and/or client initiation, hardware capability, network latency, processing delay and computation time, together with storage, scalability, processing and load-sharing capabilities, was analysed. The performance of the OSGRS is compared against commercial GNSS, Assisted-GNSS and WSN-enabled GNSS devices. The OSGRS system demonstrated lower TTFF and higher availability.

  1. Understanding the usage of the Helioviewer Project clients and services

    NASA Astrophysics Data System (ADS)

    Ireland, J.; Zahniy, S.; Mueller, D.; Nicula, B.; Verstringe, F.; Bourgoignie, B.; Buchlin, E.; Alingery, P.

    2017-12-01

    The Helioviewer Project enables visual exploration of the Sun and the inner heliosphere for everyone, everywhere via intuitive interfaces and novel technology. The project mainly develops two clients, helioviewer.org and JHelioviewer, and the server-side capabilities accessed via those clients. Images from many different ground and space-based sources are currently available from multiple servers. Solar and heliospheric feature and event information, magnetic field extrapolations and important time-series can also be browsed and visualized using Helioviewer Project clients. Users of the Helioviewer Project have made over two million movies and many millions of screenshots since detailed (and anonymous) logging of Helioviewer Project usage was implemented in February 2011. These usage logs are analyzed to give a detailed breakdown on user interaction with solar and heliospheric data via Helioviewer Project clients and services. We present summary statistics on how our users are using our clients and services, which data they are interested in, and how they choose to interact with different data sources. At the poster presentation we will also be soliciting ideas from the community to improve our clients and services.

  2. Sirocco Storage Server v. pre-alpha 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee

Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.

  3. Advancing the Power and Utility of Server-Side Aggregation

    NASA Technical Reports Server (NTRS)

    Fulker, Dave; Gallagher, James

    2016-01-01

During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.
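    For readers unfamiliar with DAP2, a Hyrax-style data request is simply a dataset URL plus a constraint expression selecting variables and index ranges; the sketch below builds one, with a placeholder dataset path.

```python
# Sketch: build a DAP2 data-request URL of the kind a Hyrax server answers.
# The dataset path is a placeholder; the [start:stride:stop] hyperslab syntax
# follows the DAP2 constraint-expression convention.
def dap2_url(dataset, var, slices):
    """slices: list of (start, stride, stop) index triples, one per dimension."""
    ce = var + "".join(f"[{a}:{b}:{c}]" for a, b, c in slices)
    return f"{dataset}.dods?{ce}"

url = dap2_url("http://example.org/opendap/sst.nc", "sst",
               [(0, 1, 10), (20, 2, 40)])
print(url)  # http://example.org/opendap/sst.nc.dods?sst[0:1:10][20:2:40]
```

    Server-side aggregation extends this picture: the client constrains one logical dataset, and the server resolves it across many underlying files.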

  4. General bulk service queueing system with N-policy, multiplevacations, setup time and server breakdown without interruption

    NASA Astrophysics Data System (ADS)

    Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    In this paper, we have considered an M[X]/(a,b)/1 queueing system with server breakdown without interruption, multiple vacations, setup times and N-policy. After a batch service completion, if the queue size ξ is less than a, then the server immediately takes a vacation. Upon returning from a vacation, if the queue length is less than N, the server takes another vacation; this process continues until the server finds at least N customers in the queue. After a vacation, if the server finds at least N customers waiting for service, the server requires a setup time before starting service. After a batch service completion, if the number of waiting customers ξ is at least a, then the server serves a batch of min(ξ,b) customers, where b ≥ a. We derive the probability generating function of the queue length at an arbitrary time epoch. Further, we obtain some important performance measures.
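
    The control rule described above can be illustrated with a small decision function; the action labels and signature below are our own sketch of the paper's service discipline, not code from it:

```python
def next_action(queue_len, a, b, N, after_vacation=False):
    """Sketch of the (a,b) bulk-service rule with N-policy.

    After a vacation the server needs at least N waiting customers
    (and a setup period) before serving; after a service completion
    it serves a batch of min(queue_len, b) whenever queue_len >= a.
    """
    if after_vacation:
        if queue_len < N:
            return ("vacation", 0)          # fewer than N waiting: another vacation
        return ("setup_then_serve", min(queue_len, b))
    if queue_len < a:
        return ("vacation", 0)              # too few customers: go on vacation
    return ("serve", min(queue_len, b))     # bulk service of up to b customers
```

    For example, with a = 3, b = 5 and N = 4, a queue of 10 after a service completion yields a batch of 5, while a queue of 3 found on returning from vacation triggers another vacation.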

  5. A client-server software for the identification of groundwater vulnerability to pesticides at regional level.

    PubMed

    Di Guardo, Andrea; Finizio, Antonio

    2015-10-15

    The groundwater VULnerability to PESticide software system (VULPES) is a user-friendly, GIS-based, client-server software system developed to identify areas vulnerable to pesticides at the regional level, making use of pesticide fate models. It is a Decision Support System aimed at helping public policy makers investigate areas sensitive to specific substances and propose limitations of use or mitigation measures. VULPES identifies so-called Uniform Geographical Units (UGUs), which are areas characterised by the same agro-environmental conditions. In each UGU it applies the PELMO model to obtain the 80th percentile of the substance concentration at 1 metre depth; VULPES then creates a vulnerability map in shapefile format which classifies the outputs by comparing them with a lower threshold set to the legal limit concentration in groundwater (0.1 μg/l). This paper describes the software structure in detail, together with a case study applying the terbuthylazine herbicide to the Lombardy region territory. Three zones with different degrees of vulnerability have been identified and described. Copyright © 2015 Elsevier B.V. All rights reserved.
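
    The classification step against the legal limit amounts to a threshold comparison; a minimal sketch follows (the class labels and the extra 10x band are illustrative assumptions, not VULPES's actual scheme):

```python
LEGAL_LIMIT_UGL = 0.1   # legal limit concentration in groundwater (ug/l)

def vulnerability_class(conc_80th_ugl, limit=LEGAL_LIMIT_UGL):
    """Classify a UGU by its 80th-percentile concentration at 1 m depth.

    The vulnerable / not-vulnerable split follows the abstract; the
    'highly vulnerable' band at 10x the limit is purely illustrative.
    """
    if conc_80th_ugl >= 10 * limit:
        return "highly vulnerable"
    if conc_80th_ugl >= limit:
        return "vulnerable"
    return "not vulnerable"
```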

  6. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  7. EPICS Channel Access Server for LabVIEW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukov, Alexander P.

    It can be challenging to interface National Instruments LabVIEW (http://www.ni.com/labview/) with EPICS (http://www.aps.anl.gov/epics/). Such an interface is required when an instrument control program is developed in LabVIEW but must also be part of a global control system, as is frequently the case in large accelerator facilities. The Channel Access Server is written in LabVIEW, so it works on any hardware/software platform where LabVIEW is available. It provides full server functionality, so any EPICS client can communicate with it.

  8. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that treat energy efficiency as a way to minimize operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- and low-cost measures, such as raising cooling set points and better airflow management, to more involved but cost-effective measures, including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers rather than server rooms. Backup power requirements and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation and the implementation of energy efficiency measures in small server rooms.
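
    Power Usage Effectiveness, used above to compare the four rooms, is simply the ratio of total facility power to IT equipment power; a PUE of 2.1 means that for every watt reaching the servers, another 1.1 W goes to cooling and other overhead. A trivial sketch:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power.

    An ideal facility approaches 1.0; the surveyed rooms ranged 1.5-2.1.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw
```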

  9. The SAPHIRE server: a new algorithm and implementation.

    PubMed Central

    Hersh, W.; Leone, T. J.

    1995-01-01

    SAPHIRE is an experimental information retrieval system implemented to test new approaches to automated indexing and retrieval of medical documents. Due to limitations in its original concept-matching algorithm, a modified algorithm has been implemented which allows greater flexibility in partial matching and different word order within concepts. With the concomitant growth in client-server applications and the Internet in general, the new algorithm has been implemented as a server that can be accessed via other applications on the Internet. PMID:8563413

  10. Developing Server-Side Infrastructure for Large-Scale E-Learning of Web Technology

    ERIC Educational Resources Information Center

    Simpkins, Neil

    2010-01-01

    The growth of E-business has made experience in server-side technology an increasingly important area for educators. Server-side skills are in increasing demand and recognised to be of relatively greater value than comparable client-side aspects (Ehie, 2002). In response to this, many educational organisations have developed E-business courses,…

  11. A comparison of Tier 1 and Tier 3 medical homes under Oklahoma Medicaid program.

    PubMed

    Kumar, Jay I; Anthony, Melody; Crawford, Steven A; Arky, Ronald A; Bitton, Asaf; Splinter, Garth L

    2014-04-01

    The patient-centered medical home (PCMH) is a team-based model of care that seeks to improve quality of care and control costs. The Oklahoma Health Care Authority (OHCA) directs Oklahoma's Medicaid program and contracts with 861 medical home practices across the state at one of three tiers of operational capacity: Tier 1 (Basic), Tier 2 (Advanced) and Tier 3 (Optimal). Only 13.5% (n = 116) of homes are at the optimal level; the majority (59%, n = 508) are at the basic level. In this study, we sought to determine the barriers that prevented Tier 1 homes from advancing to the Tier 3 level and the incentives that would motivate providers to advance from Tier 1 to 3. Our hypotheses were that Tier 1 medical homes were located in smaller practices with limited resources and that their providers were not convinced the expense of advancing from Tier 1 to Tier 3 status was worth the added value. We analyzed OHCA records to compare the 508 Tier 1 (entry-level) with the 116 Tier 3 (optimal) medical homes for demographic differences with regard to location (urban or rural), duration as a medical home, percentage of contracts that were group contracts, number of providers per group contract, panel age range, panel size, and member-provider ratio. We surveyed all 508 Tier 1 homes with a mail-in survey, with focused follow-up visits, to identify the barriers to, and incentives for, upgrading from Tier 1 to Tier 2 or 3. We found that Tier 1 homes were more likely to be in rural areas, run by solo practitioners, serve exclusively adult panels, have smaller panel sizes, and have higher member-to-provider ratios than Tier 3 homes. Our survey had a 35% response rate. Results showed that the most difficult changes for Tier 1 homes to implement were providing 4 hours of after-hours care and a dedicated program for mental illness and substance abuse. The results also showed that the most compelling incentives for encouraging Tier 1 homes to upgrade their tier status were less

  12. Improvements to the National Transport Code Collaboration Data Server

    NASA Astrophysics Data System (ADS)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDS Plus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper reviews the status of the data server and discusses recent improvements, such as its modularization and the addition of hdf5 and MDS Plus data file writing capability.

  13. A Connection Admission Control Method for Web Server Systems

    NASA Astrophysics Data System (ADS)

    Satake, Shinsuke; Inai, Hiroshi; Saito, Tomoya; Arai, Tsuyoshi

    Most browsers establish multiple connections and download files in parallel to reduce the response time. On the other hand, a web server limits the total number of connections to prevent itself from being overloaded. That limit can decrease the response time, but increases the loss probability, i.e., the probability that a newly arriving client is rejected. This paper proposes a connection admission control method which accepts only one connection from a newly arriving client when the number of connections exceeds a threshold, but accepts multiple new connections when the number of connections is below the threshold. Our method aims to reduce the response time by allowing as many clients as possible to establish multiple connections, while also reducing the loss probability. To reduce the time web server administrators spend finding an adequate threshold, we introduce a procedure which approximately calculates the loss probability for a given threshold. Via simulation, we validate the approximation and show the effectiveness of the admission control.
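
    The threshold rule can be sketched as a small admission function (the names and the explicit capacity check are our own illustrative choices, not the paper's model):

```python
def connections_granted(active, requested, threshold, capacity):
    """Threshold-based connection admission control for a web server (sketch).

    Below the threshold a newly arriving client may open multiple parallel
    connections; at or above the threshold only a single connection is
    granted; at capacity the client is rejected outright (a loss).
    """
    if active >= capacity:
        return 0                                   # server full: client rejected
    if active < threshold:
        return min(requested, capacity - active)   # parallel downloads allowed
    return 1                                       # congested: one connection only
```

    For instance, with a threshold of 20 and a capacity of 50, a client requesting 4 connections gets all 4 when 10 are active, but only 1 when 30 are active.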

  14. Towards Big Earth Data Analytics: The EarthServer Approach

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly consist of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built around rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data.
In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data

  15. Declining Inconsistent Condom Use but Increasing HIV and Syphilis Prevalence Among Older Male Clients of Female Sex Workers: Analysis From Sentinel Surveillance Sites (2010-2015), Guangxi, China.

    PubMed

    Chen, Yi; Abraham Bussell, Scottie; Shen, Zhiyong; Tang, Zhenzhu; Lan, Guanghua; Zhu, Qiuying; Liu, Wei; Tang, Shuai; Li, Rongjian; Huang, Wenbo; Huang, Yuman; Liang, Fuxiong; Wang, Lu; Shao, Yiming; Ruan, Yuhua

    2016-05-01

    Clients of female sex workers (CFSWs) are a bridge population for the spread of HIV and syphilis to low- or average-risk heterosexuals. Most studies have examined the point prevalence of these infections in CFSWs. Limited evidence suggests that older CFSWs are at higher risk of acquiring sexually transmitted diseases than younger clients. Thus, we sought to describe long-term trends in HIV, syphilis, and hepatitis C (HCV) to better understand how these infections differ by sex worker classification and client age. We also examined trends in HIV, syphilis, and HCV among categories of female sex workers (FSWs). We conducted serial cross-sectional studies from 2010 to 2015 in the Guangxi autonomous region, China. We collected demographic and behavioral variables. FSWs and their clients were tested for HIV, syphilis, and HCV antibodies. Positive HIV and syphilis serologies were confirmed by Western blot and rapid plasma reagin, respectively. Clients were categorized as middle-aged (40-49 years) or older clients (≥50 years). FSWs were categorized as high-tier, middle-tier, or low-tier based on the payment amount charged for sex and their work venue. The chi-square test for trend was used for testing changes in prevalence over time. By 2015, low-tier FSWs (LTFSWs) accounted for almost half of all FSWs and had the highest HIV prevalence, at 1.4%. HIV prevalence declined significantly for FSWs (high-tier FSWs, P = 0.003; middle-tier FSWs, P = 0.021; LTFSWs, P < 0.001). Syphilis infections significantly declined for FSWs (P < 0.001) but only to 7.3% for LTFSWs. HCV and intravenous drug use were uncommon in FSWs. HIV prevalence increased for older clients (1.3%-2.0%, P = 0.159) while syphilis prevalence remained stable. HCV infections were halved among older clients in 3 years (1.7%-0.8%, P < 0.001). Condom use during the last sexual encounter increased for FSWs and CFSWs. Few clients reported sex with men or intravenous drug use. Clients

  16. Selecting a Z39.50 Client or Web Gateway.

    ERIC Educational Resources Information Center

    Turner, Fay

    1998-01-01

    Provides a brief description of the Z39.50 information retrieval standard and reviews evaluation criteria and questions that should be asked when selecting a Z39.50 client. Areas for consideration include whether to buy or build a Z39.50 client, the end-user's requirements, connecting to a remote server, searching, managing the search response,…

  17. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2012-01-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One earlier approach is the Greedy algorithm proposed by Qiu et al., which requires knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the Greedy algorithm of Qiu et al. and with an optimum algorithm. We found that our approach achieves better results than the Greedy algorithm, although its computational time is higher.

  18. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2011-12-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One earlier approach is the Greedy algorithm proposed by Qiu et al., which requires knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the Greedy algorithm of Qiu et al. and with an optimum algorithm. We found that our approach achieves better results than the Greedy algorithm, although its computational time is higher.
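
    A minimal genetic algorithm for this kind of replica placement might look like the sketch below; the representation (an individual is a set of k replica sites), operators, and parameters are our own illustrative choices, not those of the paper. Fitness is the total distance from every client to its nearest chosen replica.

```python
import random

def placement_cost(replicas, dist):
    """Sum over clients of the distance to the nearest chosen replica site."""
    return sum(min(row[r] for r in replicas) for row in dist)

def ga_replica_placement(dist, n_sites, k, pop_size=20, generations=50, seed=1):
    """Evolve a set of k replica sites minimizing total client distance."""
    rng = random.Random(seed)
    pop = [tuple(sorted(rng.sample(range(n_sites), k))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: placement_cost(ind, dist))   # rank by fitness
        survivors = pop[: pop_size // 2]                      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            pool = sorted(set(p1) | set(p2))                  # crossover pool
            child = set(rng.sample(pool, k)) if len(pool) >= k else set(p1)
            if rng.random() < 0.3:                            # mutation: swap a site
                child.discard(rng.choice(sorted(child)))
                child.add(rng.randrange(n_sites))
            while len(child) < k:                             # repair short individuals
                child.add(rng.randrange(n_sites))
            children.append(tuple(sorted(child)))
        pop = survivors + children
    return min(pop, key=lambda ind: placement_cost(ind, dist))
```

    Because the best individual always survives selection, the returned cost never worsens across generations, mirroring the elitism that makes such GAs competitive with the Greedy heuristic at higher computational cost.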

  19. Client/Server data serving for high performance computing

    NASA Technical Reports Server (NTRS)

    Wood, Chris

    1994-01-01

    This paper examines the industry requirements for shared network data storage and sustained high-speed (tens to hundreds to thousands of megabytes per second) network data serving via the NFS and FTP protocol suites. It discusses the current structural and architectural impediments to achieving these data rates cost-effectively on many general-purpose servers, and describes an architecture and resulting product family that addresses these problems. The sustained performance levels achieved in the lab are presented, along with a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  20. WriteShield: A Pseudo Thin Client for Prevention of Information Leakage

    NASA Astrophysics Data System (ADS)

    Kirihata, Yasuhiro; Sameshima, Yoshiki; Onoyama, Takashi; Komoda, Norihisa

    While thin-client systems are diffusing as an effective security measure in enterprises and organizations, a new approach called the pseudo thin-client system has emerged. In this system, the local disks of clients are write-protected and user data is forced onto the central file server, achieving the same security effect as conventional thin-client systems. Since it takes a purely software-based approach, it does not require hardware enhancement of the network or servers, reducing installation cost. However, several problems remain, such as the lack of write control for external media, the possibility of memory depletion, and weaker security due to the exceptional write permission granted to system processes. In this paper, we propose WriteShield, a pseudo thin-client system which solves these issues. In this system, the local disks are write-protected with a volume filter driver, and a virtual cache mechanism extends the memory cache size for the write protection. This paper presents the design and implementation details of WriteShield. Besides this, we describe a security analysis, a simulation evaluation of paging algorithms for the virtual cache mechanism, and disk I/O performance measurements that verify its feasibility in an actual environment.
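
    The core write-protection idea, intercepting writes below the file system and redirecting them to a volatile cache so the disk image never changes, can be sketched in a few lines. The block-level granularity and dict-based stores are our simplifications of what a real volume filter driver does:

```python
class WriteProtectedDisk:
    """Sketch of a write-redirect overlay: the backing store is never modified.

    Writes land in an in-memory overlay keyed by block number; reads consult
    the overlay first and fall back to the read-only backing store.
    Discarding the overlay (e.g. at reboot) restores the pristine disk.
    """
    def __init__(self, backing):
        self._backing = dict(backing)   # read-only block -> data snapshot
        self._overlay = {}              # volatile redirected writes

    def write(self, block, data):
        self._overlay[block] = data     # never touches the backing store

    def read(self, block):
        return self._overlay.get(block, self._backing.get(block))

    def discard(self):
        self._overlay.clear()           # back to the original disk contents
```

    The memory-depletion problem the paper addresses arises precisely because this overlay lives in RAM, which motivates WriteShield's virtual cache and paging evaluation.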

  1. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. The new server's Back-End component will use the server infrastructure developed by HAO for the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal: the HAO server was modified so that it loads 'data handlers' at run time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New Front-End modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development.
In addition to support for DAP/2.0 and prototypical support for

  2. A Network Design Architecture for Distribution of Generic Scene Graphs

    DTIC Science & Technology

    1999-09-01

    with UML. Addison Wesley. Deitel, H. and Deitel, P. 1994. C++ How to Program. Prentice Hall. Deitel, H. and Deitel, P. 1998. Java How to Program. Prentice Hall. Eckel, B. 1998. Thinking in Java. Prentice Hall. Edwards, J. 1997. 3-Tier Client/Server At Work. John

  3. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database, highlighting the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
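
    The bag-of-words model mentioned above quantizes each local descriptor to its nearest "visual word" and describes an image by the resulting word histogram. A toy version, with a tiny vocabulary and plain Euclidean assignment, might look like:

```python
def bow_histogram(descriptors, vocabulary):
    """Map each local descriptor to its nearest visual word (squared
    Euclidean distance) and return the word-count histogram that
    describes the whole image."""
    hist = [0] * len(vocabulary)
    for desc in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda w: sum((d - v) ** 2
                                        for d, v in zip(desc, vocabulary[w])))
        hist[nearest] += 1
    return hist
```

    Production systems learn vocabularies of tens of thousands to millions of words with k-means and use inverted files for fast matching; this sketch only illustrates the quantization step.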

  4. EarthServer: a Summary of Achievements in Technology, Services, and Standards

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2015-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly consist of coverage data, according to ISO and OGC defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, has united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level, standards-based query languages which unify data and metadata search in a simple, yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneering Array DBMS built for any-size multi-dimensional raster data, being extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly

  5. Studying the co-evolution of protein families with the Mirrortree web server.

    PubMed

    Ochoa, David; Pazos, Florencio

    2010-05-15

    The Mirrortree server allows users to graphically and interactively study the co-evolution of two protein families and investigate their possible interactions and functional relationships in a taxonomic context. The server can start from single sequences and hence can be used by non-expert users. The web server is freely available at http://csbg.cnb.csic.es/mtserver and was tested in the main web browsers. Adobe Flash Player is required on the client side to perform the interactive assessment of co-evolution. Contact: pazos@cnb.csic.es. Supplementary data are available at Bioinformatics online.

  6. An Array Library for Microsoft SQL Server with Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques beyond what is used for ordinary files. Relational database systems have been used successfully to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At the very least, data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. The library is also designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process. 
We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory
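
    A missing native array type is commonly worked around by serializing fixed-size numeric arrays into binary blobs stored in an ordinary binary column. The little-endian, length-prefixed layout below is our own illustrative choice, not the Array Library's actual wire format:

```python
import struct

def pack_array(values, typecode="d"):
    """Serialize a numeric array as a length-prefixed little-endian blob."""
    return struct.pack(f"<I{len(values)}{typecode}", len(values), *values)

def unpack_array(blob, typecode="d"):
    """Recover the array from a blob produced by pack_array."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return list(struct.unpack_from(f"<{n}{typecode}", blob, 4))
```

    Round-tripping through such a blob is cheap, and keeping the layout fixed is what lets server-side code hand the raw buffer straight to math libraries without copying.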

  7. Thin Client Architecture: The Promise and the Problems.

    ERIC Educational Resources Information Center

    Machovec, George S.

    1997-01-01

    Describes thin clients, a networking technology that allows organizations to provide software applications over networked workstations connected to a central server. Topics include corporate settings; major advantages, including cost effectiveness and increased computer security; problems; and possible applications for large public and academic…

  8. Lsiviewer 2.0 - a Client-Oriented Online Visualization Tool for Geospatial Vector Data

    NASA Astrophysics Data System (ADS)

    Manikanta, K.; Rajan, K. S.

    2017-09-01

    Geospatial data visualization systems have predominantly been applications installed and run in a desktop environment. Over the last decade, with the advent of web technologies and their adoption by the geospatial community, the server-client model, in which the server handles the data and the client renders and visualizes it, has been the most prevalent approach in Web-GIS. While client devices have become functionally more powerful in recent years, this model has largely ignored them and remains a server-dominant computing paradigm. In this paper, an attempt has been made to develop and demonstrate LSIViewer, a simple, easy-to-use and robust online geospatial data visualisation system for the user's own data that harnesses the client's capabilities for data rendering and user-interactive styling, with a reduced load on the server. The developed system supports multiple geospatial vector formats and can be integrated with other web-based systems like WMS, WFS, etc. The technology stack used to build this system is Node.js on the server side and HTML5 Canvas and JavaScript on the client side. Various tests run on a range of vector datasets, up to 35 MB, showed that the time taken to render the vector data using LSIViewer is comparable to that of a desktop GIS application, QGIS, on an identical system.

  9. The Common Gateway Interface (CGI) for Enhancing Access to Database Servers via the World Wide Web (WWW).

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1995-01-01

    Explains the Common Gateway Interface (CGI) protocol as a set of rules for passing information from a Web server to an external program such as a database search engine. Topics include advantages over traditional client/server solutions, limitations, sample library applications, and sources of information from the Internet. (LRW)
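
    At its core, CGI hands the request to an external program through environment variables (e.g. QUERY_STRING) and takes the program's stdout, headers followed by a blank line and a body, as the response. A minimal gateway program might look like this sketch (the parameter name `q` is purely illustrative):

```python
from urllib.parse import parse_qs

def cgi_respond(environ):
    """Build a CGI-style response from the environment a web server passes in.

    A real CGI program would read os.environ and print to stdout; taking the
    environment as a parameter here just makes the sketch easy to test.
    """
    params = parse_qs(environ.get("QUERY_STRING", ""))
    term = params.get("q", [""])[0]            # 'q' is an illustrative name
    body = f"You searched for: {term}\n"
    return ("Content-Type: text/plain\r\n"
            "\r\n" + body)
```

    This per-request process launch is exactly the overhead that later led servers toward persistent alternatives such as FastCGI and servlet engines.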

  10. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    NASA Astrophysics Data System (ADS)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014; Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), the use of WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind (e.g. Hogan, 2011) virtual globe as its visualisation engine, and the array database Rasdaman Community Edition as its core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible at http://planetserver.eu. All its code base is going to be available on GitHub.

  11. Implementation of a real-time multi-channel gateway server in ubiquitous integrated biotelemetry system for emergency care (UIBSEC).

    PubMed

    Cheon, Gyeongwoo; Shin, Il Hyung; Jung, Min Yang; Kim, Hee Chan

    2009-01-01

    We developed a gateway server to support various types of bio-signal monitoring devices for ubiquitous emergency healthcare in a reliable, effective, and scalable way. The server provides multiple channels supporting real-time N-to-N client connections. We applied our system to four types of health monitoring devices: a 12-channel electrocardiograph (ECG), an oxygen saturation (SpO(2)) monitor, and two medical imaging devices (an ultrasonograph and a digital skin microscope). Different types of telecommunication networks were tested: WIBRO, CDMA, wireless LAN, and wired internet. We measured the performance of our system in terms of the transmission rate and the number of simultaneous connections. The results show that the proposed network communication strategy can be successfully applied to the ubiquitous emergency healthcare service, providing a rate fast enough for real-time video transmission and multiple connections among patients and medical personnel.

  12. MADGE: scalable distributed data management software for cDNA microarrays.

    PubMed

    McIndoe, Richard A; Lanzen, Aaron; Hurtz, Kimberly

    2003-01-01

    The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanisms of disease, monitor disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the gene expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective data management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. This application was designed using a 3-tier client-server model. The data access layer (1st tier) contains the relational database system, tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an internet-based user interface that contains both client- and server-side code for dynamic interactions with the user. This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.

  13. A Public-Key Based Authentication and Key Establishment Protocol Coupled with a Client Puzzle.

    ERIC Educational Resources Information Center

    Lee, M. C.; Fung, Chun-Kan

    2003-01-01

    Discusses network denial-of-service attacks which have become a security threat to the Internet community and suggests the need for reliable authentication protocols in client-server applications. Presents a public-key based authentication and key establishment protocol coupled with a client puzzle protocol and validates it through formal logic…

  14. Acute tier-1 and tier-2 effect assessment approaches in the EFSA Aquatic Guidance Document: are they sufficiently protective for insecticides?

    PubMed

    van Wijngaarden, René P A; Maltby, Lorraine; Brock, Theo C M

    2015-08-01

    The objective of this paper is to evaluate whether the acute tier-1 and tier-2 methods as proposed by the Aquatic Guidance Document recently published by the European Food Safety Authority (EFSA) are appropriate for deriving regulatory acceptable concentrations (RACs) for insecticides. The tier-1 and tier-2 RACs were compared with RACs based on threshold concentrations from micro/mesocosm studies (ETO-RAC). A lower-tier RAC was considered sufficiently protective if it was lower than the corresponding ETO-RAC. ETO-RACs were calculated for repeated (n = 13) and/or single pulsed applications (n = 17) of 26 insecticides to micro/mesocosms, giving a maximum of 30 insecticide × application combinations (i.e. cases) for comparison. Acute tier-1 RACs (for 24 insecticides) were lower than the corresponding ETO-RACs in 27 out of 29 cases, while tier-2 Geom-RACs (for 23 insecticides) were lower in 24 out of 26 cases. The tier-2 SSD-RAC (for 21 insecticides) using HC5/3 was lower than the ETO-RAC in 23 out of 27 cases, whereas the tier-2 SSD-RAC using HC5/6 was protective in 25 out of 27 cases. The tier-1 and tier-2 approaches proposed by EFSA for acute effect assessment are sufficiently protective for the majority of insecticides evaluated. Further evaluation may be needed for insecticides with more novel chemistries (neonicotinoids, biopesticides) and compounds that show delayed effects (insect growth regulators). © 2014 Society of Chemical Industry.

  15. Web-client based distributed generalization and geoprocessing

    USGS Publications Warehouse

    Wolf, E.B.; Howe, K.

    2009-01-01

    Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web-accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line simplification algorithm was created in the open source OpenLayers geoweb client. This algorithm implementation was benchmarked using differing data structures and browser platforms. The implementation and results of the benchmarks are discussed in the general context of client-side geoprocessing.
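
    The RDP algorithm benchmarked above is compact enough to sketch. A minimal Python re-implementation for illustration (the paper's benchmarked version was JavaScript inside OpenLayers; this is not that code):

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:                      # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line simplification: keep the point farthest
    from the chord joining the endpoints if it exceeds epsilon, and recurse
    on the two halves; otherwise collapse the run to its endpoints."""
    if len(points) < 3:
        return points[:]
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right                 # avoid duplicating the pivot
    return [points[0], points[-1]]
```

The tolerance epsilon trades fidelity for vertex count, which is exactly the knob a client-side renderer tunes against zoom level.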

  16. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments, or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, and correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access to the CCTOP server is also available, and an example client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. The HydroServer Platform for Sharing Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross-server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts.

  18. ERDDAP - An Easier Way for Diverse Clients to Access Scientific Data From Diverse Sources

    NASA Astrophysics Data System (ADS)

    Mendelssohn, R.; Simons, R. A.

    2008-12-01

    ERDDAP is a new open-source, web-based service that aggregates data from other web services: OPeNDAP grid servers (THREDDS), OPeNDAP sequence servers (Dapper), NOS SOAP service, SOS (IOOS, OOStethys), microWFS, DiGIR (OBIS, BMDE). Regardless of the data source, ERDDAP makes all datasets available to clients via standard (and enhanced) DAP requests and makes some datasets accessible via WMS. A client's request also specifies the desired format for the results, e.g., .asc, .csv, .das, .dds, .dods, htmlTable, XHTML, .mat, netCDF, .kml, .png, or .pdf (formats more directly useful to clients). ERDDAP interprets a client request, requests the data from the data source (in the appropriate way), reformats the data source's response, and sends the result to the client. Thus ERDDAP makes data from diverse sources available to diverse clients via standardized interfaces. Clients don't have to install libraries to get data from ERDDAP because ERDDAP is RESTful and resource-oriented: a URL completely defines a data request and the URL can be used in any application that can send a URL and receive a file. This also makes it easy to use ERDDAP in mashups with other web services. ERDDAP could be extended to support other protocols. ERDDAP's hub and spoke architecture simplifies adding support for new types of data sources and new types of clients. ERDDAP includes metadata management support, catalog services, and services to make graphs and maps.
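
    The "a URL completely defines a data request" idea can be made concrete with a short sketch. The base URL, dataset ID, and subset constraint below are hypothetical examples, and the griddap-style path is only one of ERDDAP's request forms:

```python
def erddap_url(base, dataset_id, response_ext, query):
    """Compose a RESTful ERDDAP-style request: a single URL names the
    dataset, the subset/constraint query, and the desired response format
    (chosen by file extension, e.g. .csv, .nc, .png)."""
    return f"{base}/griddap/{dataset_id}.{response_ext}?{query}"

# Hypothetical dataset and constraint, for illustration only: ask for a
# sea-surface-temperature slice as CSV. Any URL-capable client can use this.
url = erddap_url("https://example.org/erddap", "sstDemo", "csv",
                 "sst[(2008-12-01)][(30.0):(40.0)][(-130.0):(-120.0)]")
```

Because the request is just a URL, the same string works in a browser, in curl, or embedded in a mashup, which is the RESTful property the abstract highlights.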

  19. GenExp: an interactive web-based genomic DAS client with client-side data rendering.

    PubMed

    Gel Moreno, Bernat; Messeguer Peypoch, Xavier

    2011-01-01

    The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a representation useful to the user. While web-based DAS clients exist, most of them lack direct interaction capabilities such as dragging and zooming with the mouse. Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data, zooming out from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not need a network request to the server, so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just like in geographical map applications. Additionally, in GenExp it is possible to have more than one data viewer open at the same time and to save the current state of the application to revisit it later on. GenExp is a new interactive web-based client for DAS and addresses some of the shortcomings of the existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and is freely available at http://gralggen.lsi.upc.edu/recerca/genexp.

  20. GenExp: An Interactive Web-Based Genomic DAS Client with Client-Side Data Rendering

    PubMed Central

    Gel Moreno, Bernat; Messeguer Peypoch, Xavier

    2011-01-01

    Background The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available and the number is steadily increasing. Clients are an essential part of the DAS system and integrate data from several independent sources in order to create a representation useful to the user. While web-based DAS clients exist, most of them lack direct interaction capabilities such as dragging and zooming with the mouse. Results Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data, zooming out from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not need a network request to the server, so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just like in geographical map applications. Additionally, in GenExp it is possible to have more than one data viewer open at the same time and to save the current state of the application to revisit it later on. Conclusions GenExp is a new interactive web-based client for DAS and addresses some of the shortcomings of the existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and is freely available at http://gralggen.lsi.upc.edu/recerca/genexp. PMID:21750706

  1. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on use, in an automated way. For several years Quattor [2], a server provisioning tool, has been used at the Tier1 and is currently in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and also offer a fully customizable solution as an alternative to Quattor. Our current choice fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system features: implement a system check for storage-specific constraints, such as a kernel-module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability in the INFN-T1 environment.

  2. The N-policy for an unreliable server with delaying repair and two phases of service

    NASA Astrophysics Data System (ADS)

    Choudhury, Gautam; Ke, Jau-Chuan; Tadj, Lotfi

    2009-09-01

    This paper deals with an M[X]/G/1 queue with an additional second phase of optional service and an unreliable server, which consists of a breakdown period and a delay period under N-policy. While the server is working on any phase of service, it may break down at any instant, and the service channel will then fail for a short interval of time. The concept of delay time is also introduced. If no customer arrives during the breakdown period, the server remains idle in the system until the queue size builds up to a threshold value N. As soon as the queue size reaches at least N, the server immediately begins the first phase of regular service for all waiting customers. After completion of this phase, only some of them receive the second phase of optional service. We derive the queue-size distribution at a random epoch and at a departure epoch, as well as various system performance measures. Finally, we derive a simple procedure to obtain the optimal stationary policy under a suitable linear cost structure.
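
    The N-policy threshold behaviour described above can be illustrated with a deterministic toy sketch: the idle server waits until N customers have accumulated, then serves everyone present before idling again. The function and its interface are hypothetical illustrations (single phase, fixed service time, no breakdowns), not the paper's stochastic model:

```python
def n_policy_activations(arrival_times, service_time, N):
    """Return the times at which an N-policy server wakes up.
    The server idles until the queue reaches N, then serves the whole
    waiting batch back-to-back and returns to idle."""
    queue, busy_until, activations = [], 0.0, []
    for a in sorted(arrival_times):
        queue.append(a)
        if len(queue) >= N:
            start = max(a, busy_until)       # wake when the N-th customer is in
            activations.append(start)
            busy_until = start + service_time * len(queue)
            queue.clear()                    # batch served; server idles again
    return activations
```

With arrivals at t = 1..6, service time 0.5 and N = 3, the server activates twice, at t = 3 and t = 6, which mirrors the "build up to N, then serve all waiting customers" rule.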

  3. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    PubMed

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic-style presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of an object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and results are given back to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available at http://compbio.clemson.edu/delphi_webserver.

  4. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    NASA Astrophysics Data System (ADS)

    Brun, R.; Duellmann, D.; Ganis, G.; Hanushevsky, A.; Janyst, L.; Peters, A. J.; Rademakers, F.; Sindrilaru, E.

    2011-12-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and showed significant performance with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  5. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, R.; Dullmann, D.; Ganis, G.

    2012-04-19

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and showed significant performance with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyze the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  6. Adaptive proxy map server for efficient vector spatial data rendering

    NASA Astrophysics Data System (ADS)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching, to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in case distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by an application creating map images enriched with earthquake seismic data records.
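
    The proxy's two ideas, an intercepting cache plus parallel fetches fanned out across replicas, can be sketched as follows. The class and its interface are hypothetical illustrations of the pattern, not the article's implementation (which parallelizes by spatial proximity and data density rather than simple round-robin):

```python
from concurrent.futures import ThreadPoolExecutor

class CachingProxy:
    """Sit between clients and replicated back-end map servers:
    serve repeated requests from an in-memory cache, and fetch cache
    misses from the replicas in parallel."""

    def __init__(self, fetch, workers=4):
        self.fetch = fetch            # fetch(replica, key) -> payload
        self.cache = {}
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def get_many(self, replicas, keys):
        misses = [k for k in keys if k not in self.cache]
        # Fan the misses out across replicas, round-robin for simplicity.
        futures = {
            k: self.pool.submit(self.fetch, replicas[i % len(replicas)], k)
            for i, k in enumerate(misses)
        }
        for k, f in futures.items():
            self.cache[k] = f.result()
        return {k: self.cache[k] for k in keys}
```

A second request for the same keys never reaches the back end, which is the load-reduction effect the abstract claims for repeated map views.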

  7. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology, featuring centralised and departmental clinical information systems as the back-end store for all clinical data, are used. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin-client' workstations. The main characteristics are the focus on modelling and reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.

  8. On implementation of DCTCP on three-tier and fat-tree data center network topologies.

    PubMed

    Zafar, Saima; Bashir, Abeer; Chaudhry, Shafique Ahmad

    2016-01-01

    A data center is a facility for housing computational and storage systems interconnected through a communication network called the data center network (DCN). Due to tremendous growth in computational power, storage capacity and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability and scalability. Although transmission control protocol (TCP) is a time-tested transport protocol in the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted researchers to propose techniques to improve TCP performance or design new transport protocols for DCN. Data center TCP (DCTCP) emerged as one of the most promising solutions in this domain; it employs the explicit congestion notification (ECN) feature of TCP to enhance the TCP congestion control algorithm. While DCTCP has been analyzed for a two-tier tree-based DCN topology with traffic between servers in the same rack, which is common in cloud applications, it remains oblivious to the traffic patterns common in university and private enterprise networks, which traverse the complete network interconnect spanning upper-tier layers. We also recognize that DCTCP performance cannot remain unaffected by the underlying DCN architecture, hence there is a need to test and compare DCTCP performance when implemented over diverse DCN architectures. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this research, we simulate the two switch-centric DCN architectures, the widely deployed legacy three-tier architecture and the promising fat-tree architecture, using a network simulator and analyze the performance of DCTCP in terms of throughput and delay for realistic traffic patterns. We also examine how DCTCP prevents incast and outcast congestion when realistic DCN traffic patterns are employed in the above-mentioned topologies. Our results show that the underlying DCN architecture
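
    DCTCP's use of ECN reduces to two lines of arithmetic: the sender keeps an exponentially weighted moving average α of the fraction F of ECN-marked packets per window (with gain g, 1/16 in the original DCTCP work), then cuts cwnd in proportion to α instead of halving it on any mark. A minimal sketch of just that estimator:

```python
def dctcp_alpha_update(alpha, marked, total, g=1 / 16):
    """EWMA of the per-window marked fraction:
    alpha <- (1 - g) * alpha + g * F, where F = marked / total."""
    return (1 - g) * alpha + g * (marked / total)

def dctcp_cwnd_cut(cwnd, alpha):
    """Proportional reduction: cwnd <- cwnd * (1 - alpha / 2).
    alpha near 0 barely cuts the window; alpha = 1 halves it,
    matching classic TCP's reaction to heavy congestion."""
    return cwnd * (1 - alpha / 2)
```

This is why DCTCP keeps switch queues short without sacrificing throughput: mild congestion (small α) triggers only a mild window reduction.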

  9. Solid waste information and tracking system server conversion project management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAY, D.L.

    1999-04-12

    This is the Project Management Plan (PMP) governing the conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. The PMP describes the background, planning and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.

  10. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  11. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    PubMed

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via http protocol. The web server takes advantage of MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language- and platform-independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded that provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed
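The remote-procedure-call mode of access described in this abstract can be sketched in miniature. The example below substitutes Python's standard xmlrpc machinery for CORBA, and the method name gibbs_energy and its stub formula are hypothetical, not part of the actual CTserver interface:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def gibbs_energy(phase: str, temperature_k: float, pressure_bar: float) -> float:
    # Stand-in for a real thermodynamic model evaluation (hypothetical formula).
    return -1000.0 - 0.5 * temperature_k + 0.01 * pressure_bar

# Server side: expose the routine on a local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(gibbs_energy)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a remote procedure call to the server's public interface.
client = ServerProxy(f"http://127.0.0.1:{port}")
g = client.gibbs_energy("forsterite", 1400.0, 10000.0)
server.shutdown()
```

The request/response shape is the same regardless of transport: the client holds a proxy and invokes the server's public interface as if it were a local function.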

  13. "MedTRIS" (Medical Triage and Registration Informatics System): A Web-based Client Server System for the Registration of Patients Being Treated in First Aid Posts at Public Events and Mass Gatherings.

    PubMed

    Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe

    2016-10-01

First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information with regard to FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention when necessary; allows more accurate prediction of the resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.

  14. An innovative middle tier design for protecting federal privacy act data

    NASA Astrophysics Data System (ADS)

    Allen, Thomas G. L.

    2008-03-01

    This paper identifies an innovative middle tier technique and design that provides a solid layer of network security for a single source of human resources (HR) data that falls under the Federal Privacy Act. The paper also discusses functionality for both retrieving data and updating data in a secure way. It will be shown that access to this information is limited by a security mechanism that authorizes all connections based on both application (client) and user information.

  15. Design of Accelerator Online Simulator Server Using Structured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Guobao (Brookhaven); Chu, Chungming

    2012-07-06

    Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.

  16. Ensuring Safety of Navigation: A Three-Tiered Approach

    NASA Astrophysics Data System (ADS)

    Johnson, S. D.; Thompson, M.; Brazier, D.

    2014-12-01

    The primary responsibility of the Hydrographic Department at the Naval Oceanographic Office (NAVOCEANO) is to support US Navy surface and sub-surface Safety of Navigation (SoN) requirements. These requirements are interpreted, surveys are conducted, and accurate products are compiled and archived for future exploitation. For a number of years NAVOCEANO has employed a two-tiered database structure to support SoN. The first tier (Data Warehouse, or DWH) provides access to the full-resolution sonar and lidar data. DWH preserves the original data such that a product of any scale can be built. The second tier (Digital Bathymetric Database - Variable resolution, or DBDB-V) served as the final archive for SoN chart-scale, gridded products compiled from source bathymetry. DBDB-V has been incorporated into numerous DoD tactical decision aids and serves as the foundation bathymetry for ocean modeling. With the evolution of higher density survey systems and the addition of high-resolution gridded bathymetry product requirements, a two-tiered model did not provide an efficient solution for SoN. The two-tiered approach required scientists to exploit full-resolution data in order to build any higher resolution product. A new perspective on the archival and exploitation of source data was required. This new perspective has taken the form of a third tier, the Navigation Surface Database (NSDB). NSDB is an SQLite relational database populated with International Hydrographic Organization (IHO) S-102 compliant Bathymetric Attributed Grids (BAGs). BAGs archived within NSDB are developed at the highest resolution that the collection sensor system can support and contain nodal estimates for depth, uncertainty, separation values and metadata. Gridded surface analysis efforts culminate in the generation of the source resolution BAG files and their storage within NSDB. Exploitation of these resources eliminates the time and effort needed to re-grid and re-analyze native source file formats.

  17. High-Performance Tiled WMS and KML Web Server

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
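As an illustration of the kind of request the module serves, here is a minimal sketch of building an OGC WMS 1.1.1 GetMap URL for one tile of a request grid; the endpoint and layer name are made up for the example:

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width, height, fmt="image/jpeg"):
    """Build an OGC WMS 1.1.1 GetMap request URL for one tile."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326", "STYLES": "",
        "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
        "WIDTH": str(width), "HEIGHT": str(height), "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "global_mosaic",
                     (-180, -90, 180, 90), 512, 256)
```

A tiled server like the one described answers such requests fastest when the BBOX and WIDTH/HEIGHT fall exactly on its precomputed tile grid.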

  18. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
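The abstract does not give the hybrid queueing model itself; as a hedged illustration of the general idea of sizing the number of virtual machines per tier against a service-level target, the sketch below uses the textbook M/M/m (Erlang C) queue rather than the paper's model:

```python
import math

def erlang_c(m: int, a: float) -> float:
    """Probability that an arriving request must queue in an M/M/m
    system with m servers and offered load a = lambda/mu (Erlang C)."""
    rho = a / m
    if rho >= 1.0:
        return 1.0  # overloaded: every request queues
    s = sum(a**k / math.factorial(k) for k in range(m))
    top = a**m / math.factorial(m) / (1.0 - rho)
    return top / (s + top)

def vms_needed(arrival_rate: float, service_rate: float,
               max_queue_prob: float) -> int:
    """Smallest number of identical VMs keeping the probability of
    queueing below the SLA target."""
    a = arrival_rate / service_rate
    m = max(1, math.ceil(a))
    while erlang_c(m, a) > max_queue_prob:
        m += 1
    return m

# e.g. 90 req/s arriving, each VM serves 10 req/s, at most 20% may queue
m = vms_needed(90.0, 10.0, 0.2)
```

Running such a calculation per tier, under each tier's own arrival and service rates, is one simple way to decide how many VMs each tier receives.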

  19. Tier Two Interventions Implemented within the Context of a Tiered Prevention Framework

    ERIC Educational Resources Information Center

    Mitchell, Barbara S.; Stormont, Melissa; Gage, Nicholas A.

    2011-01-01

    Despite a growing body of evidence demonstrating the value of Tier 1 and Tier 3 interventions, significantly less is known about Tier 2 level treatments when they are added within the context of a tiered continuum of support. The purpose of this article is to systematically review the existing research base for Tier 2 small group intervention…

  20. Using NetCloak to develop server-side Web-based experiments without writing CGI programs.

    PubMed

    Wolfe, Christopher R; Reyna, Valerie F

    2002-05-01

    Server-side experiments use the Web server, rather than the participant's browser, to handle tasks such as random assignment, eliminating inconsistencies with Java and other client-side applications. Heretofore, experimenters wishing to create server-side experiments have had to write common gateway interface (CGI) scripts in programming languages such as Perl and C++. NetCloak uses simple, HTML-like commands to create CGIs. We used NetCloak to implement an experiment on probability estimation. Measurements of time on task and participants' IP addresses assisted quality control. Without prior training, in less than 1 month, we were able to use NetCloak to design and create a Web-based experiment and to help graduate students create three Web-based experiments of their own.
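Server-side random assignment of the kind NetCloak enables can be sketched as follows; this is not NetCloak's command syntax, just an illustration of keeping assignment state on the server so group sizes stay balanced:

```python
import random

def assign_condition(conditions, counts):
    """Assign the next participant to the least-filled condition,
    breaking ties at random, so group sizes stay balanced.
    The counts dict lives on the server, not in the browser."""
    fewest = min(counts[c] for c in conditions)
    choice = random.choice([c for c in conditions if counts[c] == fewest])
    counts[choice] += 1
    return choice

counts = {"control": 0, "treatment": 0}
groups = [assign_condition(["control", "treatment"], counts) for _ in range(10)]
```

Because the state lives on the server, every participant sees the same assignment logic regardless of browser quirks.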

  1. Contractor-Client Communications Checklist for Spray Polyurethane Foam (SPF), Incluyendo la Versión de Español

    EPA Pesticide Factsheets

    This checklist provides professional contractors and clients topics to discuss so that the client understands what to expect when a professional contractor installs SPF insulation. A contractor and client communication checklist, also available in Spanish.

  2. The EarthServer Geology Service: web coverage services for geosciences

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2014-05-01

    The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric; atmospheric; oceanographic; planetary; and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remote sensed data from satellites and planes for some considerable time, but other areas of geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remote sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six band Landsat 7 (Blue, Green, Red, NIR 1, NIR 2, MIR) and three band false colour aerial photography (NIR, green, blue), totalling around 1 TB. Increasingly 3D spatial models are being produced in place of traditional geological maps. Models make explicit spatial information implicit on maps and thus are seen as a better way of delivering geosciences information to non-geoscientists. However web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. As well as remote sensed
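Contrast stretching of the kind the service offers can be sketched as a simple percentile-based linear stretch on a single band (pure Python and illustrative only; a production service would operate on whole rasters):

```python
def linear_stretch(band, low_pct=2.0, high_pct=98.0):
    """Rescale one band so the given percentiles map to 0..255,
    clipping outliers (a simple linear contrast stretch)."""
    ranked = sorted(band)
    lo = ranked[int(len(ranked) * low_pct / 100.0)]
    hi = ranked[min(len(ranked) - 1, int(len(ranked) * high_pct / 100.0))]
    span = max(hi - lo, 1e-9)  # avoid division by zero on flat bands
    return [round(255.0 * min(max((v - lo) / span, 0.0), 1.0)) for v in band]

# A small strip of pixel values with one bright outlier
stretched = linear_stretch([10, 12, 15, 20, 200, 11, 13, 14, 16, 18])
```

Applying such a stretch per selected band, server-side, is what lets a user preview imagery quickly before committing to a full download.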

  3. Assessment of the concordance among 2-tier, 3-tier, and 5-tier fetal heart rate classification systems.

    PubMed

    Gyamfi Bannerman, Cynthia; Grobman, William A; Antoniewicz, Leah; Hutchinson, Maria; Blackwell, Sean

    2011-09-01

    In 2008, a National Institute of Child Health and Human Development/Society for Maternal-Fetal Medicine-sponsored workshop on electronic fetal monitoring recommended a new fetal heart tracing interpretation system. Comparison of this 3-tier system with other systems is lacking. Our purpose was to determine the relationships between fetal heart rate categories for the 3 existing systems. Three Maternal-Fetal Medicine specialists reviewed 120 fetal heart rate tracings. All tracings were from term, singleton pregnancies with known umbilical artery pH. The fetal heart rates were classified by a 2-tier, 3-tier, and 5-tier system. Each Maternal-Fetal Medicine examiner reviewed 120 fetal heart rate segments. When compared with the 2-tier system, 0%, 54%, and 100% of tracings in categories 1, 2, and 3 were "nonreassuring." There was strong concordance between category 1 and "green" as well as category 3 and "red" tracings. The 3-tier and 5-tier systems were similar in fetal heart rate interpretations for tracings that were either very normal or very abnormal. Whether one system is superior to the others in predicting fetal acidemia remains unknown. Copyright © 2011 Mosby, Inc. All rights reserved.
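Concordance between two interpretation systems applied to the same tracings is commonly quantified with percent agreement and Cohen's kappa; the sketch below uses made-up category labels, not the study's data:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two classification systems
    applied to the same set of items."""
    n = len(ratings_a)
    observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical category labels for six tracings under two readings
first = ["cat1", "cat2", "cat2", "cat3", "cat1", "cat2"]
second = ["cat1", "cat2", "cat1", "cat3", "cat1", "cat2"]
kappa = cohen_kappa(first, second)
```

Kappa near 1 indicates strong concordance (as reported here between category 1 and "green" and between category 3 and "red"); kappa near 0 indicates agreement no better than chance.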

  4. Prototype of Multifunctional Full-text Library in the Architecture Web-browser / Web-server / SQL-server

    NASA Astrophysics Data System (ADS)

    Lyapin, Sergey; Kukovyakin, Alexey

    Within the framework of the research program "Textaurus", an operational prototype of the multifunctional library T-Libra v.4.1 has been created which makes it possible to carry out flexible, parametrizable search within a full-text database. The information system is realized in the architecture Web-browser / Web-server / SQL-server. This achieves an optimal combination of universality and efficiency of text processing on the one hand, and convenience and minimal expense for the end user (owing to the use of a standard Web-browser as the client application) on the other. The following principles underlie the information system: a) multifunctionality, b) intelligence, c) multilingual primary texts and full-text searching, d) development of the digital library (DL) by a user (the "administrative client"), e) multi-platform operation. A "library of concepts", i.e. a block of functional models of semantic (concept-oriented) searching, together with a closely connected subsystem of parametrizable queries to the full-text database, serves as the conceptual basis of the multifunctionality and "intelligence" of the DL T-Libra v.4.1. The author's paragraph is the unit of full-text searching in the suggested technology. Moreover, the "logic" of an educational or scientific topic or problem can be built into a multilevel, flexible query structure and the "library of concepts", which developers and experts can replenish. About 10 queries of various levels of complexity and conceptuality are realized in this version of the information system: from simple terminological searching (taking into account the lexical and grammatical paradigms of Russian) to several kinds of explication of terminological fields and adjustable two-parameter thematic searching (the parameters being a [set of terms] and a [distance between terms] within an author's paragraph, respectively).

  5. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client-server framework enabled the acquired 3D treatment room to be visualized in real time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8-40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
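The reported frame rates follow roughly from dividing link bandwidth by encoded frame size; a back-of-envelope sketch (the frame size below is an assumed figure, not from the paper):

```python
def achievable_fps(bandwidth_mbps: float, frame_kbit: float) -> float:
    """Upper bound on frame rate when the encoded stereo stream must
    fit the link: fps <= link bits/s divided by bits per frame."""
    return bandwidth_mbps * 1000.0 / frame_kbit

# Assuming encoded stereo frames averaging ~2500 kbit on a 100 Mbps link
fps = achievable_fps(100.0, 2500.0)
```

Under that assumption a 100 Mbps link caps out at 40 frames per second, consistent with the upper end of the 8-40 fps range the abstract reports for remote experts.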

  6. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    NASA Astrophysics Data System (ADS)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to contextualize how PlanetServer can help in these matters. The web client is thoroughly described, as are the datasets available in PlanetServer. The Python API is also described, along with the rationale for its development. Two examples of mineral characterization of different hydrosilicates, such as chlorites, prehnites and kaolinites, in the Nili Fossae area on Mars are presented. As the results show a positive outcome in hyperspectral analysis and visualization compared to previous literature, we suggest using the PlanetServer approach for such investigations.

  7. CAG - computer-aid-georeferencing, or rapid sharing, restructuring and presentation of environmental data using remote-server georeferencing for the GE clients. Educational and scientific implications.

    NASA Astrophysics Data System (ADS)

    Hronusov, V. V.

    2006-12-01

    We suggest a method of using external public servers for rearranging, restructuring and rapid sharing of environmental data for the purpose of quick presentation in numerous GE clients. The method introduces a new philosophy for the presentation (publication) of data (mostly static) stored in the public domain (e.g., Blue Marble, Visible Earth, etc.). The new approach works by publishing freely accessible spreadsheets which contain enough information and links to the data. Because most large depositories of environmental monitoring data have a rather simple net address system as well as a simple hierarchy, mostly based on the date and type of the data, it is possible to construct the HTTP-based link to the file which contains the data. Publication of new data on the server is recorded by simply entering a new address into a cell in the spreadsheet. At the moment we use the EditGrid (www.editgrid.com) system as the spreadsheet platform. The generation of KML code is achieved on the basis of XML data and XSLT procedures. Since the EditGrid environment supports "fetch" and similar commands, it is possible to create "smart-adaptive" KML generation on the fly based on data streams from RSS and XML sources. Previous GIS-based methods could combine high-definition data from various sources, but large-scale comparisons of dynamic processes have usually been out of reach of the technology. The suggested method allows an unlimited number of GE clients to view, review and compare dynamic and static processes from previously un-combinable sources, and on unprecedented scales. The ease of automated or computer-assisted georeferencing has already led to the translation of about 3000 raster public-domain imagery, point and linear data sources into the GE language. In addition, the suggested method allows a user to create rapid animations to demonstrate dynamic processes; products in high demand in education, meteorology, volcanology and
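The spreadsheet-to-KML step can be sketched as a transform from tabular records to Placemark elements; this hand-rolled version stands in for the XSLT pipeline the abstract describes, with made-up field names and values:

```python
def rows_to_kml(rows):
    """Turn (name, lon, lat, url) spreadsheet records into a minimal
    KML document of Placemarks linking back to the source data."""
    placemarks = []
    for name, lon, lat, url in rows:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{name}</name>\n"
            f'    <description><![CDATA[<a href="{url}">source</a>]]></description>\n'
            f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
            "  </Placemark>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>")

kml = rows_to_kml([("Station A", 158.83, 53.04, "https://example.org/a.png")])
```

Adding a row to the spreadsheet corresponds to one more Placemark in the generated KML, which is exactly the low-friction publication model the method relies on.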

  8. How to securely replicate services (preliminary version)

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth

    1992-01-01

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by 'n' servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter, k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires that fewer than k servers are corrupt and, to ensure liveness, that k is no greater than n - 2t, where t is the assumed maximum total number of both corruptions and benign failures suffered by servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service.
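The client-side acceptance rule, accept a value once at least k servers report it, can be sketched as follows (hypothetical response strings; the real protocol also handles causality and key management):

```python
from collections import Counter

def accept_response(responses, k):
    """Accept the value reported by at least k servers; otherwise
    return None. No individual server needs to be authenticated."""
    value, count = Counter(responses).most_common(1)[0]
    return value if count >= k else None

# n = 5 servers, threshold k = 3: two corrupted replies cannot win
replies = ["balance=42", "balance=42", "balance=42", "balance=99", "balance=0"]
result = accept_response(replies, 3)
```

With fewer than k corrupt servers, no fabricated value can reach the threshold, so any accepted value must have been reported by at least one correct server.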

  9. A Two-Tier Multiple Choice Questions to Diagnose Thermodynamic Misconception of Thai and Laos Students

    NASA Astrophysics Data System (ADS)

    Kamcharean, Chanwit; Wattanakasiwich, Pornrat

    The objective of this study was to diagnose misconceptions of Thai and Lao students in thermodynamics by using a two-tier multiple-choice test. Two-tier multiple-choice questions consist of a first tier, a content-based question, and a second tier, a reasoning-based question. Data on student understanding were collected by using 10 two-tier multiple-choice questions. Thai participants were first-year students (N = 57) taking a fundamental physics course at Chiang Mai University in 2012. Lao participants were high school students in Grade 11 (N = 57) and Grade 12 (N = 83) at Muengnern high school in Xayaboury province, Lao PDR. The results showed that most students answered content-tier questions correctly but chose incorrect answers for reason-tier questions. When further investigating their incorrect reasons, we found misconceptions similar to those reported in previous studies, such as incorrectly relating pressure with temperature when presented with multiple variables.

  10. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PC) and Microsoft Windows NT], presents a very scaleable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte with 10+ years storage) and patient data retrieval times at near on-line performance as demanded by radiologists. With the scaleable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) or those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape medium. Clustering of patient data on the same tape eliminates multiple tape loads and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drives' high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
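The clustering step performed by the migration manager can be sketched as a group-by on patient ID before data is written to tape (hypothetical record layout, not the system's actual data model):

```python
from itertools import groupby

def plan_tape_migration(studies):
    """Cluster studies by patient ID before writing to tape, so one
    tape load can retrieve a whole patient's data."""
    ordered = sorted(studies, key=lambda s: s["patient_id"])
    return {pid: [s["study"] for s in grp]
            for pid, grp in groupby(ordered, key=lambda s: s["patient_id"])}

plan = plan_tape_migration([
    {"patient_id": "P2", "study": "CT-1"},
    {"patient_id": "P1", "study": "MR-1"},
    {"patient_id": "P2", "study": "CT-2"},
])
```

Writing each patient's group contiguously is what lets a later retrieval stream the data in one pass instead of seeking across multiple tape loads.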

  11. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionality of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues, and on other resources in general, are then described. A noteworthy SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  12. Tier identification (TID) for tiered memory characteristics

    DOEpatents

    Chang, Jichuan; Lim, Kevin T; Ranganathan, Parthasarathy

    2014-03-25

    A tier identification (TID) is to indicate a characteristic of a memory region associated with a virtual address in a tiered memory system. A thread may be serviced according to a first path based on the TID indicating a first characteristic. The thread may be serviced according to a second path based on the TID indicating a second characteristic.
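The two-path dispatch the claims describe can be sketched as a simple branch on the tier ID (an illustrative two-tier case, not the patented mechanism itself):

```python
def service_request(tid: int, request: str) -> str:
    """Dispatch a memory request along one of two paths depending on
    the tier ID of the region it touches (illustrative two-tier case)."""
    FAST_TIER = 0  # e.g. local DRAM
    if tid == FAST_TIER:
        return "direct-access:" + request
    return "staged-access:" + request  # e.g. slower far-memory tier

path = service_request(1, "read:0x7f3a")
```

The point of the TID is that the path choice is driven by a per-region tag rather than by the virtual address itself.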

  13. Interfacing a high performance disk array file server to a Gigabit LAN

    NASA Technical Reports Server (NTRS)

    Seshan, Srinivasan; Katz, Randy H.

    1993-01-01

    Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, Xbus board, that provides a 40MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to carefully and efficiently design the network software. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.

  14. A Forecast Skill Comparison between CliPAS One-Tier and Two-Tier Hindcast Experiments

    NASA Astrophysics Data System (ADS)

    Lee, J.; Wang, B.; Kang, I.

    2006-05-01

    A 24-year (1981-2004) MME hindcast experimental dataset was produced under the "Climate Prediction and Its Application to Society" (CliPAS) project sponsored by the Korean Meteorological Administration (KMA). This dataset consists of 5 one-tier model systems from the National Aeronautics and Space Administration (NASA), the National Center for Environmental Prediction (NCEP), the Frontier Research Center for Global Change (FRCGC), Seoul National University (SNU), and the University of Hawaii (UH), and 5 two-tier model systems from Florida State University (FSU), the Geophysical Fluid Dynamic Lab (GFDL), SNU, and UH. Multi-model ensemble (MME) forecast skill is compared between the CliPAS one-tier and two-tier hindcast experiments for seasonal mean precipitation and atmospheric circulation. For winter prediction, the two-tier MME has skill comparable to the one-tier MME. However, it is demonstrated that in the heavy-precipitation regions of the Asian-Australian monsoon (A-AM), one-tier systems are superior to two-tier systems in the summer season. The reason is that inclusion of the local warm pool-monsoon interaction in the one-tier system improves the ENSO teleconnection with monsoon regions. Both the one-tier and two-tier MME fail to predict the Indian monsoon circulation, while they have significantly good skill for the broad-scale monsoon circulation defined by the Webster and Yang index. The one-tier system has much better skill than the two-tier system in predicting the monsoon circulation over the western North Pacific, where air-sea interaction plays an important role.

  15. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    NASA Astrophysics Data System (ADS)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered an optimal standard for representing 3D city models. However, international experience has shown that visualization of such models is quite difficult to implement on the web, due to the large size of the data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as informed by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/PostGIS database is used, in compliance with the 3DCityDB schema. At the server tier, Apache HTTP Server and GeoServer are utilized, together with the server-side programming language PHP. At the client tier, which implements the interface of the application, the following technologies are used: jQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open-source development.

  16. Web Service for Positional Quality Assessment: the Wps Tier

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2015-08-01

    In the field of spatial data, more and more information becomes available every day, yet we still have little or no information about the quality of that spatial data. We consider that automating the assessment of spatial data quality is a true need for the geomatics sector, and that this automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the client uploading two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the positional accuracy value under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
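
    A minimal sketch of the processing step described above: pairing evaluation points with their nearest reference points and computing the NSSDA horizontal accuracy statistic (1.7308 × RMSE_r at the 95% level, assuming RMSE_x ≈ RMSE_y). The sample coordinates and the distance threshold are illustrative assumptions, not the paper's data.

    ```python
    import math

    def nearest_pairs(reference, evaluated, max_dist=5.0):
        """Pair each evaluated point with its nearest reference point (hypothetical threshold)."""
        pairs = []
        for ex, ey in evaluated:
            rx, ry = min(reference, key=lambda p: math.hypot(p[0] - ex, p[1] - ey))
            if math.hypot(rx - ex, ry - ey) <= max_dist:
                pairs.append(((rx, ry), (ex, ey)))
        return pairs

    def nssda_horizontal(pairs):
        """NSSDA horizontal accuracy at 95%: 1.7308 * RMSE_r, assuming RMSE_x ~ RMSE_y."""
        sq = [(rx - ex) ** 2 + (ry - ey) ** 2 for (rx, ry), (ex, ey) in pairs]
        rmse_r = math.sqrt(sum(sq) / len(sq))
        return 1.7308 * rmse_r

    reference = [(0, 0), (10, 0), (0, 10), (10, 10)]    # surveyed control points
    evaluated = [(0.3, 0.4), (10.0, 0.5), (0.0, 9.7), (10.4, 10.3)]  # dataset under test
    pairs = nearest_pairs(reference, evaluated)
    print(round(nssda_horizontal(pairs), 3))  # → 0.793
    ```

    Wrapped behind a WPS Execute operation, a routine like this is what turns the uploaded datasets into the small accuracy report returned to the client.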

  17. Tele-healthcare for diabetes management: A low cost automatic approach.

    PubMed

    Benaissa, M; Malik, B; Kanakis, A; Wright, N P

    2012-01-01

    In this paper, a telemedicine system for providing better care to diabetic patients is presented. The system is an end-to-end solution that relies on the integration of a front end (patient unit) and a back-end web server. A key feature of the developed system is its very low-cost automated approach. The front end is capable of reading glucose measurements from any glucose meter and sending them automatically via existing networks to the back-end server. The back end is designed and developed as an n-tier web architecture based on the model-view-controller design pattern, using open-source technology as a cost-effective solution. The back end helps the health-care provider with data analysis, data visualization and decision support, and allows them to send feedback and therapeutic advice to patients from anywhere using a browser-enabled device. This system will be evaluated during trials to be conducted in a phased manner in collaboration with a local hospital.
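
    The model-view-controller split described above can be sketched as follows; the class names, units, and report format are hypothetical stand-ins, not taken from the system itself.

    ```python
    class GlucoseModel:
        """Model: stores glucose readings (mmol/L) uploaded from patient units."""
        def __init__(self):
            self.readings = []
        def add(self, patient_id, value):
            self.readings.append((patient_id, value))
        def for_patient(self, patient_id):
            return [v for pid, v in self.readings if pid == patient_id]

    class GlucoseView:
        """View: renders data for the provider's browser-enabled device."""
        @staticmethod
        def render(patient_id, values):
            avg = sum(values) / len(values) if values else 0.0
            return f"patient {patient_id}: {len(values)} readings, mean {avg:.1f} mmol/L"

    class GlucoseController:
        """Controller: mediates between incoming requests, the model, and the view."""
        def __init__(self, model, view):
            self.model, self.view = model, view
        def upload(self, patient_id, value):
            self.model.add(patient_id, value)
        def report(self, patient_id):
            return self.view.render(patient_id, self.model.for_patient(patient_id))

    controller = GlucoseController(GlucoseModel(), GlucoseView())
    controller.upload("p1", 5.6)
    controller.upload("p1", 7.2)
    print(controller.report("p1"))  # → patient p1: 2 readings, mean 6.4 mmol/L
    ```

    In the n-tier deployment, the controller would sit behind the web server's request handlers and the model behind a database, but the separation of responsibilities is the same.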

  18. Tank Information System (tis): a Case Study in Migrating Web Mapping Application from Flex to Dojo for Arcgis Server and then to Open Source

    NASA Astrophysics Data System (ADS)

    Pulsani, B. R.

    2017-11-01

    Tank Information System is a web application which provides comprehensive information about the minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In course of time, as Flex became outdated, the client interface was migrated to the latest JavaScript-based technologies. Initially, the Flex-based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To check the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, using published services from GeoServer. The migration pattern noticed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database with ArcGIS Server, so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source over commercial software.

  19. A Client/Server Architecture for Supporting Science Data Using EPICS Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalesio, Leo

    2015-04-21

    The Phase 1 grant that serves as a precursor to this proposal prototyped complex storage techniques for the high-speed structured data being produced in accelerator diagnostics and beam line experiments. It demonstrates the technologies that can be used to archive and retrieve complex data structures and provide the performance required by our new accelerators, instrumentation, and detectors. Phase 2 is proposed to develop a high-performance platform for data acquisition and analysis to give physicists and operators a better understanding of the beam dynamics. This proposal includes developing a platform for reading 109 MHz data at 10 kHz rates through a multicore front-end processor, archiving the data to an archive repository that is then indexed for fast retrieval. The data is then retrieved from this archive and integrated with the scalar data to provide data sets to client applications for analysis, for use in feedback, and to aid in identifying problems with the instrumentation, plant, beam steering, or model. This development is built on EPICS version 4, which is being successfully deployed to implement physics applications. Through prior SBIR grants, EPICS version 4 has a solid communication protocol for middle-layer services (PVAccess), structured data representation and methods for efficient transport and access (PVData), an operational hierarchical record environment (the Java IOC), and prototypes for standard structured data (Normative Types). This work was further developed through project funding to successfully deploy the first service-based physics application environment, with demonstrated services that provide arbitrary object views, save sets, model, lattice, and unit conversion. Thin-client physics applications have been developed in Python that implement quad centering, orbit display, bump control, and slow orbit feedback. This service-based architecture has provided a very modular and robust environment that enables

  20. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges relate to the large size of the data and the complexity of the image processing, which demand high-standard hardware and software that can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  1. PlanetServer: Innovative approaches for the online analysis of hyperspectral satellite data from Mars

    NASA Astrophysics Data System (ADS)

    Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.

    2014-06-01

    PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers JavaScript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
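
    As a rough illustration of the data-querying layer, the sketch below assembles a WCPS-style query string for one spectral band over a spatial subset; the coverage identifier, axis names, and exact query shape are assumptions for illustration, not taken from the PlanetServer deployment.

    ```python
    def wcps_band_subset(coverage, band, x_range, y_range):
        """Build a WCPS ProcessCoverages query selecting one spectral band over a
        spatial subset (coverage and axis names here are hypothetical)."""
        (x0, x1), (y0, y1) = x_range, y_range
        return (
            f"for c in ({coverage}) return encode("
            f'c[band({band}), x({x0}:{x1}), y({y0}:{y1})], "image/png")'
        )

    # Hypothetical CRISM-style coverage name and subset ranges
    query = wcps_band_subset("FRT0000C518", 42, (1000, 1200), (500, 700))
    print(query)
    ```

    The client would POST such a query to the WCPS endpoint and hand the returned raster to the map layer; spectral analysis swaps the single band index for an expression over several bands.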

  2. EarthServer: Cross-Disciplinary Earth Science Through Data Cube Analytics

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Rossi, A. P.

    2016-12-01

    The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet not leveraged potential for getting insights from integrating such diverse datasets and transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer [1] initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client [2] and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman. References: [1] Baumann, et al. (2015) DOI: 10.1080/17538947.2014.1003106 [2] Hogan, P., (2011) NASA World Wind, Proceedings of the 2nd International Conference on Computing for Geospatial Research

  3. FirebrowseR: an R client to the Broad Institute's Firehose Pipeline.

    PubMed

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/) the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programming Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute's RESTful Firehose Pipeline, is provided as a working example, built by means of the presented workflow. The package's features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/. © The Author(s) 2017. Published by Oxford University Press.

  4. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization.

    PubMed

    Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
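
    The state-synchronization idea, a small JSON object representing renderer state shared among all instances, can be sketched as below; the state fields are hypothetical stand-ins for the XTK renderer state, and the wire format is just compact JSON.

    ```python
    import json

    # Hypothetical renderer state: camera position, volume index, and opacity
    local_state = {"camera": [0.0, 0.0, 5.0], "volume": 2, "opacity": 0.8}

    def to_wire(state):
        """Serialize the renderer state to a compact JSON string for the shared data model."""
        return json.dumps(state, separators=(",", ":"), sort_keys=True)

    def apply_remote(renderer_state, wire):
        """Update a remote instance's state from a received JSON payload."""
        renderer_state.update(json.loads(wire))
        return renderer_state

    remote_state = {"camera": [1.0, 1.0, 1.0], "volume": 0, "opacity": 1.0}
    payload = to_wire(local_state)
    synced = apply_remote(remote_state, payload)
    print(len(payload), synced["volume"])
    ```

    Because only this small object crosses the network, every client can re-render its full local dataset immediately, which is what makes the rich-client model responsive.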

  5. Thin Client Architecture for Networking CD-ROMs in a Medium-Sized Public Library System.

    ERIC Educational Resources Information Center

    Turner, Anna

    1997-01-01

    Describes how the Tulsa City-County Library System built a 22-branch CD-ROM-based network with the emerging thin-client/server technology, and succeeded in providing patrons with the most current research tools and information resources available. Discusses costs; networking options; the Citrix WinFrame system used; equipment and connectivity.…

  6. Characteristics and Energy Use of Volume Servers in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchs, H.; Shehabi, A.; Ganeshalingam, M.

    Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.
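
    The idle-power CDF comparison described above can be sketched with a tiny empirical-CDF helper; the wattage samples below are synthetic illustrations of the comparison, not the paper's data.

    ```python
    def ecdf(values):
        """Return the empirical CDF of a sample as sorted (value, fraction<=value) pairs."""
        xs = sorted(values)
        n = len(xs)
        return [(x, (i + 1) / n) for i, x in enumerate(xs)]

    def fraction_at_or_below(values, threshold):
        """Share of servers whose idle power is at or below a given wattage."""
        return sum(1 for v in values if v <= threshold) / len(values)

    # Synthetic idle-power samples in watts (illustrative only)
    energy_star = [45, 50, 55, 60, 65]      # benchmark-style dataset
    typical_stock = [80, 95, 110, 120, 140]  # stand-in for a typical installed base

    print(fraction_at_or_below(energy_star, 60), fraction_at_or_below(typical_stock, 60))
    ```

    Plotting the two `ecdf` curves on shared axes makes the gap between benchmark and typical idle power visible at every percentile, which is the form of comparison the paper draws across its three datasets.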

  7. Technology insertion of a COTS RAID server as an image buffer in the image chain of the Defense Mapping Agency's Digital Production System

    NASA Astrophysics Data System (ADS)

    Mehring, James W.; Thomas, Scott D.

    1995-11-01

    The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the image buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program currently under way. Throughput benchmark measurements were made to support design configuration decisions for a COTS RAID server performing as the system image buffer. The test program began with performance measurements of RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID server and the image archive and processing system, as well as for the client-side interface between a custom interface board connecting the internal bus of the RAID server to the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to serve client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.

  8. Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2012-04-01

    all OGC coverage types. The platform used by EarthServer is the rasdaman raster database system. To exploit heterogeneous multi-parallel platforms, automatic request distribution and orchestration are being established. Client toolkits are under development that will allow bespoke interactive clients to be composed quickly, ranging from mobile devices through Web clients to high-end immersive virtual reality. The EarthServer platform has been deployed in six large-scale data centres with the aim of setting up Lighthouse Applications addressing all Earth Sciences, including satellite and airborne earth observation as well as use cases from atmosphere, ocean, snow, and ice monitoring, and geology on Earth and Mars. These services, each of which will ultimately host at least 100 TB, will form a peer cloud with distributed query processing for arbitrarily mixing database and in-situ access. With its ability to directly manipulate, analyze and remix massive data, the goal of EarthServer is to lift the data providers' semantic level from data stewardship to service stewardship.

  9. FirebrowseR: an R client to the Broad Institute’s Firehose Pipeline

    PubMed Central

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/) the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programming Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute’s RESTful Firehose Pipeline, is provided as a working example, built by means of the presented workflow. The package’s features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/ PMID:28062517

  10. Genomic sequencing in cystic fibrosis newborn screening: what works best, two-tier predefined CFTR mutation panels or second-tier CFTR panel followed by third-tier sequencing?

    PubMed

    Currier, Robert J; Sciortino, Stan; Liu, Ruiling; Bishop, Tracey; Alikhani Koupaei, Rasoul; Feuchtbaum, Lisa

    2017-10-01

    Purpose: The purpose of this study was to model the performance of several known two-tier, predefined mutation panels and three-tier algorithms for cystic fibrosis (CF) screening utilizing the ethnically diverse California population. Methods: The cystic fibrosis transmembrane conductance regulator (CFTR) mutations identified among the 317 CF cases in California screened between 12 August 2008 and 18 December 2012 were used to compare the expected CF detection rates for several two- and three-tier screening approaches, including the current California approach, which consists of a population-specific 40-mutation panel followed by third-tier sequencing when indicated. Results: The data show that the strategy of using third-tier sequencing improves CF detection following an initial elevated immunoreactive trypsinogen and detection of only one mutation on a second-tier panel. Conclusion: In a diverse population, the use of a second-tier panel followed by third-tier CFTR gene sequencing provides a better detection rate for CF, compared with the use of a second-tier approach alone, and is an effective way to minimize the referrals of CF carriers for sweat testing. Restricting second-tier testing to predefined mutation panels, even broad ones, results in some missed CF cases and demonstrates the limited utility of this approach in states that have diverse multiethnic populations.

  11. A Responsive Client for Distributed Visualization

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Jensen, P. A.; Erlebacher, G.; Yuen, D. A.; Momsen, A. R.

    2006-12-01

    As grids, web services and distributed computing continue to gain popularity in the scientific community, demand for virtual laboratories likewise increases. Today organizations such as the Virtual Laboratory for Earth and Planetary Sciences (VLab) are dedicated to developing web-based portals to perform various simulations remotely while abstracting away details of the underlying computation. Two of the biggest challenges in portal-based computing are fast visualization and smooth interrogation without overtaxing client resources. In response to this challenge, we have expanded on our previous data storage strategy and thick-client visualization scheme [1] to develop a client-centric distributed application that utilizes remote visualization of large datasets and makes use of the local graphics processor for improved interactivity. Rather than waste precious client resources on visualization, a combination of 3D graphics and 2D server bitmaps is used to simulate the look and feel of local rendering. Java Web Start and Java Bindings for OpenGL enable install-on-demand functionality as well as low-level access to client graphics on all platforms. Powerful visualization services based on VTK and auto-generated by the WATT compiler [2] are accessible through a standard web API. Data is permanently stored on compute nodes, while separate visualization nodes fetch data requested by clients, caching it locally to prevent unnecessary transfers. We will demonstrate the application's capabilities in the context of simulated charge density visualization within the VLab portal. In addition, we will address generalizations of our application to interact with a wider range of WATT services, and discuss performance bottlenecks.
[1] Ananthuni, R., Karki, B.B., Bollig, E.F., da Silva, C.R.S., Erlebacher, G., "A Web-Based Visualization and Reposition Scheme for Scientific Data," In Press, Proceedings of the 2006 International Conference on Modeling Simulation and Visualization Methods (MSV

  12. Tier-specific evolution of match performance characteristics in the English Premier League: it's getting tougher at the top.

    PubMed

    Bradley, Paul S; Archer, David T; Hogg, Bob; Schuth, Gabor; Bush, Michael; Carling, Chris; Barnes, Chris

    2016-01-01

    This study investigated the evolution of physical and technical performances in the English Premier League (EPL), with special reference to league ranking. Match performance observations (n = 14,700) were collected using a multiple-camera computerised tracking system across seven consecutive EPL seasons (2006-07 to 2012-13). Final league rankings were classified into Tiers: (A) 1st-4th ranking (n = 2519), (B) 5th-8th ranking (n = 2965), (C) 9th-14th ranking (n = 4448) and (D) 15th-20th ranking (n = 4768). Teams in Tier B demonstrated moderate increases in high-intensity running distance while in ball possession from the 2006-07 to 2012-13 season (P < 0.001; effect size [ES]: 0.68), with Tiers A, C and D producing less pronounced increases across the same period (P < 0.005; ES: 0.26, 0.41 and 0.33, respectively). Large increases in sprint distance were observed from the 2006-07 to 2012-13 season for Tier B (P < 0.001; ES: 1.21), while only moderate increases were evident for Tiers A, C and D (P < 0.001; ES: 0.75, 0.97 and 0.84, respectively). Tier B demonstrated large increases in the number of passes performed and received in 2012-13 compared to 2006-07 (P < 0.001; ES: 1.32-1.53) with small-to-moderate increases in Tier A (P < 0.001; ES: 0.30-0.38), Tier C (P < 0.001; ES: 0.46-0.54) and Tier D (P < 0.001; ES: 0.69-0.87). The demarcation line between 4th (bottom of Tier A) and 5th ranking (top of Tier B) in the 2006-07 season was 8 points, but this decreased to just a single point in the 2012-13 season. The data demonstrate that physical and technical performances have evolved more in Tier B than in any other Tier in the EPL and could indicate a narrowing of the performance gap between the top two Tiers.

  13. 40 CFR 87.23 - Exhaust emission standards for Tier 6 and Tier 8 engines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and Tier 8 engines. 87.23 Section 87.23 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM AIRCRAFT AND AIRCRAFT ENGINES Exhaust Emissions (New Aircraft Gas Turbine Engines) § 87.23 Exhaust emission standards for Tier 6 and Tier 8...

  14. 40 CFR 87.23 - Exhaust emission standards for Tier 6 and Tier 8 engines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and Tier 8 engines. 87.23 Section 87.23 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM AIRCRAFT AND AIRCRAFT ENGINES Exhaust Emissions (New Aircraft Gas Turbine Engines) § 87.23 Exhaust emission standards for Tier 6 and Tier 8...

  15. Proposal and Implementation of SSH Client System Using Ajax

    NASA Astrophysics Data System (ADS)

    Kosuda, Yusuke; Sasaki, Ryoichi

    Technology called Ajax gives web applications the functionality and operability of desktop applications. In this study, we propose and implement a Secure Shell (SSH) client system using Ajax, independent of the OS or Java execution environment. In this system, SSH packets are generated in a web browser using JavaScript, and a web server acts as a proxy in communication with an SSH server to realize end-to-end SSH communication. We implemented a prototype program and confirmed by experiment that it runs on several web browsers and mobile phones. This system enables secure SSH communication from a PC at an Internet cafe or from any mobile phone. By measuring the processing performance, we verified satisfactory performance for emergency use, although the speed was unsatisfactory in some cases on mobile phones. The system proposed in this study will be effective in various fields of E-Business.

  16. Early Attrition among Suicidal Clients

    ERIC Educational Resources Information Center

    Surgenor, P. W. G.; Meehan, V.; Moore, A.

    2016-01-01

    The study aimed to identify the level of suicidal ideation in early attrition clients and their reasons for the early termination of their therapy. The cross-sectional design involved early attrition clients (C[subscript A]) who withdrew from therapy before their second session (n = 61), and continuing clients (C[subscript C]) who progressed…

  17. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    NASA Astrophysics Data System (ADS)

    Letts, J.; Magini, N.

    2011-12-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage-out of job output at the end of the analysis jobs led to the introduction of a local fallback stage-out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfer links between CMS Tier-2 sites worldwide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.

  18. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization

    PubMed Central

    Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515

  19. Video personalization for usage environment

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  20. Neutralization tiers of HIV-1

    PubMed Central

    Montefiori, David C.; Roederer, Mario; Morris, Lynn; Seaman, Michael S.

    2018-01-01

    Purpose of review HIV-1 isolates are often classified on the basis of neutralization ‘tier’ phenotype. Tier classification has important implications for the monitoring and interpretation of vaccine-elicited neutralizing antibody responses. The molecular basis that distinguishes the multiple neutralization phenotypes of HIV-1 has been unclear. We present a model based on the dynamic nature of the HIV-1 envelope glycoproteins and its impact on epitope exposure. We also describe a new approach for ranking HIV-1 vaccine-elicited neutralizing antibody responses. Recent findings The unliganded trimeric HIV-1 envelope glycoprotein spike spontaneously transitions through at least three conformations. Neutralization tier phenotypes correspond to the frequency by which the trimer exists in a closed (tiers 2 and 3), open (tier 1A), or intermediate (tier 1B) conformation. An increasing number of epitopes become exposed as the trimer opens, making the virus more sensitive to neutralization by certain antibodies. The closed conformation is stabilized by many broadly neutralizing antibodies. Summary The tier 2 neutralization phenotype is typical of most circulating strains and is associated with a predominantly closed Env trimer configuration that is a high priority to target with vaccines. Assays with tier 1A viruses should be interpreted with caution and with the understanding that they detect many antibody specificities that do not neutralize tier 2 viruses and do not protect against HIV-1 infection. PMID:29266013

  1. Tiered Pricing: Implications for Library Collections

    ERIC Educational Resources Information Center

    Hahn, Karla

    2005-01-01

    In recent years an increasing number of publishers have adopted tiered pricing of journals. The design and implications of tiered-pricing models, however, are poorly understood. Tiered pricing can be modeled using several variables. A survey of current tiered-pricing models documents the range of key variables used. A sensitivity analysis…

  2. Optimal control of M/M/1 two-phase queueing system with state-dependent arrival rate, server breakdowns, delayed repair, and N-policy

    NASA Astrophysics Data System (ADS)

    Rao, Hanumantha; Kumar, Vasanta; Srinivasa Rao, T.; Srinivasa Kumar, B.

    2018-04-01

In this paper, we examine a two-stage queueing system in which arrivals are Poisson with a rate that depends on the state of the server, namely: vacation, pre-service, operational, or breakdown. The service station is subject to breakdowns and to delays in repair due to unavailability of the repair facility. Service occurs in two basic stages, the first being bulk service to all customers waiting in the queue and the second being individual service to each of them. The server operates under an N-policy and needs preliminary (startup) time to begin batch service after a vacation period. Startup times, uninterrupted service times, the length of each vacation period, delay times and service times follow exponential distributions. Closed-form expressions for the mean system size under the different server states are derived. Numerical investigations are carried out to study the impact of the system parameters on the optimal threshold N and the minimum expected unit cost.
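As background for this kind of cost analysis, the mean system size of a plain M/M/1 queue (without the vacations, breakdowns, or N-policy studied in the paper) is L = ρ/(1−ρ) with ρ = λ/μ. A minimal sketch of that baseline formula:

```python
def mm1_mean_system_size(lam: float, mu: float) -> float:
    """Mean number of customers in a plain M/M/1 system: L = rho / (1 - rho)."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    return rho / (1 - rho)

# Example: lam = 2 arrivals and mu = 5 services per unit time gives rho = 0.4.
L = mm1_mean_system_size(2.0, 5.0)
```

The paper's model generalizes this by making λ state-dependent and by adding vacation, startup, delay, and repair phases, so its closed-form expressions are correspondingly more involved.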

  3. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.220 Tiering. (a) Preliminary Determination of Risk-Based Tiering. Based on...

  4. An Adaptive Priority Tuning System for Optimized Local CPU Scheduling using BOINC Clients

    NASA Astrophysics Data System (ADS)

    Mnaouer, Adel B.; Ragoonath, Colin

    2010-11-01

Volunteer Computing (VC) is a Distributed Computing model which utilizes idle CPU cycles from computing resources donated by volunteers who are connected through the Internet to form a very large-scale, loosely coupled High Performance Computing environment. Distributed Volunteer Computing environments such as the BOINC framework are concerned mainly with the efficient scheduling of the available resources to the applications which require them. The BOINC framework thus contains a number of scheduling policies/algorithms, both on the server side and on the client, that work together to maximize the available resources and to provide a degree of QoS in an environment which is highly volatile. This paper focuses on the BOINC client and introduces an adaptive priority tuning client-side middleware application which improves the execution times of Work Units (WUs) while maintaining an acceptable Maximum Response Time (MRT) for the end user. We have conducted extensive experimentation on the proposed system and the results show a clear speedup of BOINC applications using our optimized middleware as opposed to running with the original BOINC client.

  5. Tier 1 and Tier 2 Early Intervention for Handwriting and Composing

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; Rutberg, Judith E.; Abbott, Robert D.; Garcia, Noelia; Anderson-Youngstrom, Marci; Brooks, Allison; Fulton, Cynthia

    2006-01-01

    Three studies evaluated Tier 1 early intervention for handwriting at a critical period for literacy development in first grade and one study evaluated Tier 2 early intervention in the critical period between third and fourth grades for composing on high stakes tests. The results contribute to knowledge of research-supported handwriting and…

  6. Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yang; Gunasekaran, Raghul; Ma, Xiaosong

    2014-01-01

Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. We need more interaction between application I/O requirements and system software tools to help alleviate the I/O bottleneck, moving towards I/O-aware job scheduling. However, this requires rich techniques to capture application I/O characteristics, which remain evasive in production systems. Traditionally, I/O characteristics have been obtained using client-side tracing tools, with drawbacks such as non-trivial instrumentation/development costs, large trace traffic, and inconsistent adoption. We present a novel approach, I/O Signature Identifier (IOSI), to characterize the I/O behavior of data-intensive applications. IOSI extracts signatures from noisy, zero-overhead server-side I/O throughput logs that are already collected on today's supercomputers, without interfering with the compiling/execution of applications. We evaluated IOSI using the Spider storage system at Oak Ridge National Laboratory, the S3D turbulence application (running on 18,000 Titan nodes), and benchmark-based pseudo-applications. Through our experiments we confirmed that IOSI effectively extracts an application's I/O signature despite significant server-side noise. Compared to client-side tracing tools, IOSI is transparent, interface-agnostic, and incurs no overhead. Compared to alternative data alignment techniques (e.g., dynamic time warping), it offers higher signature accuracy and shorter processing time.
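The abstract contrasts IOSI with dynamic time warping (DTW) as a data-alignment baseline. As background, the textbook DTW distance between two throughput traces can be sketched as follows; this is the generic algorithm, not code from IOSI:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    D[i][j] holds the minimum cumulative cost of aligning a[:i] with b[:j];
    each step may advance either sequence or both (the three-way min).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

The quadratic table fill is part of why the abstract can claim shorter processing time for IOSI's alignment approach on long server-side logs.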

  7. NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Shen, G.; Yang, L.

    2011-03-28

The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. It is an open structure platform, and we try to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with minor modification. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As a new 3rd generation synchrotron light source with ultra low emittance, there are new requirements and challenges to control and manipulate the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture of the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted a concept of middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulating, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, the same as in the traditional approach. In NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed based on a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are the developers of these APIs.
With our Client/Server mode based approach, we leave how to retrieve information to the

  8. Effects of Tier 2 and Tier 3 Mathematics Interventions for Second Graders with Mathematics Difficulties

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih

    2015-01-01

    Two studies were conducted to examine the effects of Tier 2 and Tier 3 mathematics interventions on students with mathematics learning difficulties. In the first study, the work of Bryant et al. was replicated and expanded upon by documenting the sustained effects of a Tier 2 mathematics intervention on mathematics performance by second graders.…

  9. A client/server system for Internet access to biomedical text/image databanks.

    PubMed

    Thoma, G R; Long, L R; Berman, L E

    1996-01-01

Internet access to mixed text/image databanks is finding application in the medical world. An example is a database of medical X-rays and associated data consisting of demographic, socioeconomic, physician's exam, medical laboratory and other information collected as part of a nationwide health survey conducted by the government. Another example is a collection of digitized cryosection images, CT and MR taken of cadavers as part of the National Library of Medicine's Visible Human Project. In both cases, the challenge is to provide access to both the image and the associated text for a wide end user community to create atlases, conduct epidemiological studies, and develop image-specific algorithms for compression, enhancement and other types of image processing, among many other applications. The databanks mentioned above are being created in prototype form. This paper describes the prototype system developed for the archiving of the data and the client software to enable a broad range of end users to access the archive, retrieve text and image data, display the data and manipulate the images. System design considerations include: data organization in a relational database management system with object-oriented extensions; a hierarchical organization of the image data by different resolution levels for different user classes; client design based on common hardware and software platforms incorporating SQL search capability, X Window, Motif and TAE (a development environment supporting rapid prototyping and management of graphic-oriented user interfaces); potential to include ultra high resolution display monitors as a user option; an intuitive user interface paradigm for building complex queries; and contrast enhancement, magnification and mensuration tools for better viewing by the user.

  10. The Application of a Three-Tier Model of Intervention to Parent Training

    PubMed Central

    Phaneuf, Leah; McIntyre, Laura Lee

    2015-01-01

    A three-tier intervention system was designed for use with parents with preschool children with developmental disabilities to modify parent–child interactions. A single-subject changing-conditions design was used to examine the utility of a three-tier intervention system in reducing negative parenting strategies, increasing positive parenting strategies, and reducing child behavior problems in parent–child dyads (n = 8). The three intervention tiers consisted of (a) self-administered reading material, (b) group training, and (c) individualized video feedback sessions. Parental behavior was observed to determine continuation or termination of intervention. Results support the utility of a tiered model of intervention to maximize treatment outcomes and increase efficiency by minimizing the need for more costly time-intensive interventions for participants who may not require them. PMID:26213459

  11. How to securely replicate services

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth

    1992-01-01

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
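The k-of-n acceptance rule described above can be sketched as plain response voting. Note this is an illustrative simplification: in the paper's actual scheme the client verifies at most two public keys for the service as a whole rather than comparing or authenticating individual server replies, and the names below are hypothetical.

```python
from collections import Counter

def accept_response(responses, k):
    """Accept a reply only if at least k servers returned an identical response.

    With at least k correct servers and fewer than k corrupt ones, the correct
    answer reaches the threshold while no fabricated answer can.
    """
    if not responses:
        return None
    value, count = Counter(responses).most_common(1)[0]
    return value if count >= k else None
```

For example, with k = 2, replies of "a", "a", "b" yield "a", while three mutually distinct replies yield no accepted response.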

  12. The Live Access Server - A Web-Services Framework for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.

    2005-12-01

The Live Access Server (LAS) is a general purpose Web-server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data (in-situ observations and curvilinear grids) it can serve and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is

  13. Regulatory Compliance in Multi-Tier Supplier Networks

    NASA Technical Reports Server (NTRS)

    Goossen, Emray R.; Buster, Duke A.

    2014-01-01

    Over the years, avionics systems have increased in complexity to the point where 1st tier suppliers to an aircraft OEM find it financially beneficial to outsource designs of subsystems to 2nd tier and at times to 3rd tier suppliers. Combined with challenging schedule and budgetary pressures, the environment in which safety-critical systems are being developed introduces new hurdles for regulatory agencies and industry. This new environment of both complex systems and tiered development has raised concerns in the ability of the designers to ensure safety considerations are fully addressed throughout the tier levels. This has also raised questions about the sufficiency of current regulatory guidance to ensure: proper flow down of safety awareness, avionics application understanding at the lower tiers, OEM and 1st tier oversight practices, and capabilities of lower tier suppliers. Therefore, NASA established a research project to address Regulatory Compliance in a Multi-tier Supplier Network. This research was divided into three major study efforts: 1. Describe Modern Multi-tier Avionics Development 2. Identify Current Issues in Achieving Safety and Regulatory Compliance 3. Short-term/Long-term Recommendations Toward Higher Assurance Confidence This report presents our findings of the risks, weaknesses, and our recommendations. It also includes a collection of industry-identified risks, an assessment of guideline weaknesses related to multi-tier development of complex avionics systems, and a postulation of potential modifications to guidelines to close the identified risks and weaknesses.

  14. Obsolescence Considerations for Materials in the Lower Sub-Tiers of the Supply Chain

    DTIC Science & Technology

    2015-04-01

Institute for Defense Analyses, Obsolescence Considerations for Materials in the Lower Sub-Tiers of the Supply Chain. Programs are generally unaware of risks of material obsolescence lurking within the supply chain, and by the time that the issue impacts an item

  15. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
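The routing behavior described, a virtual server that sends traffic only to pool members whose health monitors pass, can be sketched in miniature. This is an illustrative model, not F5 BIG-IP configuration, and the member names are invented:

```python
import itertools

class VirtualServer:
    """Round-robin pool that skips members whose health monitor has failed,
    loosely mimicking how a smart switch routes around a dead back end."""

    def __init__(self, members):
        self.members = list(members)
        self.healthy = {m: True for m in self.members}
        self._rr = itertools.cycle(self.members)

    def mark(self, member, up):
        """Record a health-monitor result for one pool member."""
        self.healthy[member] = up

    def route(self):
        """Return the next healthy member, or fail if the whole pool is down."""
        for _ in range(len(self.members)):
            m = next(self._rr)
            if self.healthy[m]:
                return m
        raise RuntimeError("no healthy pool members")

pool = VirtualServer(["gums01", "gums02"])
pool.mark("gums01", False)   # health monitor declares gums01 down
```

After the mark, all requests flow to the surviving member; once the monitor reports the failed member up again, round-robin distribution resumes automatically.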

  16. Three-tier rough superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-08-01

A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state which enables the water to flow with a speed over 2 m s⁻¹. In addition, it could prevent itself from being wetted by a droplet with low surface tension (water and ethanol mixed 1:1 in volume) which reveals a flow speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface could further improve its water-repellent properties.

  17. Three-tier rough superhydrophobic surfaces.

    PubMed

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-08-07

A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state which enables the water to flow with a speed over 2 m s⁻¹. In addition, it could prevent itself from being wetted by a droplet with low surface tension (water and ethanol mixed 1:1 in volume) which reveals a flow speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface could further improve its water-repellent properties.

  18. A distributed Tier-1

    NASA Astrophysics Data System (ADS)

    Fischer, L.; Grønager, M.; Kleist, J.; Smirnova, O.

    2008-07-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  19. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the Civetweb embeddable HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented with HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
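To illustrate the idea of serving registered objects over HTTP in JSON, here is a toy analogue written in Python with the standard library rather than ROOT's actual C++ API; the registry contents and names are invented for the example:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A registry of named objects, each exposed as JSON at /<name>. In THttpServer
# the equivalent registration makes ROOT objects browsable in a web client.
REGISTRY = {"hist1": {"title": "counts", "bins": [1, 4, 9]}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        obj = REGISTRY.get(self.path.lstrip("/"))
        if obj is None:
            self.send_error(404)
            return
        body = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Serving with HTTPServer(("127.0.0.1", 8080), Handler).serve_forever() lets a
# browser fetch /hist1 as JSON, analogous to monitoring a registered object.
```

The point of the JSON path is that any web client can poll an object's state without plugins, which is what the generic HTML/JavaScript interface builds on.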

  20. Mathematics Intervention for First- and Second-Grade Students with Mathematics Difficulties: The Effects of Tier 2 Intervention Delivered as Booster Lessons

    ERIC Educational Resources Information Center

    Bryant, Diane Pedrotty; Bryant, Brian R.; Gersten, Russell; Scammacca, Nancy; Chavez, Melissa M.

    2008-01-01

    This study sought to examine the effects of Tier 2 intervention in a multitiered model on the performance of first- and second-grade students who were identified as having mathematics difficulties. A regression discontinuity design was utilized. Participants included 126 (Tier 2, n = 26) first graders and 140 (Tier 2, n = 25) second graders. Tier…

  1. Middle School Students' Responses to Two-Tier Tasks

    ERIC Educational Resources Information Center

    Haja, Shajahan; Clarke, David

    2011-01-01

    The structure of two-tier testing is such that the first tier consists of a multiple-choice question and the second tier requires justifications for choices of answers made in the first tier. This study aims to evaluate two-tier tasks in "proportion" in terms of students' capacity to write and select justifications and to examine the effect of…

  2. An introduction to indigenous health and culture: the first tier of the Three Tiered Plan.

    PubMed

    Sinnott, M J; Wittmann, B

    2001-06-01

    The objective of the present study was to prepare new doctors with an awareness of cultural and health issues to facilitate positive experiences with indigenous patients. The study incorporated the 1998 intern orientation programs in Queensland public hospitals. The study method included tier one of the Three Tiered Plan, which was implemented and audited. Indigenous liaison officers, directors of clinical training and medical education officers were surveyed prior to this implementation to determine whether any or similar initiatives had been carried out in previous years and/or were planned. Post-implementation feedback from interns was obtained by using questionnaires. Follow-up telephone interviews with the directors of clinical training, medical education officers and indigenous hospital liaison officers detailed the format and content of tier one at each hospital. The results indicate that this active intervention improved the implementation rate of tier one from nine of 19 (47%) Queensland public hospitals in 1997 to 17 (90%) in 1998. The 14 indigenous hospital liaison officers (100%) involved in the intervention perceived it as beneficial. Forty-three (67%) of interns who responded to the survey indicated they had encountered an indigenous patient within the last 2-4 months. The level of knowledge of indigenous health and culture self-reported by interns was between the categories 'enough to get by' and 'inadequate'. In conclusion, it appears that tier one has been successful and is to be a formal component of intern orientations in Queensland public hospitals. Further initiatives in indigenous health and culture targeting medical staff (i.e. tier two and tier three), are needed.

  3. Tier 3 Toxicity Value White Paper

    EPA Pesticide Factsheets

    The purpose of this white paper is to articulate the issues pertaining to Tier 3 toxicity values and provide recommendations on processes that will improve the transparency and consistency of identifying, evaluating, selecting, and documenting Tier 3 toxicity values for use in the Superfund and Resource Conservation and Recovery Act (RCRA) programs. This white paper will be used to assist regional risk assessors in selecting Tier 3 toxicity values as well as provide the foundation for future regional and national efforts to improve guidance and policy on Tier 3 toxicity values.

  4. Towards more stable operation of the Tokyo Tier2 center

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation with WLCG started in 2007 after several years of development beginning in 2002. In December 2012, we replaced almost all hardware in the third system upgrade to deal with analysis of the further growing data of the ATLAS experiment. The number of CPU cores was increased by a factor of two (9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark test in 32-bit compile mode. The score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes have a 16-core configuration, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage with a non-blocking 10 Gbps internal network backbone using two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equivalent number of 1U file servers with 8G-FC connections to maximize file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources has the same architecture as the Tier2 resources, it can be migrated as extra Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved connectivity on the wide area network.
Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines

  5. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    NASA Astrophysics Data System (ADS)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified and now presents waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband at 40 Hz to strong motion at 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries and now incorporates a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
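
    The "on-the-fly data calibration" mentioned in this record amounts to scaling raw digitizer counts into SI units per channel. A minimal sketch follows; the calibration factor and units here are illustrative assumptions, not values from the Waveform Server:

```python
def calibrate(counts, nm_per_count):
    """Convert raw digitizer counts to SI units (m/s) using a
    per-channel calibration factor given in nm/s per count.
    The factor is hypothetical; in practice such values come from
    instrument-response metadata stored alongside the waveforms."""
    return [c * nm_per_count * 1e-9 for c in counts]

raw = [100, -250, 400]              # raw counts from one channel
si = calibrate(raw, 1.589)          # hypothetical broadband factor
```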

  6. BitTorious volunteer: server-side extensions for centrally-managed volunteer storage in BitTorrent swarms.

    PubMed

    Lee, Preston V; Dinu, Valentin

    2015-11-04

    Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations from the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate it into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.
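
    The actual "peer affinity" scoring lives in the BitTorious reference implementation and is not reproduced here. Purely to illustrate the kind of calculation a volunteer-storage client performs, the sketch below prioritizes under-replicated pieces across a swarm (a generic rarest-first heuristic, not the BitTorious algorithm):

```python
from collections import Counter

def rarest_pieces(peer_bitfields, want=3):
    """Rank piece indices by how few peers currently hold them, so a
    volunteer can donate disk to the least-replicated data first.
    peer_bitfields: one set of held piece indices per peer."""
    counts = Counter()
    for held in peer_bitfields:
        counts.update(held)
    pieces = set().union(*peer_bitfields) if peer_bitfields else set()
    # Sort by replication count, breaking ties by piece index.
    return sorted(pieces, key=lambda p: (counts[p], p))[:want]

# Three peers: piece 1 is well replicated, pieces 0 and 2 are rare.
order = rarest_pieces([{0, 1}, {1, 2}, {1}])
```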

  7. 50 CFR 86.53 - What are funding tiers?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 9 2013-10-01 2013-10-01 false What are funding tiers? 86.53 Section 86... (BIG) PROGRAM How States Apply for Grants § 86.53 What are funding tiers? (a) This grant program will consist of two tiers of funding. (i) You may apply for one or both tiers. (ii) The two tiers will allow...

  8. Development of a tier 1 R5 clade C simian-human immunodeficiency virus as a tool to test neutralizing antibody-based immunoprophylaxis.

    PubMed

    Siddappa, Nagadenahalli B; Hemashettar, Girish; Wong, Yin Ling; Lakhashe, Samir; Rasmussen, Robert A; Watkins, Jennifer D; Novembre, Francis J; Villinger, François; Else, James G; Montefiori, David C; Ruprecht, Ruth M

    2011-04-01

    While some recently transmitted HIV clade C (HIV-C) strains exhibited tier 1 neutralization phenotypes, most were tier 2 strains (J Virol 2010; 84:1439). Because induction of neutralizing antibodies (nAbs) through vaccination against tier 2 viruses has proven difficult, we have generated a tier 1, clade C simian-human immunodeficiency virus (SHIV-C) to permit efficacy testing of candidate AIDS vaccines against tier 1 viruses. SHIV-1157ipEL was created by swapping env of a late-stage virus with that of a tier 1, early form. After adaptation to rhesus macaques (RM), passaged SHIV-1157ipEL-p replicated vigorously in vitro and in vivo while maintaining R5 tropism. The virus was reproducibly transmissible intrarectally. Phylogenetically, SHIV-1157ipEL-p Env clustered with HIV-C sequences. All RM chronically infected with SHIV-1157ipEL-p developed high nAb titers against autologous as well as heterologous tier 1 strains. SHIV-1157ipEL-p was reproducibly transmitted in RM, induced cross-clade nAbs, and represents a tool to evaluate anti-HIV-C nAb responses in primates. © 2010 John Wiley & Sons A/S.

  9. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  10. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  11. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  12. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  13. 40 CFR 79.54 - Tier 3.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... emission control equipment. (3) A manufacturer or group may be required to conduct biological and/or... Requiring Tier 3 Testing. (1) Tier 3 testing shall be required of a manufacturer or group of manufacturers... products. Tier 3 testing may be conducted either on an individual basis or a group basis. If performed on a...

  14. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    NASA Astrophysics Data System (ADS)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving the findability of libraries' digital content on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results show that the developed SOAP application with a multi-tier architecture helps people easily access the library server website of Gorontalo Province and supports access to digital collections, subscription databases, and library catalogs in each Regency or City library in Gorontalo Province.
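
    SOAP itself is just an XML envelope, typically carried over HTTP. A minimal request of the kind such a library web service would receive can be built with the Python standard library; the operation name, namespace, and parameters below are hypothetical, not taken from the Gorontalo system:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, ns="urn:example:library"):
    """Build a minimal SOAP 1.1 envelope wrapping one operation call.
    Operation, namespace, and parameter names are illustrative."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = build_soap_request("SearchCatalog",
                             {"title": "informatics", "maxResults": 10})
```

A real deployment would POST this string to the service endpoint named in the WSDL, with the matching SOAPAction header.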

  15. Comparison of one-tier and two-tier newborn screening metrics for congenital adrenal hyperplasia.

    PubMed

    Sarafoglou, Kyriakie; Banks, Kathryn; Gaviglio, Amy; Hietala, Amy; McCann, Mark; Thomas, William

    2012-11-01

    Newborn screening (NBS) for the classic forms of congenital adrenal hyperplasia (CAH) is mandated in all states in the United States. Compared with other NBS disorders, the false-positive rate (FPR) of CAH screening remains high and has not been significantly improved by adjusting 17α-hydroxyprogesterone cutoff values for birth weight and/or gestational age. Minnesota was the first state to initiate, and is currently only 1 of 4 states performing, second-tier steroid profiling for CAH. False-negative rates (FNRs) for CAH are not well known. This is a population-based study of all Minnesota infants (769,834) born 1999-2009, grouped by screening protocol (one-tier with repeat screen, January 1999 to May 2004; two-tier with second-tier steroid profiling, June 2004 to December 2009). FPR, FNR, and positive predictive value (PPV) were calculated per infant, rather than per sample, and compared between protocols. Overall, 15 false-negatives (4 salt-wasting, 11 simple-virilizing) and 45 true-positives were identified from 1999 to 2009. With two-tier screening, FNR was 32%, FPR increased to 0.065%, and PPV decreased to 8%, but these changes were not statistically significant. Second-tier steroid profiling obviated repeat screens of borderline results (on average 355 per year). In comparing the 2 screening protocols, the FPR of CAH NBS remains high, the PPV remains low, and false-negatives occur more frequently than has been reported. Physicians should be cautioned that a negative NBS does not necessarily rule out classic CAH; therefore, any patient for whom there is clinical concern for CAH should receive immediate diagnostic testing.
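
    The per-infant metrics compared in this record follow directly from the standard confusion-matrix definitions. As a sketch (the counts below are illustrative round numbers chosen to give a PPV near the reported 8%, not the Minnesota data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Per-infant screening metrics:
    PPV = TP/(TP+FP), FPR = FP/(FP+TN), FNR = FN/(FN+TP)."""
    return {
        "ppv": tp / (tp + fp),
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
    }

# Illustrative counts only: 8 true positives among 100 screen-positives
# yields the kind of 8% PPV reported for the two-tier protocol.
m = screening_metrics(tp=8, fp=92, fn=4, tn=99896)
```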

  16. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    ERIC Educational Resources Information Center

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Douglas; Fuchs, Lynn S.; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in…

  17. Mobile Assisted Security in Wireless Sensor Networks

    DTIC Science & Technology

    2015-08-03

    server from Google’s DNS, Chromecast and the content server does the 3-way TCP Handshake which is followed by Client Hello and Server Hello TLS messages...utilized TLS v1.2, except NTP servers and google’s DNS server. In the TLS v1.2, after handshake, client and server sends Client Hello and Server Hello ...Messages in order. In Client Hello messages, client offers a list of Cipher Suites that it supports. Each Cipher Suite defines the key exchange algorithm
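
    The Client Hello cipher-suite offer described in this record is easy to inspect from Python: the ssl module exposes the list a default client context would advertise in its Client Hello (the exact list depends on the local OpenSSL build):

```python
import ssl

# A TLS client advertises its supported cipher suites in the Client
# Hello; this lists what a default Python client context would offer.
ctx = ssl.create_default_context()
cipher_names = [c["name"] for c in ctx.get_ciphers()]
```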

  18. 40 CFR 87.23 - Exhaust emission standards for Tier 6 and Tier 8 engines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Exhaust emission standards for Tier 6 and Tier 8 engines. 87.23 Section 87.23 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) Definitions. Exhaust Emissions (New Aircraft Gas Turbine Engines) § 87...

  19. Think They're Drunk? Alcohol Servers and the Identification of Intoxication.

    ERIC Educational Resources Information Center

    Burns, Edward D.; Nusbaumer, Michael R.; Reiling, Denise M.

    2003-01-01

    Examines practices used by servers to assess intoxication. The analysis was based upon questionnaires mailed to a random probability sample of licensed servers from one state (N = 822). Indicators found to be most important were examined in relation to a variety of occupational characteristics. Implications for training curricula, policy…

  20. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    PubMed

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  1. Implementation Challenges for Tier One and Tier Two School-Based Programs for Early Adolescents

    ERIC Educational Resources Information Center

    LaRusso, Maria D.; Donovan, Suzanne; Snow, Catherine

    2016-01-01

    This mixed-method study examined the implementation and the challenges to implementation for participants in randomized controlled trials of two school-based programs for early adolescents: the Tier One Word Generation (WG) program, and the Tier Two Strategic Adolescent Reading Intervention (STARI). Levels of implementation for WG and STARI varied…

  2. Structural Constraints of Vaccine-Induced Tier-2 Autologous HIV Neutralizing Antibodies Targeting the Receptor-Binding Site.

    PubMed

    Bradley, Todd; Fera, Daniela; Bhiman, Jinal; Eslamizar, Leila; Lu, Xiaozhi; Anasti, Kara; Zhang, Ruijung; Sutherland, Laura L; Scearce, Richard M; Bowman, Cindy M; Stolarchuk, Christina; Lloyd, Krissey E; Parks, Robert; Eaton, Amanda; Foulger, Andrew; Nie, Xiaoyan; Karim, Salim S Abdool; Barnett, Susan; Kelsoe, Garnett; Kepler, Thomas B; Alam, S Munir; Montefiori, David C; Moody, M Anthony; Liao, Hua-Xin; Morris, Lynn; Santra, Sampa; Harrison, Stephen C; Haynes, Barton F

    2016-01-05

    Antibodies that neutralize autologous transmitted/founder (TF) HIV occur in most HIV-infected individuals and can evolve to neutralization breadth. Autologous neutralizing antibodies (nAbs) against neutralization-resistant (Tier-2) viruses are rarely induced by vaccination. Whereas broadly neutralizing antibody (bnAb)-HIV-Envelope structures have been defined, the structures of autologous nAbs have not. Here, we show that immunization with TF mutant Envs gp140 oligomers induced high-titer, V5-dependent plasma neutralization for a Tier-2 autologous TF evolved mutant virus. Structural analysis of autologous nAb DH427 revealed binding to V5, demonstrating the source of narrow nAb specificity and explaining the failure to acquire breadth. Thus, oligomeric TF Envs can elicit autologous nAbs to Tier-2 HIVs, but induction of bnAbs will require targeting of precursors of B cell lineages that can mature to heterologous neutralization. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  3. 50 CFR 86.53 - What are funding tiers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false What are funding tiers? 86.53 Section 86... project merits. (d) We describe the two tiers as follows: (1) Tier One Projects. (i) You may submit a... $100,000 of Federal funds for any given fiscal year. (ii) Tier One projects must meet the eligibility...

  4. 47 CFR 76.922 - Rates for the basic service tier and cable programming services tiers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Rates for the basic service tier and cable programming services tiers. 76.922 Section 76.922 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation...

  5. Tiered Approach to Resilience Assessment.

    PubMed

    Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David

    2018-04-25

    Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies. Published 2018. This article is a U.S. government work and is in the public domain in the USA.

  6. Single-Tier Testing with the C6 Peptide ELISA Kit Compared with Two-Tier Testing for Lyme Disease

    PubMed Central

    Wormser, Gary P.; Schriefer, Martin; Aguero-Rosenfeld, Maria E.; Levin, Andrew; Steere, Allen C.; Nadelman, Robert B.; Nowakowski, John; Marques, Adriana; Johnson, Barbara J. B.; Dumler, J. Stephen

    2014-01-01

    Background The two-tier serologic testing protocol for Lyme disease has a number of shortcomings including low sensitivity in early disease; increased cost, time and labor; and subjectivity in the interpretation of immunoblots. Methods The diagnostic accuracy of a single-tier commercial C6 ELISA kit was compared with two-tier testing. Results The C6 ELISA was significantly more sensitive than two-tier testing with sensitivities of 66.5% (95% C.I.:61.7-71.1) and 35.2% (95%C.I.:30.6-40.1), respectively (p<0.001) in 403 sera from patients with erythema migrans. The C6 ELISA had sensitivity statistically comparable to two-tier testing in sera from Lyme disease patients with early neurological manifestations (88.6% vs. 77.3%, p=0.13) or arthritis (98.3% vs. 95.6%, p= 0.38). The specificities of C6 ELISA and two-tier testing in over 2200 blood donors, patients with other conditions, and Lyme disease vaccine recipients were found to be 98.9% and 99.5%, respectively (p<0.05, 95% C.I. surrounding the 0.6 percentage point difference of 0.04 to 1.15). Conclusions Using a reference standard of two-tier testing, the C6 ELISA as a single step serodiagnostic test provided increased sensitivity in early Lyme disease with comparable sensitivity in later manifestations of Lyme disease. The C6 ELISA had slightly decreased specificity. Future studies should evaluate the performance of the C6 ELISA compared with two-tier testing in routine clinical practice. PMID:23062467
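
    The sensitivities and confidence intervals quoted in this record are standard proportion estimates. A sketch using the normal approximation follows; the published intervals may have been computed with an exact method, so the endpoints can differ slightly:

```python
import math

def sensitivity_ci(positives, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion,
    e.g. sensitivity = positive tests / diseased sera tested."""
    p = positives / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 268 of 403 erythema migrans sera corresponds to the ~66.5% C6
# sensitivity reported above (268 is derived, not stated in the record).
p, lo, hi = sensitivity_ci(268, 403)
```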

  7. Reported Effects of the SRTR 5-tier Rating System on US Transplant Centers: Results of a National Survey.

    PubMed

    Van Pilsum Rasmussen, Sarah E; Thomas, Alvin G; Garonzik-Wang, Jacqueline; Henderson, Macey L; Stith, Sarah S; Segev, Dorry L; Hersch Nicholas, Lauren

    2018-05-26

    In the US, the Scientific Registry of Transplant Recipients (SRTR) provides publicly available quality report cards. These reports have historically rated transplant programs using a 3-tier system. In 2016, the SRTR temporarily transitioned to a 5-tier system, which classified more programs as under-performing. As part of a larger survey about transplant quality metrics, we surveyed members of the American Society of Transplant Surgeons and American Society of Transplantation (N = 280 respondents) on transplant center experiences with patient and payer responses to the 5-tier SRTR ratings. Over half of respondents (n=137, 52.1%) reported ≥1 negative effect of the new 5-tier ranking system, including losing patients, losing insurers, increased concern among patients, and increased concern among referring providers. Few respondents (n=35, 13.7%) reported any positive effects of the 5-tier ranking system. Lower SRTR-reported scores on the 5-tier scale were associated with increased risk of reporting at least one negative effect in a logistic model (p<0.01). The change to a more granular rating system provoked an immediate response in the transplant community that may have long-term implications for transplant hospital finances and patient options for transplantation. This article is protected by copyright. All rights reserved.

  8. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  9. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  10. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  11. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  12. 6 CFR 27.220 - Tiering.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Tiering. 27.220 Section 27.220 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical... Risk-Based Tiering. Following review of a covered facility's Security Vulnerability Assessment, the...

  13. 18 CFR 707.9 - Tiering.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Tiering. 707.9 Section 707.9 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water Resources Council Implementing Procedures § 707.9 Tiering. In accordance...

  14. 18 CFR 707.9 - Tiering.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Tiering. 707.9 Section 707.9 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water Resources Council Implementing Procedures § 707.9 Tiering. In accordance...

  15. 18 CFR 707.9 - Tiering.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Tiering. 707.9 Section 707.9 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water Resources Council Implementing Procedures § 707.9 Tiering. In accordance...

  16. 18 CFR 707.9 - Tiering.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 2 2013-04-01 2012-04-01 true Tiering. 707.9 Section 707.9 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water Resources Council Implementing Procedures § 707.9 Tiering. In accordance...

  17. 18 CFR 707.9 - Tiering.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 2 2014-04-01 2014-04-01 false Tiering. 707.9 Section 707.9 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT (NEPA) Water Resources Council Implementing Procedures § 707.9 Tiering. In accordance...

  18. LISA, the next generation: from a web-based application to a fat client.

    PubMed

    Pierlet, Noëlla; Aerts, Werner; Vanautgaerden, Mark; Van den Bosch, Bart; De Deurwaerder, André; Schils, Erik; Noppe, Thomas

    2008-01-01

    The LISA application, developed by the University Hospitals Leuven, permits referring physicians to consult the electronic medical records of their patients over the internet in a highly secure way. We decided to completely change the way we secured the application, discard the existing web application and build a completely new application, based on the in-house developed hospital information system, used in the University Hospitals Leuven. The result is a fat Java client, running on a Windows Terminal Server, secured by a commercial SSL-VPN solution.

  19. Single-tier testing with the C6 peptide ELISA kit compared with two-tier testing for Lyme disease.

    PubMed

    Wormser, Gary P; Schriefer, Martin; Aguero-Rosenfeld, Maria E; Levin, Andrew; Steere, Allen C; Nadelman, Robert B; Nowakowski, John; Marques, Adriana; Johnson, Barbara J B; Dumler, J Stephen

    2013-01-01

    For the diagnosis of Lyme disease, the 2-tier serologic testing protocol for Lyme disease has a number of shortcomings including low sensitivity in early disease; increased cost, time, and labor; and subjectivity in the interpretation of immunoblots. In this study, the diagnostic accuracy of a single-tier commercial C6 ELISA kit was compared with 2-tier testing. The results showed that the C6 ELISA was significantly more sensitive than 2-tier testing with sensitivities of 66.5% (95% confidence interval [CI] 61.7-71.1) and 35.2% (95% CI 30.6-40.1), respectively (P < 0.001) in 403 sera from patients with erythema migrans. The C6 ELISA had sensitivity statistically comparable to 2-tier testing in sera from Lyme disease patients with early neurologic manifestations (88.6% versus 77.3%, P = 0.13) or arthritis (98.3% versus 95.6%, P = 0.38). The specificities of C6 ELISA and 2-tier testing in over 2200 blood donors, patients with other conditions, and Lyme disease vaccine recipients were found to be 98.9% and 99.5%, respectively (P < 0.05, 95% CI surrounding the 0.6 percentage point difference of 0.04 to 1.15). In conclusion, using a reference standard of 2-tier testing, the C6 ELISA as a single-step serodiagnostic test provided increased sensitivity in early Lyme disease with comparable sensitivity in later manifestations of Lyme disease. The C6 ELISA had slightly decreased specificity. Future studies should evaluate the performance of the C6 ELISA compared with 2-tier testing in routine clinical practice. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. SWPBIS Tiered Fidelity Inventory. Version 2.1

    ERIC Educational Resources Information Center

    Algozzine, B.; Barrett, S.; Eber, L.; George, H.; Horner, R.; Lewis, T.; Putnam, B.; Swain-Bradway, J.; McIntosh, K.; Sugai, G.

    2014-01-01

    The purpose of the SWPBIS Tiered Fidelity Inventory (TFI) is to provide a valid, reliable, and efficient measure of the extent to which school personnel are applying the core features of school-wide positive behavioral interventions and supports (SWPBIS). The TFI is divided into three sections (Tier I: Universal SWPBIS Features; Tier II: Targeted…

  1. The impact of tiered physician networks on patient choices.

    PubMed

    Sinaiko, Anna D; Rosenthal, Meredith B

    2014-08-01

    To assess whether patient choice of physician or health plan was affected by physician tier-rankings. Administrative claims and enrollment data on 171,581 nonelderly beneficiaries enrolled in Massachusetts Group Insurance Commission health plans that include a tiered physician network and who had an office visit with a tiered physician. We estimate the impact of tier-rankings on physicians' within-plan market share of new patients and on the percentage of a physician's patients who switch to other physicians, using fixed-effects regression models. The effect of tiering on consumer plan choice is estimated using logistic regression and a pre-post study design. Physicians in the bottom (least-preferred) tier, particularly certain specialist physicians, had a lower market share of new patient visits than physicians with higher tier-rankings. Patients whose physician was in the bottom tier were more likely to switch health plans. There was no effect of tier-ranking on patients switching away from physicians whom they had seen previously. The effect of tiering appears to be among patients who choose new physicians and at the lower end of the distribution of tiered physicians, rather than moving patients to the "best" performers. These findings suggest strong loyalty of patients to physicians more likely to be considered their personal doctor. © Health Research and Educational Trust.

  2. Remote diagnosis server

    NASA Technical Reports Server (NTRS)

    Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)

    2004-01-01

    A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, a reasoner application in communication with the handler application for processing, and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner applications operate simultaneously relative to each other, such that the present invention diagnosis server performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.
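    The claim describes four components (sensor agent, broker module, handler application, reasoner application) operating simultaneously and passing signals along a chain. A minimal sketch of such a concurrent pipeline using Python queues and threads; the component behavior, signal format, and alert threshold are invented for illustration, not taken from the patent.

    ```python
    import queue
    import threading

    def stage(inbox, outbox, transform):
        """Consume signals until a None shutdown marker arrives, forwarding
        transformed signals (and finally the marker) downstream."""
        while True:
            signal = inbox.get()
            if signal is None:
                outbox.put(None)
                return
            outbox.put(transform(signal))

    # One queue between each pair of adjacent components.
    q_broker, q_handler, q_reasoner, q_out = (queue.Queue() for _ in range(4))

    threads = [
        threading.Thread(target=stage, args=(q_broker, q_handler, lambda s: s)),
        threading.Thread(target=stage, args=(q_handler, q_reasoner,
                                             lambda s: {"reading": s})),
        threading.Thread(target=stage, args=(q_reasoner, q_out,
                                             lambda s: "ALERT" if s["reading"] > 100 else "OK")),
    ]
    for t in threads:
        t.start()

    for reading in (42, 250, 7):      # the sensor agent's signal stream
        q_broker.put(reading)
    q_broker.put(None)                # shut the pipeline down

    diagnoses = []
    while (d := q_out.get()) is not None:
        diagnoses.append(d)
    for t in threads:
        t.join()
    ```

    Because every stage blocks on its own queue, monitoring is continuous in the sense the claim describes: each new reading is diagnosed as it arrives rather than in batches.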

  3. Multi-Tiered System of Support: Best Differentiation Practices for English Language Learners in Tier 1

    ERIC Educational Resources Information Center

    Izaguirre, Cecilia

    2017-01-01

    Purpose: This qualitative case study explored the best practices of differentiation of Tier 1 instruction within a multi-tiered system of support for English Language Learners who were predominately Spanish speaking. Theoretical Framework: The zone of proximal development theory, cognitive theory, and the affective filter hypothesis guided this…

  4. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and Perl that can readily generate this URL. As a result of this flexibility, it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for database access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or Perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google
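    The abstract notes that the LAS UI drives the Product Server through an XML request embedded in an HTTP "get" URL, and that small libraries exist just to generate this URL. A sketch of that embedding with the Python standard library; the endpoint and the XML request schema here are hypothetical stand-ins, not the actual LAS protocol.

    ```python
    from urllib.parse import urlencode

    # Hypothetical LAS endpoint; real deployments define their own paths.
    LAS_BASE = "http://example.org/las/ProductServer.do"

    # A made-up XML request document standing in for the LAS protocol.
    xml_request = (
        '<?xml version="1.0"?>'
        '<lasRequest>'
        '<link match="/lasdata/operations/plot"/>'
        '<args><link match="/lasdata/datasets/sst"/></args>'
        '</lasRequest>'
    )

    def build_las_url(base, xml):
        """Embed an XML request document in a single HTTP GET parameter,
        as the LAS UI libraries do."""
        return base + "?" + urlencode({"xml": xml})

    url = build_las_url(LAS_BASE, xml_request)
    ```

    Because the whole request is one percent-encoded query parameter, any language that can build a query string (Java, JavaScript, Perl in the abstract) can drive the Product Server, which is what makes the radically different UIs possible.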

  5. Physical Attractiveness: Interactive Effects of Counselor and Client on Counseling Processes.

    ERIC Educational Resources Information Center

    Vargas, Alice M.; Borkowski, John G.

    1983-01-01

    Assessed how the physical attractiveness of counselors and clients interacted to build rapport in two experiments involving college students (N=128 and N=64). Results showed the counselor's physical attractiveness had a major impact on her perceived effectiveness and the client's expectation of success irrespective of the client's attractiveness…

  6. Combining Tier 2 and Tier 3 Supports for Students with Disabilities in General Education Settings

    ERIC Educational Resources Information Center

    MacLeod, K. Sandra; Hawken, Leanne S.; O'Neill, Robert E.; Bundock, Kaitlin

    2016-01-01

    Secondary level or Tier 2 interventions such as the Check-in Check-out (CICO) intervention effectively reduce problem behaviors of students who are non-responsive to school-wide interventions. However, some students will not be successful with Tier 2 interventions. This study investigated the effects of adding individualized function-based support…

  7. Day Hospital and Residential Addiction Treatment: Randomized and Nonrandomized Managed Care Clients

    ERIC Educational Resources Information Center

    Witbrodt, Jane; Bond, Jason; Kaskutas, Lee Ann; Weisner, Constance; Jaeger, Gary; Pating, David; Moore, Charles

    2007-01-01

    Male and female managed care clients randomized to day hospital (n=154) or community residential treatment (n=139) were compared on substance use outcomes at 6 and 12 months. To address possible bias in naturalistic studies, outcomes were also examined for clients who self-selected day hospital (n=321) and for clients excluded from randomization…

  8. SPEER-SERVER: a web server for prediction of protein specificity determining sites

    PubMed Central

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J.; Panchenko, Anna R.; Chakrabarti, Saikat

    2012-01-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. PMID:22689646
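    The scoring idea behind SDS prediction, conservation within each subfamily contrasted against variability across the pooled family, can be illustrated with per-column Shannon entropy. This toy sketch ignores SPEER's physico-chemical property weighting and evolutionary-rate terms; the alignment and subfamily split are invented.

    ```python
    from collections import Counter
    from math import log2

    # Toy alignment split into two subfamilies (sequences invented).
    subfam_a = ["MKLV", "MKLI", "MKLV"]
    subfam_b = ["MRLV", "MRLV", "MRLI"]

    def column_entropy(seqs, col):
        """Shannon entropy of residue frequencies at one alignment column."""
        counts = Counter(s[col] for s in seqs)
        n = len(seqs)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    def specificity_signal(col):
        """High when each subfamily is internally conserved but the pooled
        family is variable: a crude stand-in for an SDS score."""
        pooled = column_entropy(subfam_a + subfam_b, col)
        within = (column_entropy(subfam_a, col)
                  + column_entropy(subfam_b, col)) / 2
        return pooled - within

    scores = [specificity_signal(c) for c in range(4)]
    best_site = max(range(4), key=lambda c: scores[c])
    ```

    Column 1 (K in one subfamily, R in the other) scores highest: fully conserved within each subfamily yet maximally split across the family, the signature of a specificity determining site.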

  9. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    PubMed

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.

  10. The Time Series Data Server (TSDS) for Standards-Compliant, Convenient, and Efficient Access to Time Series Data

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.

    2009-12-01

    Data analysis in the physical sciences is often plagued by the difficulty in acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discovery services simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers and limit sufficient semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence. It also supports data types with higher level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http
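    The time-subset requests the abstract describes map onto DAP 2.0 constraint expressions appended to the dataset URL. A sketch of building such a request with the Python standard library; the host and dataset/variable names are hypothetical, though the bracketed `[start:stride:stop]` constraint syntax follows the DAP 2.0 specification that TSDS implements.

    ```python
    from urllib.parse import quote

    # Hypothetical TSDS instance and dataset; names are illustrative only.
    TSDS_BASE = "http://example.org/tsds"

    def dap_subset_url(base, dataset, variable, start, stop, stride=1):
        """Build a DAP 2.0 ASCII request selecting one variable over a
        time-index range -- the common TSDS use case of a time subset."""
        constraint = f"{variable}[{start}:{stride}:{stop}]"
        return f"{base}/{dataset}.asc?{quote(constraint, safe='[]:')}"

    # One day of minute-cadence samples from a hypothetical magnetometer.
    url = dap_subset_url(TSDS_BASE, "magnetometer", "Bz", 0, 1439)
    ```

    Because the subset is expressed in the URL, the server can extract just those 1,440 samples instead of shipping the impractically large full file the abstract complains about.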

  11. Social networks and links to isolation and loneliness among elderly HCBS clients.

    PubMed

    Medvene, Louis J; Nilsen, Kari M; Smith, Rachel; Ofei-Dodoo, Samuel; DiLollo, Anthony; Webster, Noah; Graham, Annette; Nance, Anita

    2016-01-01

    The purpose of this study was to explore the network types of HCBS clients based on the structural characteristics of their social networks. We also examined how the network types were associated with social isolation, relationship quality and loneliness. Forty personal interviews were carried out with HCBS clients to assess the structure of their social networks as indicated by frequency of contact with children, friends, family and participation in religious and community organizations. Hierarchical cluster analysis was conducted to identify network types. Four network types were found including: family (n = 16), diverse (n = 8), restricted (n = 8) and religious (n = 7). Family members comprised almost half of participants' social networks, and friends comprised less than one-third. Clients embedded in family, diverse and religious networks had significantly more positive relationships than clients embedded in restricted networks. Clients embedded in restricted networks had significantly higher social isolation scores and were lonelier than clients in diverse and family networks. The findings suggest that HCBS clients' isolation and loneliness are linked to the types of social networks in which they are embedded. The findings also suggest that clients embedded in restricted networks are at high risk for negative outcomes.
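    The network types above were derived by hierarchical cluster analysis of structural features of each client's network. A stdlib-only sketch of single-linkage agglomerative clustering on invented contact-frequency features, stopped at three clusters; the feature values, scales, and client labels are all illustrative.

    ```python
    from math import dist

    # Invented features: (family contact, friend contact, organizational
    # participation), each on an arbitrary 0-5 frequency scale.
    clients = {
        "A": (5.0, 1.0, 0.0),  # family-focused
        "B": (4.5, 1.5, 0.5),
        "C": (0.5, 0.5, 0.0),  # restricted: little contact anywhere
        "D": (1.0, 0.0, 0.0),
        "E": (2.0, 3.0, 3.0),  # diverse
        "F": (2.5, 3.5, 2.5),
    }

    def single_linkage(members, k):
        """Repeatedly merge the two closest clusters (minimum pairwise
        member distance) until only k clusters remain."""
        clusters = [[m] for m in members]
        while len(clusters) > k:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    d = min(dist(clients[a], clients[b])
                            for a in clusters[i] for b in clusters[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            clusters[i].extend(clusters.pop(j))
        return [sorted(c) for c in clusters]

    types = single_linkage(list(clients), k=3)
    ```

    On this toy data the procedure recovers a family-focused pair, a restricted pair, and a diverse pair, mirroring how network types emerge from structural similarity alone.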

  12. Variations in patient response to tiered physician networks.

    PubMed

    Sinaiko, Anna D

    2016-06-01

    Prior studies found that tiered provider networks channel patients to preferred providers in certain contexts. This paper evaluates whether the effects of tiered physician networks vary for different types of patients. Cross-sectional analysis of fiscal year 2009 to 2010 administrative enrollment and claims data on nonelderly beneficiaries in Massachusetts Group Insurance Commission health plans. Main outcome measures are physician market share among new patients and the percent of physician's patients who switch away. We utilized estimated fixed effects linear regression models that were stratified by patient characteristics. Physicians with the worst tier rankings had lower market share among new patients who are older and sicker, or male, representing losses in market share of 10% and 15%, respectively, than other tiered physicians. A poor tier ranking did not affect physician market share of new patients who are female or younger. There was no effect of a physician's tier ranking on the proportion of patients who switch to other doctors among any groups of patients. Loyalty to their own physicians is pervasive across groups of patients. Physicians with poor tier rankings lost market share among new patients who are older and sicker, and among new male patients. Together, these findings suggest that tiered network designs have the potential for the greatest impact on value in healthcare over time, as more patients seek new relationships with physicians.

  13. Optimal routing of IP packets to multi-homed servers

    NASA Astrophysics Data System (ADS)

    Swartz, K. L.

    1992-08-01

    Multi-homing, or direct attachment to multiple networks, offers both performance and availability benefits for important servers on busy networks. Exploiting these benefits to their fullest requires a modicum of routing knowledge in the clients. Careful policy control must also be reflected in the routing used within the network to make best use of specialized and often scarce resources. While relatively straightforward in theory, this problem becomes much more difficult to solve in a real network containing often intractable implementations from a variety of vendors. This paper presents an analysis of the problem and proposes a useful solution for a typical campus network. Application of this solution at the Stanford Linear Accelerator Center is studied and the problems and pitfalls encountered are discussed, as are the workarounds used to make the system work in the real world.

  14. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main unreliable machine, spare machines, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; the information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.

  15. Four Tiers

    ERIC Educational Resources Information Center

    Moodie, Gavin

    2009-01-01

    This paper posits a classification of tertiary education institutions into four tiers: world research universities, selecting universities, recruiting universities, and vocational institutes. The distinguishing characteristic of world research universities is their research strength, the distinguishing characteristic of selecting universities is…

  16. Managing Attribute—Value Clinical Trials Data Using the ACT/DB Client—Server Database System

    PubMed Central

    Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.

    1998-01-01

    ACT/DB is a client—server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity—attribute—value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
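    The entity-attribute-value layout ACT/DB uses, with values segregated by data type so that they can be indexed by value, can be sketched in a few lines of SQL. The table and column names here are illustrative, not ACT/DB's actual schema.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    # One EAV table per data type; segregating types lets the numeric
    # table carry a value index, which a single mixed table could not.
    conn.executescript("""
    CREATE TABLE eav_int  (entity INTEGER, attribute TEXT, value INTEGER);
    CREATE TABLE eav_text (entity INTEGER, attribute TEXT, value TEXT);
    CREATE INDEX idx_int_value ON eav_int(attribute, value);
    """)

    # One hypothetical study record, spread across the typed EAV tables.
    conn.execute("INSERT INTO eav_int VALUES (1, 'age', 54)")
    conn.execute("INSERT INTO eav_text VALUES (1, 'diagnosis', 'hypertension')")

    # Indexed query by value -- the payoff of type segregation.
    rows = conn.execute(
        "SELECT entity FROM eav_int WHERE attribute = 'age' AND value > 50"
    ).fetchall()
    ```

    New study parameters become rows, not columns, which is why an investigator can define a study's attributes rapidly without schema changes.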

  17. Can Knowledge of Client Birth Order Bias Clinical Judgment?

    ERIC Educational Resources Information Center

    Stewart, Allan E.

    2004-01-01

    Clinicians (N = 308) responded to identical counseling vignettes of a male client that differed only in the client's stated birth order. Clinicians developed different impressions about the client and his family experiences that corresponded with the prototypical descriptions of persons from 1 of 4 birth orders (i.e., first, middle, youngest, and…

  18. 26 CFR 1.1446-5 - Tiered partnership structures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Tiered partnership structures. 1.1446-5 Section...-Free Covenant Bonds § 1.1446-5 Tiered partnership structures. (a) In general. The rules of this section... prescribes rules applicable to a publicly traded partnership in a tiered partnership structure. Paragraph (e...

  19. Systematic Implementation of a Tier 2 Behavior Intervention

    ERIC Educational Resources Information Center

    Carter, Deborah Russell; Carter, Gabriel M.; Johnson, Evelyn S.; Pool, Juli L.

    2013-01-01

    Schools are increasingly adopting tiered models of prevention to meet the needs of diverse populations of students. This article outlines the steps involved in designing and implementing a systematic Tier 2 behavior intervention within a tiered service delivery model. An elementary school example is provided to outline the identification,…

  20. Environmental influences on alcohol consumption practices of alcoholic beverage servers.

    PubMed

    Nusbaumer, Michael R; Reiling, Denise M

    2002-11-01

    Public drinking establishments have long been associated with heavy drinking among both their patrons and servers. Whether these environments represent locations where heavy drinking is learned (learning hypothesis) or simply places where already-heavy drinkers gather in a supportive environment (selection hypothesis) remains an important question. A sample of licensed alcoholic beverage servers in the state of Indiana, USA, was surveyed to better understand the drinking behaviors of servers within the alcohol service industry. Responses (N = 938) to a mailed questionnaire were analyzed to assess the relative influence of environmental and demographic factors on the drinking behavior of servers. Stepwise regression revealed "drinking on the job" as the most influential environmental factor on heavy drinking behaviors, followed by age and gender as influential demographic factors. Support was found for the selection hypothesis, but not for the learning hypothesis. Policy implications are discussed.

  1. Accelerating chronically unresponsive children to tier 3 instruction: what level of data is necessary to ensure selection accuracy?

    PubMed

    Compton, Donald L; Gilbert, Jennifer K; Jenkins, Joseph R; Fuchs, Douglas; Fuchs, Lynn S; Cho, Eunsoo; Barquero, Laura A; Bouton, Bobette

    2012-01-01

    Response-to-intervention (RTI) approaches to disability identification are meant to put an end to the so-called wait-to-fail requirement associated with IQ discrepancy. However, in an unfortunate irony, there is a group of children who wait to fail in RTI frameworks. That is, they must fail both general classroom instruction (Tier 1) and small-group intervention (Tier 2) before becoming eligible for the most intensive intervention (Tier 3). The purpose of this article was to determine how to predict accurately which at-risk children will be unresponsive to Tiers 1 and 2, thereby allowing unresponsive children to move directly from Tier 1 to Tier 3. As part of an efficacy study of a multitier RTI approach to prevention and identification of reading disabilities (RD), 129 first-grade children who were unresponsive to classroom reading instruction were randomly assigned to 14 weeks of small-group, Tier 2 intervention. Nonresponders to this instruction (n = 33) were identified using local norms on first-grade word identification fluency growth linked to a distal outcome of RD at the end of second grade. Logistic regression models were used to predict membership in responder and nonresponder groups. Predictors were entered as blocks of data from least to most difficult to obtain: universal screening data, Tier 1 response data, norm referenced tests, and Tier 2 response data. Tier 2 response data were not necessary to classify students as responders and nonresponders to Tier 2 instruction, suggesting that some children can be accurately identified as eligible for Tier 3 intervention using only Tier 1 data, thereby avoiding prolonged periods of failure to instruction.
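    The analysis enters predictor blocks from cheapest to costliest and asks when later blocks stop improving classification of responders versus nonresponders. A stdlib-only sketch of that comparison using a hand-rolled logistic regression on synthetic data; the features, labels, and effect sizes are invented and do not reflect the study's measures.

    ```python
    import random
    from math import exp

    random.seed(0)

    def sigmoid(z):
        if z < -35:                  # clamp to avoid overflow in exp()
            return 0.0
        if z > 35:
            return 1.0
        return 1.0 / (1.0 + exp(-z))

    def fit_logistic(X, y, lr=0.5, epochs=500):
        """Plain per-sample gradient descent; w[0] is the intercept."""
        w = [0.0] * (len(X[0]) + 1)
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
                err = sigmoid(z) - yi
                w[0] -= lr * err
                for j, xj in enumerate(xi):
                    w[j + 1] -= lr * err * xj
        return w

    def accuracy(w, X, y):
        hits = 0
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            hits += (p >= 0.5) == bool(yi)
        return hits / len(y)

    # Invented data: block 1 = universal screening score, block 2 = Tier 1
    # response slope; label 1 marks a nonresponder.
    screen = [random.gauss(0, 1) for _ in range(200)]
    slope = [random.gauss(0, 1) for _ in range(200)]
    label = [1 if s + t < -0.5 else 0 for s, t in zip(screen, slope)]

    X1 = [[s] for s in screen]                     # screening block only
    X12 = [[s, t] for s, t in zip(screen, slope)]  # + Tier 1 response block

    acc1 = accuracy(fit_logistic(X1, label), X1, label)
    acc12 = accuracy(fit_logistic(X12, label), X12, label)
    ```

    Comparing acc1 with acc12 mirrors the article's question in miniature: if the cheaper blocks already classify nonresponders well, the costlier Tier 2 response data add little and children can be routed to Tier 3 sooner.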

  2. 20 CFR 225.21 - Survivor Tier I PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the Amount of the Residual Lump-Sum Payable § 225.21 Survivor Tier I PIA. The Survivor Tier I PIA is used in computing the tier I... Security Act using the deceased employee's combined railroad and social security earnings after 1950 (or...

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  4. An Examination of Multi-Tier Designs for Legacy Data Access

    DTIC Science & Technology

    1997-12-01

    heterogeneous relational database management systems. The first test system incorporates a two-tier architecture design using Java, and the second system...employs a three-tier architecture design using Java and CORBA. Data on replication times for the two-tier and three-tier designs are presented

  5. The research and implementation of PDM systems based on the .NET platform

    NASA Astrophysics Data System (ADS)

    Gao, Hong-li; Jia, Ying-lian; Yang, Ji-long; Jiang, Wei

    2005-12-01

    A new kind of PDM system scheme based on the .NET platform, for solving application problems of current PDM systems applied in an enterprise, is described. The key technologies of this system, such as .NET, data access, information processing, and the Web, are discussed. The 3-tier architecture of a PDM system based on the C/S and B/S mixed mode is presented. In this system, all users share the same Database Server in order to ensure the coherence and safety of client data. ADO.NET leverages the power of XML to provide disconnected access to data, which frees the connection to be used by other clients. Using this approach, the system performance was improved. Moreover, the important function modules of a PDM system, such as project management, product structure management, and document management, were developed and realized.
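    The disconnected-access pattern the abstract credits to ADO.NET (load a snapshot into memory, release the shared connection for other clients) is language-neutral. A sketch of the same pattern with Python's sqlite3, where a plain list of rows stands in for the DataSet; the table and file names are invented.

    ```python
    import os
    import sqlite3
    import tempfile

    def load_disconnected(db_path, query):
        """Fetch a result snapshot into memory and close the connection at
        once, mirroring ADO.NET's disconnected DataSet: the Database Server
        connection is freed for other clients while we work on the copy."""
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(query).fetchall()
        finally:
            conn.close()

    # A throwaway file database stands in for the shared Database Server.
    db_path = os.path.join(tempfile.mkdtemp(), "pdm_demo.db")
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE parts (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO parts VALUES (?, ?)",
                     [(1, "bolt"), (2, "nut")])
    conn.commit()
    conn.close()

    parts = load_disconnected(db_path, "SELECT id, name FROM parts ORDER BY id")
    ```

    Holding connections only for the duration of a fetch is what lets many C/S and B/S clients share one Database Server without exhausting its connection pool.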

  6. The EarthServer Federation: State, Role, and Contribution to GEOSS

    NASA Astrophysics Data System (ADS)

    Merticariu, Vlad; Baumann, Peter

    2016-04-01

    The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. Because its service interface is rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from open-source OpenLayers and QGIS through open-source NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of the GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO s.r.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecasts (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.
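    A WCPS request is a small functional query shipped to the server, commonly sent as a ProcessCoverages key-value request. A sketch of composing one with the Python standard library; the endpoint is hypothetical, and the coverage name (AvgLandTemp) is the stock rasdaman tutorial example rather than a coverage guaranteed to exist on any federation node.

    ```python
    from urllib.parse import urlencode

    # Hypothetical federation node.
    ENDPOINT = "http://example.org/rasdaman/ows"

    # WCPS: the mean of a datacube's July 2014 slab -- the aggregation is
    # shipped to the data instead of the data to the client.
    wcps_query = ('for c in (AvgLandTemp) '
                  'return avg(c[ansi("2014-07-01":"2014-07-31")])')

    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps_query,
    }
    url = ENDPOINT + "?" + urlencode(params)
    ```

    Shipping the computation to the datacube is what allows single queries to be split across many cloud nodes, as the abstract reports.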

  7. Factors Associated with Recent Suicide Attempts in Clients Presenting for Addiction Treatment

    ERIC Educational Resources Information Center

    Penney, Alexander; Mazmanian, Dwight; Jamieson, John; Black, Nancy

    2012-01-01

    Factors associated with recent suicide attempts were examined in clients who sought treatment at an addictions facility between 2001 and 2008. Clients who reported being hospitalized for attempting suicide in the past year (n = 76) were compared to all other clients (n = 5914) on demographic, mental health, substance use, and problem gambling…

  8. LabKey Server: an open source platform for scientific data integration, analysis and collaboration.

    PubMed

    Nelson, Elizabeth K; Piehler, Britt; Eckels, Josh; Rauch, Adam; Bellew, Matthew; Hussey, Peter; Ramsay, Sarah; Nathe, Cory; Lum, Karl; Krouse, Kevin; Stearns, David; Connolly, Brian; Skillman, Tom; Igra, Mark

    2011-03-09

    Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimens requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks

  9. LabKey Server: An open source platform for scientific data integration, analysis and collaboration

    PubMed Central

    2011-01-01

    Background Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. Results To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimens requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350

  10. Medicaid care management: description of high-cost addictions treatment clients.

    PubMed

    Neighbors, Charles J; Sun, Yi; Yerneni, Rajeev; Tesiny, Ed; Burke, Constance; Bardsley, Leland; McDonald, Rebecca; Morgenstern, Jon

    2013-09-01

    High utilizers of alcohol and other drug treatment (AODTx) services are a priority for healthcare cost control. We examine characteristics of Medicaid-funded AODTx clients, comparing three groups: individuals <90th percentile of AODTx expenditures (n=41,054); high-cost clients in the top decile of AODTx expenditures (HC; n=5,718); and 1760 enrollees in a chronic care management (CM) program for HC clients implemented in 22 counties in New York State. Medicaid and state AODTx registry databases were combined to draw demographic, clinical, social needs and treatment history data. HC clients accounted for 49% of AODTx costs funded by Medicaid. As expected, HC clients had significant social welfare needs, comorbid medical and psychiatric conditions, and use of inpatient services. The CM program was successful in enrolling some high-needs, high-cost clients but faced barriers to reaching the most costly and disengaged individuals. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Identifying Students for Secondary and Tertiary Prevention Efforts: How Do We Determine Which Students Have Tier 2 and Tier 3 Needs?

    ERIC Educational Resources Information Center

    Lane, Kathleen Lynne; Oakes, Wendy Peia; Ennis, Robin Parks; Hirsch, Shanna Eisner

    2014-01-01

    In comprehensive, integrated, three-tiered models, it is essential to have a systematic method for identifying students who need supports at Tier 2 or Tier 3. This article provides explicit information on how to use multiple sources of data to determine which students might benefit from these supports. First, the authors provide an overview of how…

  12. Storage assignment optimization in a multi-tier shuttle warehousing system

    NASA Astrophysics Data System (ADS)

    Wang, Yanyan; Mou, Shandong; Wu, Yaohua

    2016-03-01

    The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models in conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS) because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of TSP. In this study, a two-stage open queuing network model in which shuttles and a lift are regarded as servers at different stages is proposed to analyze system performance in terms of shuttle waiting period (SWP) and lift idle period (LIP) during transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. The ant colony clustering algorithm is designed to determine storage partitions using clustering items. In addition, goods are assigned for storage according to the rearranging permutation and the combination of storage partitions in a 2D plane. This combination is derived based on the analysis results of the queuing network model and on three basic principles. The storage assignment method and its optimization algorithm, as applied in a MSWS, are verified through a practical engineering project conducted in the tobacco industry. The results show that the total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center.
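    The two-stage shuttle-and-lift pipeline above can be pictured, under strong simplifying assumptions the paper does not itself make (Poisson arrivals, exponential service, independent stages), as two M/M/1 queues in tandem. The sketch below is purely illustrative; all rates are invented.

```python
# Illustrative two-stage tandem queue: tier shuttles (stage 1) feed a lift
# (stage 2). Assumes M/M/1 behaviour at each stage -- a simplification of
# the paper's open queuing network, for intuition only.

def mm1_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (lambda >= mu)")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical rates (totes per minute).
lam = 4.0          # retrieval requests arriving per minute
mu_shuttle = 6.0   # a shuttle completes 6 retrievals per minute
mu_lift = 10.0     # the lift moves 10 totes per minute

# In a stable tandem network, stage 1's departure rate equals lam at stage 2.
shuttle_time = mm1_time_in_system(lam, mu_shuttle)  # rough SWP proxy
lift_time = mm1_time_in_system(lam, mu_lift)
cycle_time = shuttle_time + lift_time               # transaction-cycle proxy

print(round(shuttle_time, 3), round(lift_time, 3), round(cycle_time, 3))
```

    Storage assignment then amounts to choosing partitions that lower the effective arrival rate seen by each stage, which is what the SKU-correlation matrix in the paper drives.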

  13. MyDas, an Extensible Java DAS Server

    PubMed Central

    Jimenez, Rafael C.; Quinn, Antony F.; Jenkinson, Andrew M.; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details. PMID:23028496

  14. MyDas, an extensible Java DAS server.

    PubMed

    Salazar, Gustavo A; García, Leyla J; Jones, Philip; Jimenez, Rafael C; Quinn, Antony F; Jenkinson, Andrew M; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning

    2012-01-01

    A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.
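    The DAS protocol that MyDas implements is plain HTTP plus XML: a client requests features on a segment and parses the reply. The sketch below parses a hand-written, heavily simplified response fragment; the element names follow the DAS 1.53 DASGFF convention, but the identifiers and coordinates are invented.

```python
# Minimal DAS client-side parsing sketch. The XML is a hand-written,
# simplified DASGFF fragment (invented values); real servers return much
# richer documents, but FEATURE/START/END follow the DAS 1.53 spec.
import xml.etree.ElementTree as ET

response = """<DASGFF>
  <GFF href="http://example.org/das/mysource/features">
    <SEGMENT id="P12345" start="1" stop="300">
      <FEATURE id="feat1" label="domain A">
        <TYPE id="domain">domain</TYPE>
        <START>10</START>
        <END>120</END>
      </FEATURE>
    </SEGMENT>
  </GFF>
</DASGFF>"""

root = ET.fromstring(response)
features = [
    (f.get("id"), int(f.findtext("START")), int(f.findtext("END")))
    for f in root.iter("FEATURE")
]
print(features)
```

    Because every DAS source speaks this same shape, a client can aggregate annotations from many servers with one parser, which is the centralised-access point the abstract makes.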

  15. A network of web multimedia medical information servers for a medical school and university hospital.

    PubMed

    Denier, P; Le Beux, P; Delamarre, D; Fresnel, A; Cleret, M; Courtin, C; Seka, L P; Pouliquen, B; Cleran, L; Riou, C; Burgun, A; Jarno, P; Leduff, F; Lesaux, H; Duvauferrier, R

    1997-08-01

    Modern medicine requires rapid access to information including clinical data from medical records, bibliographic databases, knowledge bases and nomenclature databases. This is especially true for University Hospitals and Medical Schools, for training as well as for fundamental and clinical research for diagnosis and therapeutic purposes. This implies the development of local, national and international cooperation, which can be enhanced via the use of and access to computer networks such as the Internet. The development of professional cooperative networks goes hand in hand with the development of telecommunication and computer networks, and our project is to make these new tools and technologies accessible to medical students both during teaching time in the Medical School and during training periods at the University Hospital. We have developed a local area network which connects the School of Medicine and the Hospital and takes advantage of the new Web client-server technology both internally (Intranet) and externally by access to the National Research Network (RENATER in France) connected to the Internet. The address of our public web server is http://www.med.univ-rennes1.fr.

  16. 20 CFR 226.11 - Employee tier II.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    .... The tier II of an employee annuity is based only on railroad service. For annuities awarded after... of the tier II benefit for age. The result cannot be less than zero. (c) If the railroad retirement...

  17. N2 production and fixation in deep-tier burrows of Squilla empusa in muddy sediments of Great Peconic Bay

    NASA Astrophysics Data System (ADS)

    Waugh, Stuart; Aller, Robert C.

    2017-11-01

    Global marine N budgets often show deficits due to dominance of benthic N2 production relative to pelagic N2 fixation. Recent studies have argued that benthic N2 fixation in shallow water environments has been underestimated. In particular, N2 fixation associated with animal burrows may be significant as indicated by high rates of N2 fixation reported in muddy sands populated by the ghost shrimp, Neotrypaea californiensis (Bertics et al., 2010). We investigated whether N2 fixation occurs at higher rates in the burrow-walls of the deep-burrowing ( 0.5-4 m) mantis shrimp, Squilla empusa, compared to ambient, estuarine muds and measured seasonal in-situ N2 concentrations in burrow-water relative to bottom-water. Acetylene reduction assays showed lower N2 fixation in burrow-walls than in un-populated sediments, likely due to inhibitory effects of O2 on ethylene production. Dissolved N2 was higher in burrow-water than proximate bottom-water at all seasons, demonstrating a consistent balance of net N2 production relative to fixation in deep-tier biogenic structures.

  18. Xerostomia among older home care clients.

    PubMed

    Viljakainen, Sari; Nykänen, Irma; Ahonen, Riitta; Komulainen, Kaija; Suominen, Anna Liisa; Hartikainen, Sirpa; Tiihonen, Miia

    2016-06-01

    The purpose of this study was to examine drug use and other factors associated with xerostomia in home care clients aged 75 years or older. The study sample included 270 home care clients aged ≥75 years living in Eastern and Central Finland. The home care clients underwent in-home interviews carried out by trained home care nurses, nutritionists, dental hygienists and pharmacists. The collected data contained information on sociodemographic factors, health and oral health status, drug use, depressive symptoms (GDS-15), cognitive functioning (MMSE), functional ability (Barthel Index, IADL) and nutrition (MNA). The primary outcome was xerostomia status (never, occasionally or continuously). Among the home care clients, 56% (n = 150) suffered from xerostomia. Persons with continuous xerostomia used more drugs and had more depressive symptoms and a higher number of comorbidities than other home care clients. In multivariate analyses, excessive polypharmacy (OR = 1.83, 95% CI 1.08-3.10) and depressive symptoms (OR = 1.12, 95% CI 1.03-1.22) were associated with xerostomia. Xerostomia is a common problem among older home care clients. Excessive polypharmacy, use of particular drug groups and depressive symptoms were associated with xerostomia. The findings support the importance of a multidisciplinary approach in the care of older home care clients. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    PubMed Central

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Doug; Fuchs, Lynn S.; Bouton, Bobette

    2013-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small group tutoring in a response-to-intervention model. First-grade students (n=134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of 3 sets of variables: static decoding measures, Tier 1 responsiveness indicators, and pre-reading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% – 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. PMID:23213050

  20. Examining the predictive validity of a dynamic assessment of decoding to forecast response to tier 2 intervention.

    PubMed

    Cho, Eunsoo; Compton, Donald L; Fuchs, Douglas; Fuchs, Lynn S; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. © Hammill Institute on Disabilities 2012.
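    The growth-curve analyses behind both versions of this study rest on fitting each student's weekly WIF scores to a line, where the slope is weekly growth and the value at the last week approximates final level. The sketch below fits ordinary least squares to one invented score series; the authors actually fit multilevel individual growth curve models, which this deliberately ignores.

```python
# Per-student linear growth sketch: slope ~ weekly WIF growth, and the
# fitted value at the last week ~ final level of performance. Data are
# invented; the paper used multilevel growth-curve models, not per-student OLS.

def ols_line(xs, ys):
    """Return (intercept, slope) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

weeks = list(range(14))                  # 14 weeks of Tier 2 tutoring
wif = [8 + 1.5 * w for w in weeks]       # invented, perfectly linear scores

intercept, slope = ols_line(weeks, wif)
final_level = intercept + slope * weeks[-1]
print(slope, final_level)
```

    A predictor such as the dynamic assessment then enters the model as a covariate explaining between-student variation in these slopes and final levels.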

  1. San Mateo County's Server Information Program (S.I.P.): A Community-Based Alcohol Server Training Program.

    ERIC Educational Resources Information Center

    de Miranda, John

    The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…

  2. Remote Adaptive Communication System

    DTIC Science & Technology

    2001-10-25

    manage several different devices using the software tool A. Client/Server Architecture The architecture we are proposing is based on the Client/Server model (see figure 3). We want both client and server to be accessible from anywhere via internet. The computer, acting as a server, is in...the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device

  3. Effects of Counselor Gender and Gender-Role Orientation on Client Career Choice Traditionality.

    ERIC Educational Resources Information Center

    Barak, Azy; And Others

    1988-01-01

    Male (N=120) and female (N=120) clients were counseled by male or female counselor classified as masculine, feminine, or androgynous in sex-role orientation. Clients' career choice traditionality was measured during counseling, following counseling, and with respect to clients' career six months later. Counselor gender and gender-role orientation…

  4. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers increases rapidly along with the wide application of Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, including voluntary label programs or mandatory energy performance standards, have been adopted or are being prepared in the US, EU and China. However, the energy performance of servers and testing methods for servers are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various components of servers on their energy performance are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the energy performance testing methods of servers. The findings of the tests are discussed in the paper.
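    One common way to express server energy performance, not necessarily the exact metrics this paper proposes (the abstract does not detail them), is throughput per watt at several normalized load points, summarized as a geometric mean. All readings below are invented.

```python
# Generic server energy-performance sketch: operations-per-watt at several
# load points, then a single geometric-mean score. The readings are
# hypothetical and this is NOT the paper's actual test procedure.
import math

# (workload level, operations/sec, measured watts) -- invented readings
readings = [
    ("idle", 0.0, 120.0),      # idle draws power but does no work
    ("50%", 50_000.0, 180.0),
    ("100%", 100_000.0, 250.0),
]

perf_per_watt = {name: ops / watts for name, ops, watts in readings if ops > 0}
score = math.exp(
    sum(math.log(v) for v in perf_per_watt.values()) / len(perf_per_watt)
)
print(perf_per_watt, round(score, 1))
```

    The geometric mean keeps any single load point from dominating the score, which is why efficiency benchmarks commonly prefer it over the arithmetic mean.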

  5. Hybrid Rendering with Scheduling under Uncertainty

    PubMed Central

    Tamm, Georg; Krüger, Jens

    2014-01-01

    As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is put at the whim of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system. PMID:25309115
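    The placement decision this paper describes can be caricatured as: render each frame where the expected time-to-image is smaller, using timing estimates gathered at runtime. The sketch below uses simple exponential smoothing over invented measurements; the paper's scheduler is probabilistic and additionally models performance variability, which this omits.

```python
# Hybrid-rendering placement sketch: keep exponentially smoothed estimates of
# client render time versus server render + transfer time, and assign the
# next frame to whichever side should deliver the image sooner.
# All timings are invented.

class Estimate:
    """Exponentially smoothed timing estimate (milliseconds)."""

    def __init__(self, initial: float, alpha: float = 0.3):
        self.value = initial
        self.alpha = alpha

    def update(self, sample: float) -> None:
        self.value = (1 - self.alpha) * self.value + self.alpha * sample

client_render = Estimate(80.0)   # client GPU is slower per frame
server_render = Estimate(20.0)   # server renders fast...
transfer = Estimate(90.0)        # ...but the network is currently slow

def place_next_frame() -> str:
    server_total = server_render.value + transfer.value
    return "client" if client_render.value <= server_total else "server"

first = place_next_frame()       # 80 ms vs 20 + 90 ms -> render on client
transfer.update(10.0)            # network conditions improve
transfer.update(10.0)
second = place_next_frame()      # 80 ms vs 20 + ~49 ms -> shift to server
print(first, second)
```

    Shifting workload as the estimates drift is exactly the adaptation to changing server load, bandwidth and latency that the abstract motivates.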

  6. Proposed Tier 2 Screening Criteria and Tier 3 Field Procedures for Evaluation of Vapor Intrusion (ESTCP Cost and Performance Report)

    DTIC Science & Technology

    2012-08-01

    Interstate Technology & Regulatory Council, Washington, DC, Copyright 2007. McHugh T.E., D.E. Hammond, T. Nickels, and B. Hartman. 2008. Use of...based corrective action have realized significant cost savings for their corrective action programs (Connor and McHugh, 2002). As described above... Sampling summary: Groundwater (Tier 2): VOCs, USEPA 8260B, 40 mL VOA vial, HCl, 14 days; Vapor (Tier 2 and Tier 3): Radon, McHugh et al., 2008, 500 mL Tedlar bag, None, 14

  7. 20 CFR 228.2 - Tier I and tier II annuity components.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Social Security Act if all of the employee's earnings after 1936 under both the railroad retirement system and the social security system had been creditable under the Social Security Act. (b) Tier II...

  8. Java-based cryptosystem for PACS and tele-imaging

    NASA Astrophysics Data System (ADS)

    Tjandra, Donny; Wong, Stephen T. C.; Yu, Yuan-Pin

    1998-07-01

    Traditional PACS systems are based on two-tier client server architectures, and require the use of costly, high-end client workstations for image viewing. Consequently, PACS systems using the two-tier architecture do not scale well as data increases in size and complexity. Furthermore, use of dedicated viewing workstations incurs costs in deployment and maintenance. To address these issues, the use of digital library technologies, such as the World Wide Web, Java, and CORBA, is being explored to distribute PACS data to serve a broader range of healthcare providers in an economic and efficient manner. Integration of PACS systems with digital library technologies allows access to medical information through open networks such as the Internet. However, use of open networks to transmit medical data introduces problems with maintaining privacy and integrity of patient information. Cryptography and digital timestamping are used to protect sensitive information from unauthorized access or tampering. A major concern when using cryptography and digital timestamping is the performance degradation associated with the mathematical calculations needed to encrypt/decrypt an image dataset, or to calculate the hash value of an image. The performance issue is compounded by the extra layer associated with the CORBA middleware, and the use of programming languages interpreted at the client side, such as Java. This paper studies the extent to which Java-based cryptography and digital timestamping affect performance in a PACS system integrated with digital library technologies.
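    The cost the abstract worries about, hashing or encrypting an entire image dataset, is easy to make concrete by timing a cryptographic hash over an image-sized buffer. Python's hashlib here stands in for the paper's Java implementation; the buffer size and any timing observed are illustrative, not measurements from a PACS.

```python
# Rough illustration of integrity-check cost: SHA-256 over a synthetic
# "image" buffer. Stands in for the paper's Java-based hashing; the size
# is a typical 16-bit radiology matrix, not data from a real PACS.
import hashlib
import time

image = bytes(512 * 512 * 2)    # 512x512 pixels, 16 bits each -> 512 KiB

start = time.perf_counter()
digest = hashlib.sha256(image).hexdigest()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"hashed {len(image)} bytes in {elapsed_ms:.2f} ms, digest {digest[:16]}...")
```

    Scaling this over a multi-slice study, plus encryption and middleware marshalling, is where the cumulative overhead the paper measures comes from.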

  9. User-Friendly Data Servers for Climate Studies at the Asia-Pacific Data-Research Center (APDRC)

    NASA Astrophysics Data System (ADS)

    Yuan, G.; Shen, Y.; Zhang, Y.; Merrill, R.; Waseda, T.; Mitsudera, H.; Hacker, P.

    2002-12-01

    The APDRC was recently established within the International Pacific Research Center (IPRC) at the University of Hawaii. The APDRC mission is to increase understanding of climate variability in the Asia-Pacific region by developing the computational, data-management, and networking infrastructure necessary to make data resources readily accessible and usable by researchers, and by undertaking data-intensive research activities that will both advance knowledge and lead to improvements in data preparation and data products. A focus of recent activity is the implementation of user-friendly data servers. The APDRC is currently running a Live Access Server (LAS) developed at NOAA/PMEL to provide access to and visualization of gridded climate products via the web. The LAS also allows users to download the selected data subsets in various formats (such as binary, netCDF and ASCII). Most of the datasets served by the LAS are also served through our OPeNDAP server (formerly DODS), which allows users to directly access the data using their desktop client tools (e.g. GrADS, Matlab and Ferret). In addition, the APDRC is running an OPeNDAP Catalog/Aggregation Server (CAS) developed by Unidata at UCAR to serve climate data and products such as model output and satellite-derived products. These products are often large (> 2 GB) and are therefore stored as multiple files (stored separately in time or in parameters). The CAS remedies the inconvenience of multiple files and allows access to the whole dataset (or any subset that cuts across the multiple files) via a single request command from any DODS enabled client software. Once the aggregation of files is configured at the server (CAS), the process of aggregation is transparent to the user. The user only needs to know a single URL for the entire dataset, which is, in fact, stored as multiple files. CAS even allows aggregation of files on different systems and at different locations. Currently, the APDRC is serving NCEP, ECMWF
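    An OPeNDAP client addresses a subset by appending a constraint expression to the dataset URL, which is how desktop tools like GrADS or Ferret pull a slice without downloading the whole file. The sketch below builds such a URL; the server path and variable name are hypothetical, but the bracketed `start:stride:stop` syntax is the standard OPeNDAP/DODS form.

```python
# Build an OPeNDAP constraint-expression URL for a hyperslab subset.
# The host, dataset path, and variable name are invented; the
# [start:stride:stop] syntax follows the OPeNDAP/DODS convention.

def opendap_subset_url(base, var, *dims):
    """dims are (start, stride, stop) index triples, one per dimension."""
    hyperslab = "".join(f"[{a}:{b}:{c}]" for a, b, c in dims)
    return f"{base}.dods?{var}{hyperslab}"

url = opendap_subset_url(
    "http://apdrc.example.edu/dods/ncep_reanalysis",  # hypothetical dataset
    "sst",
    (0, 1, 11),    # time: first 12 steps
    (10, 1, 20),   # latitude indices 10..20
    (30, 2, 60),   # longitude indices 30..60, every other point
)
print(url)
```

    An aggregation server like the CAS described above makes this work even when the dataset is physically stored as many files: the client still issues one such URL against one logical dataset.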

  10. 38 CFR 36.4318 - Servicer tier ranking-temporary procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Servicer tier ranking... § 36.4318 Servicer tier ranking—temporary procedures. (a) The Secretary shall assign to each servicer a “Tier Ranking” based upon the servicer's performance in servicing guaranteed loans. There shall be four...

  11. 47 CFR 76.1618 - Basic tier availability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Basic tier availability. 76.1618 Section 76.1618 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1618 Basic tier availability. A cable operator...

  12. 47 CFR 76.1618 - Basic tier availability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Basic tier availability. 76.1618 Section 76.1618 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1618 Basic tier availability. A cable operator...

  13. 47 CFR 76.1618 - Basic tier availability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Basic tier availability. 76.1618 Section 76.1618 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1618 Basic tier availability. A cable operator...

  14. 47 CFR 76.1618 - Basic tier availability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Basic tier availability. 76.1618 Section 76.1618 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1618 Basic tier availability. A cable operator...

  15. 47 CFR 76.1618 - Basic tier availability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Basic tier availability. 76.1618 Section 76.1618 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1618 Basic tier availability. A cable operator...

  16. National Medical Terminology Server in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to meet the need for quality terminology systems in local primary through tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  17. Dali server update.

    PubMed

    Holm, Liisa; Laakso, Laura M

    2016-07-08

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. R5 clade C SHIV strains with tier 1 or 2 neutralization sensitivity: tools to dissect env evolution and to develop AIDS vaccines in primate models.

    PubMed

    Siddappa, Nagadenahalli B; Watkins, Jennifer D; Wassermann, Klemens J; Song, Ruijiang; Wang, Wendy; Kramer, Victor G; Lakhashe, Samir; Santosuosso, Michael; Poznansky, Mark C; Novembre, Francis J; Villinger, François; Else, James G; Montefiori, David C; Rasmussen, Robert A; Ruprecht, Ruth M

    2010-07-21

    HIV-1 clade C (HIV-C) predominates worldwide, and anti-HIV-C vaccines are urgently needed. Neutralizing antibody (nAb) responses are considered important but have proved difficult to elicit. Although some current immunogens elicit antibodies that neutralize highly neutralization-sensitive (tier 1) HIV strains, most circulating HIVs exhibiting a less sensitive (tier 2) phenotype are not neutralized. Thus, both tier 1 and 2 viruses are needed for vaccine discovery in nonhuman primate models. We constructed a tier 1 simian-human immunodeficiency virus, SHIV-1157ipEL, by inserting an "early," recently transmitted HIV-C env into the SHIV-1157ipd3N4 backbone [1] encoding a "late" form of the same env, which had evolved in a SHIV-infected rhesus monkey (RM) with AIDS. SHIV-1157ipEL was rapidly passaged to yield SHIV-1157ipEL-p, which remained exclusively R5-tropic and had a tier 1 phenotype, in contrast to "late" SHIV-1157ipd3N4 (tier 2). After 5 weekly low-dose intrarectal exposures, SHIV-1157ipEL-p systemically infected 16 out of 17 RM with high peak viral RNA loads and depleted gut CD4+ T cells. SHIV-1157ipEL-p and SHIV-1157ipd3N4 env genes diverge mostly in V1/V2. Molecular modeling revealed a possible mechanism for the increased neutralization resistance of SHIV-1157ipd3N4 Env: V2 loops hindering access to the CD4 binding site, shown experimentally with nAb b12. Similar mutations have been linked to decreased neutralization sensitivity in HIV-C strains isolated from humans over time, indicating parallel HIV-C Env evolution in humans and RM. SHIV-1157ipEL-p, the first tier 1 R5 clade C SHIV, and SHIV-1157ipd3N4, its tier 2 counterpart, represent biologically relevant tools for anti-HIV-C vaccine development in primates.

  19. The D3 Middleware Architecture

    NASA Technical Reports Server (NTRS)

    Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang

    2002-01-01

    DARWIN is a NASA developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing not only sharing visualizations of data, but also commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: A Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans - we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server-side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user

  20. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not yet widely available, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRL's) for PEM applications. PEM applications contact the server to obtain/store certificates and CRL's. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a.' The X.500 DAP library that is included with the ELROS distribution must also be linked in, since the server uses the DAP library functions to communicate with directory servers.
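
    The obtain/store semantics of such a cache can be sketched in a few lines. This is a minimal illustration with a hypothetical interface, not the actual server, which is written in C with ELROS and speaks X.500 DAP to directory servers.

    ```python
    # Minimal sketch of a certificate/CRL cache in the spirit of the server
    # described above (hypothetical interface): clients store and obtain
    # certificates, and a cached CRL causes revoked certificates to be refused.
    class CertificateCache:
        def __init__(self):
            self._certs = {}   # subject name -> certificate bytes
            self._crls = {}    # issuer name  -> set of revoked serial numbers

        def store_cert(self, subject, cert):
            self._certs[subject] = cert

        def store_crl(self, issuer, revoked_serials):
            self._crls[issuer] = set(revoked_serials)

        def obtain_cert(self, subject, issuer=None, serial=None):
            """Return a cached certificate, refusing ones on a cached CRL."""
            cert = self._certs.get(subject)
            if cert is None:
                return None  # caller would fall back to a directory lookup
            if issuer is not None and serial in self._crls.get(issuer, set()):
                return None  # listed on the issuer's revocation list
            return cert

    cache = CertificateCache()
    cache.store_cert("cn=alice", b"...alice-cert...")
    cache.store_crl("cn=ca", {1001})
    print(cache.obtain_cert("cn=alice", issuer="cn=ca", serial=1001))  # None: revoked
    ```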

  1. Cleavage-Independent HIV-1 Trimers From CHO Cell Lines Elicit Robust Autologous Tier 2 Neutralizing Antibodies

    PubMed Central

    Bale, Shridhar; Martiné, Alexandra; Wilson, Richard; Behrens, Anna-Janina; Le Fourn, Valérie; de Val, Natalia; Sharma, Shailendra K.; Tran, Karen; Torres, Jonathan L.; Girod, Pierre-Alain; Ward, Andrew B.; Crispin, Max; Wyatt, Richard T.

    2018-01-01

    Native flexibly linked (NFL) HIV-1 envelope glycoprotein (Env) trimers are cleavage-independent and display a native-like, well-folded conformation that preferentially displays broadly neutralizing determinants. The NFL platform simplifies large-scale production of Env by eliminating the need to co-transfect the precursor-cleaving protease furin, which is required by the cleavage-dependent SOSIP trimers. Here, we report the development of a CHO-M cell line that expressed BG505 NFL trimers at a high level of homogeneity and yields of ~1.8 g/l. BG505 NFL trimers purified by single-step lectin-affinity chromatography displayed a native-like closed structure, efficient recognition by trimer-preferring bNAbs, no recognition by non-neutralizing CD4 binding site-directed and V3-directed antibodies, long-term stability, and proper N-glycan processing. Following negative selection, formulation in ISCOMATRIX adjuvant and inoculation into rabbits, the trimers rapidly elicited potent autologous tier 2 neutralizing antibodies. These antibodies targeted the N-glycan “hole” naturally present on the BG505 Env proximal to residues at positions 230, 241, and 289. The BG505 NFL trimers that did not expose V3 in vitro elicited low-to-no tier 1 virus neutralization in vivo, indicating that they remained intact during the immunization process, not exposing V3. In addition, BG505 NFL and BG505 SOSIP trimers expressed from 293F cells, when formulated in Adjuplex adjuvant, elicited equivalent BG505 tier 2 autologous neutralizing titers. These titers were lower in potency than the titers elicited by CHO-M cell-derived trimers. In addition, increased neutralization of tier 1 viruses was detected. Taken together, these data indicate that both adjuvant and expressing cell type can affect the elicitation of tier 2 and tier 1 neutralizing responses in vivo.

  2. Just healthcare? The moral failure of single-tier basic healthcare.

    PubMed

    Meadowcroft, John

    2015-04-01

    This article sets out the moral failure of single-tier basic healthcare. Single-tier basic healthcare has been advocated on the grounds that the provision of healthcare should be divorced from ability to pay and unequal access to basic healthcare is morally intolerable. However, single-tier basic healthcare encounters a host of catastrophic moral failings. Given the fact of human pluralism it is impossible to objectively define "basic" healthcare. Attempts to provide single-tier healthcare therefore become political processes in which interest groups compete for control of scarce resources with the most privileged possessing an inherent advantage. The focus on outputs in arguments for single-tier provision neglects the question of justice between individuals when some people provide resources for others without reciprocal benefits. The principle that only healthcare that can be provided to everyone should be provided at all leads to a leveling-down problem in which advocates of single-tier provision must prefer a situation where some individuals are made worse-off without any individual being made better-off compared to plausible multi-tier alternatives. Contemporary single-tier systems require the exclusion of noncitizens, meaning that their universalism is a myth. In the light of these pathologies, it is judged that multi-tier healthcare is morally required. © The Author 2015. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Status and Trends in Networking at LHC Tier1 Facilities

    NASA Astrophysics Data System (ADS)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, and extrapolate where we anticipate networking technology is heading. Our analysis includes examination of the following areas:
    • Evolution of Tier1 centers to their current state
    • Evolving data center networking models and how they apply to Tier1 centers
    • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers
    • Trends in WAN data movement and emergence of software-defined WAN network capabilities
    • Network virtualization

  4. Effects of Tier 3 Intervention for Students With Persistent Reading Difficulties and Characteristics of Inadequate Responders

    PubMed Central

    Denton, Carolyn A.; Tolar, Tammy D.; Fletcher, Jack M.; Barth, Amy E.; Vaughn, Sharon; Francis, David J.

    2013-01-01

    This article describes a randomized controlled trial conducted to evaluate the effects of an intensive, individualized, Tier 3 reading intervention for second grade students who had previously experienced inadequate response to quality first grade classroom reading instruction (Tier 1) and supplemental small-group intervention (Tier 2). Also evaluated were cognitive characteristics of students with inadequate response to intensive Tier 3 intervention. Students were randomized to receive the research intervention (N = 47) or the instruction and intervention typically provided in their schools (N = 25). Results indicated that students who received the research intervention made significantly better growth than those who received typical school instruction on measures of word identification, phonemic decoding, and word reading fluency and on a measure of sentence- and paragraph-level reading comprehension. Treatment effects were smaller and not statistically significant on phonemic decoding efficiency, text reading fluency, and reading comprehension in extended text. Effect sizes for all outcomes except oral reading fluency met criteria for substantive importance; however, many of the students in the intervention continued to struggle. An evaluation of cognitive profiles of adequate and inadequate responders was consistent with a continuum of severity (as opposed to qualitative differences), showing greater language and reading impairment prior to the intervention in students who were inadequate responders. PMID:25308995

  5. Effects of Tier 3 Intervention for Students With Persistent Reading Difficulties and Characteristics of Inadequate Responders.

    PubMed

    Denton, Carolyn A; Tolar, Tammy D; Fletcher, Jack M; Barth, Amy E; Vaughn, Sharon; Francis, David J

    2013-08-01

    This article describes a randomized controlled trial conducted to evaluate the effects of an intensive, individualized, Tier 3 reading intervention for second grade students who had previously experienced inadequate response to quality first grade classroom reading instruction (Tier 1) and supplemental small-group intervention (Tier 2). Also evaluated were cognitive characteristics of students with inadequate response to intensive Tier 3 intervention. Students were randomized to receive the research intervention (N = 47) or the instruction and intervention typically provided in their schools (N = 25). Results indicated that students who received the research intervention made significantly better growth than those who received typical school instruction on measures of word identification, phonemic decoding, and word reading fluency and on a measure of sentence- and paragraph-level reading comprehension. Treatment effects were smaller and not statistically significant on phonemic decoding efficiency, text reading fluency, and reading comprehension in extended text. Effect sizes for all outcomes except oral reading fluency met criteria for substantive importance; however, many of the students in the intervention continued to struggle. An evaluation of cognitive profiles of adequate and inadequate responders was consistent with a continuum of severity (as opposed to qualitative differences), showing greater language and reading impairment prior to the intervention in students who were inadequate responders.

  6. Evaluation of a liaison librarian program: client and liaison perspectives.

    PubMed

    Tennant, Michele R; Cataldo, Tara Tobin; Sherwill-Navarro, Pamela; Jesano, Rae

    2006-10-01

    This paper describes a survey-based evaluation of the five-year-old Liaison Librarian Program at the University of Florida. Liaison librarians, faculty, students, staff, residents, and post-doctoral associates were queried via Web-based surveys. Questions addressed client and liaison perspectives on a variety of issues, including program and service awareness and usage, client-library relations and communication, client support for the program, and liaison workload. Approximately 43% of the 323 client respondents were aware of liaison services; 72% (n = 163) of these clients had had contact with their liaison. Ninety-five percent (n = 101) of faculty and students who reported contact with their liaison supported the continuation of the program. Liaison services were used by a greater percentage of faculty than students, although the two groups had similar patterns of usage and reported the same "traditional" services to be most important. Liaisons indicated that communications with clients had increased, the reputation of the library was enhanced, and their workloads had increased as a result of the Liaison Librarian Program. Survey results suggest that the Liaison Librarian Program has a core set of clients who use and highly value the services provided by liaisons. Recommendations addressing workload, training, marketing, and administrative support are provided.

  7. 26 CFR 1.444-4 - Tiered structure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Tiered structure. 1.444-4 Section 1.444-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Accounting Periods § 1.444-4 Tiered structure. (a) Electing small business trusts. For...

  8. 47 CFR 76.1605 - New product tier.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false New product tier. 76.1605 Section 76.1605 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1605 New product tier. (a) Within 30 days of the offering of an...

  9. Expanding clarity or confusion? Volatility of the 5-tier ratings assessing quality of transplant centers in the United States.

    PubMed

    Schold, Jesse D; Andreoni, Kenneth A; Chandraker, Anil K; Gaston, Robert S; Locke, Jayme E; Mathur, Amit K; Pruett, Timothy L; Rana, Abbas; Ratner, Lloyd E; Buccini, Laura D

    2018-06-01

    Outcomes of patients receiving solid organ transplants in the United States are systematically aggregated into bi-annual Program-Specific Reports (PSRs) detailing risk-adjusted survival by transplant center. Recently, the Scientific Registry of Transplant Recipients (SRTR) issued 5-tier ratings evaluating centers based on risk-adjusted 1-year graft survival. Our primary aim was to examine the reliability of 5-tier ratings over time. Using 10 consecutive PSRs for adult kidney transplant centers from June 2012 to December 2016 (n = 208), we applied 5-tier ratings to center outcomes and evaluated ratings over time. From the baseline period (June 2012), 47% of centers had at least a 1-unit tier change within 6 months, 66% by 1 year, and 94% by 3 years. Similarly, 46% of centers had at least a 2-unit tier change by 3 years. In comparison, 15% of centers had a change in the traditional 3-tier rating at 3 years. The 5-tier ratings at 4 years had minimal association with baseline rating (Kappa 0.07, 95% confidence interval [CI] -0.002 to 0.158). Centers had a median of 3 different 5-tier ratings over the period (q1 = 2, q3 = 4). Findings were consistent for center volume, transplant rate, and baseline 5-tier rating. Cumulatively, results suggest that 5-tier ratings are highly volatile, limiting their utility for informing potential stakeholders, particularly transplant candidates given expected waiting times between wait listing and transplantation. © 2018 The American Society of Transplantation and the American Society of Transplant Surgeons.

  10. Planning for complementarity : an examination of the role and opportunities of first-tier and second-tier cities along the high-speed rail network in California.

    DOT National Transportation Integrated Search

    2012-03-01

    The coming of California High-Speed Rail (HSR) offers opportunities for positive urban transformations in both first-tier and second-tier cities. The research in this report explores the different but complementary roles that first-tier and second-ti...

  11. Tiers of intervention in kindergarten through third grade.

    PubMed

    O'Connor, Rollanda E; Harty, Kristin R; Fulmer, Deborah

    2005-01-01

    This study measured the effects of increasing levels of intervention in reading for a cohort of children in Grades K through 3 to determine whether the severity of reading disability (RD) could be significantly reduced in the catchment schools. Tier 1 consisted of professional development for teachers of reading. The focus of this study is on additional instruction that was provided as early as kindergarten for children whose achievement fell below average. Tier 2 intervention consisted of small-group reading instruction 3 times per week, and Tier 3 of daily instruction delivered individually or in groups of two. A comparison of the reading achievement of third-grade children who were at risk in kindergarten showed moderate to large differences favoring children in the tiered interventions in decoding, word identification, fluency, and reading comprehension.

  12. Development of two measures of client engagement for use in home aged care.

    PubMed

    Baker, Jess Rose; Harrison, Fleur; Low, Lee-Fay

    2016-05-01

    The aim of the study was to develop and validate measures of client engagement in aged homecare. The Homecare Measure of Engagement-Staff questionnaire (HoME-S) is a self-complete measure of six dimensions of client engagement: client acceptance, attention, attitude, appropriateness, engagement duration and passivity. The Homecare Measure of Engagement-Client/Family report (HoME-CF) is a researcher-rated interview which obtains client and/or family perspectives regarding frequency and valence of conversational and recreational engagement during care worker visits. Care workers (n = 84) completed the HoME-S and a measure of relationship bond with client. Researchers interviewed clients (n = 164) and/or their family (n = 117) and completed the HoME-CF, and measures of agitation, dysphoria, apathy and cognitive functioning. The HoME-S and HoME-CF demonstrated good test-retest and inter-rater reliability, and showed significant negative correlations with apathy, agitation and non-English-speaking background. Controlling for client and care service characteristics, a stronger care worker-client relationship bond and English-speaking background were independently associated with higher HoME-S scores, and apathy was independently associated with higher HoME-CF scores. In conclusion, the HoME-S and HoME-CF are psychometrically sound engagement measures for use in homecare. Clients who are apathetic or from non-English-speaking backgrounds may be less responsive to traditional care worker engagement strategies. Engagement may be augmented in clients who have stronger relationships with their care workers. © 2015 John Wiley & Sons Ltd.

  13. 75 FR 73166 - Publication of the Tier 2 Tax Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service, Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  14. 76 FR 71623 - Publication of the Tier 2 Tax Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-18

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  15. Perceived Counselor Characteristics, Client Expectations, and Client Satisfaction with Counseling.

    ERIC Educational Resources Information Center

    Heppner, P. Paul; Heesacker, Martin

    1983-01-01

    Examined interpersonal influence process within counseling including relationship between perceived counselor expertness, attractiveness, and trustworthiness and client satisfaction; between client expectations on perceived counselor expertness, attractiveness, trustworthiness, and client satisfaction; and effects of actual counselor experience…

  16. SETTER: web server for RNA structure comparison

    PubMed Central

    Čech, Petr; Svozil, Daniel; Hoksza, David

    2012-01-01

    The recent discoveries of regulatory non-coding RNAs changed our view of RNA as a simple information transfer molecule. Understanding the architecture and function of active RNA molecules requires methods for comparing and analyzing their 3D structures. While structural alignment of short RNAs is achievable in a reasonable amount of time, large structures represent a much bigger challenge. Here, we present the SETTER web server for pairwise RNA structure comparison utilizing the SETTER (SEcondary sTructure-based TERtiary Structure Similarity Algorithm) algorithm. The SETTER method divides an RNA structure into a set of non-overlapping structural elements called generalized secondary structure units (GSSUs). The SETTER algorithm scales as O(n2) with the size of a GSSU and as O(n) with the number of GSSUs in the structure. This scaling gives SETTER its high speed, as the average size of a GSSU remains constant irrespective of the size of the structure. However, the favorable speed of the algorithm does not compromise its accuracy. The SETTER web server together with the stand-alone implementation of the SETTER algorithm are freely accessible at http://siret.cz/setter. PMID:22693209
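
    The scaling argument above can be made concrete with a back-of-the-envelope cost model. This is an illustration of the stated complexity claims only, not SETTER's actual implementation: comparing one GSSU pair costs O(k²) in the GSSU size k, and the number of such comparisons grows as O(n) in the number of GSSUs, so total work stays roughly linear in structure size when k is constant.

    ```python
    # Illustrative cost model for the scaling described above: quadratic in
    # GSSU size, linear in the number of GSSUs.
    def setter_cost(num_gssus, avg_gssu_size):
        per_pair = avg_gssu_size ** 2  # O(k^2) work per GSSU comparison
        return num_gssus * per_pair    # O(n) such comparisons

    small = setter_cost(num_gssus=10, avg_gssu_size=20)
    large = setter_cost(num_gssus=100, avg_gssu_size=20)  # 10x larger structure
    print(large / small)  # cost grows linearly with structure size: 10.0
    ```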

  17. A Scalability Model for ECS's Data Server

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.
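
    The kind of question such a scalability model answers can be illustrated with a simple utilization check: does a planned number of processors keep offered load below saturation as the workload grows? This is a generic capacity-planning sketch with hypothetical numbers, not the report's actual model.

    ```python
    # Generic utilization check (hypothetical figures, not the ECS model):
    # a multi-server resource is stable only while per-server utilization
    # rho = (arrival rate x service demand) / servers stays below 1.0.
    def utilization(arrival_rate, service_time, servers):
        """Offered load per server; must stay < 1.0 for a stable system."""
        return arrival_rate * service_time / servers

    # Hypothetical workload: 12 ingest requests/s, 0.5 s service demand each.
    for m in (4, 8, 12):
        rho = utilization(arrival_rate=12.0, service_time=0.5, servers=m)
        print(m, round(rho, 2), "stable" if rho < 1.0 else "saturated")
    ```

    With these numbers, 4 processors saturate (rho = 1.5) while 8 or more keep the system stable; a full model would go on to predict response times near saturation.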

  18. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, allowing for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of what performance they can expect in this type of setup.

  19. The research and implementation of coalfield spontaneous combustion of carbon emission WebGIS based on Silverlight and ArcGIS server

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.

    2014-02-01

    As an important sub-topic of the construction of a public information platform for natural-process carbon emission data, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important study object. In view of the data features of coalfield spontaneous combustion carbon emissions (a wide range of rich, complex data) and their geospatial characteristics, the data are divided into attribute data and spatial data. Based on a full analysis of the data, we completed the detailed design of the Oracle database and stored the data in it. Through Silverlight rich-client technology and the extension of WCF services, we implemented dynamic web query, retrieval, statistics and analysis functions for the attribute data. For spatial data, we take advantage of ArcGIS Server and the Silverlight-based API to invoke the map services, GP services, image services and other services published by the GIS server, implementing the display of remote sensing image data and web map data for coalfield spontaneous combustion, as well as data analysis and thematic map production. The study found that Silverlight rich-client technology, based on an object-oriented framework of WCF services, can be used to construct a WebGIS system efficiently. Combined with the ArcGIS Silverlight API to achieve interactive queries of the attribute and spatial data of coalfield spontaneous combustion, it can greatly improve the performance of the WebGIS system. At the same time, it provides a strong guarantee for the construction of public information on China's carbon emission data.

  20. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    NASA Astrophysics Data System (ADS)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years. This is due to the gradual increase of internet speed, which has led to the ongoing development of multiplayer games and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed and put into practice in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall resource load of such games is essential, so that server administrators can configure them and ensure the games' proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.
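
    The capacity question raised above can be sketched as a simple bound: the number of concurrent students a server can host is limited by whichever resource (bandwidth, RAM, CPU) is exhausted first. The per-client figures below are hypothetical placeholders, not measurements from the paper.

    ```python
    # Illustrative capacity bound (hypothetical per-client figures): the
    # admissible number of concurrent clients is the minimum, over all
    # resources, of capacity divided by per-client consumption.
    def max_clients(capacity, per_client):
        """capacity and per_client map a resource name to an amount."""
        return min(capacity[r] // per_client[r] for r in per_client)

    capacity   = {"bandwidth_kbps": 100_000, "ram_mb": 16_000, "cpu_pct": 800}
    per_client = {"bandwidth_kbps": 128,     "ram_mb": 40,     "cpu_pct": 2}
    print(max_clients(capacity, per_client))  # RAM is the binding resource here
    ```

    A monitoring methodology like the one proposed would refine such static estimates with observed per-session load, e.g. the extra cost of compile requests.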

  1. Design and evaluation of web-based image transmission and display with different protocols

    NASA Astrophysics Data System (ADS)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image accessing technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), and HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two Web-based image display systems we developed. The first one is used for thin-client Web display: it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second one is for thick-client Web display: it works between a PACS Web server with a WADO interface and a thick client running in a browser containing an ActiveX control, Flash RIA program or HTML5 scripts, with the PACS Web server providing native DICOM-format images or a JPIP stream to these clients.
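
    The two access paths differ mainly in the `contentType` a client requests from the WADO interface. The sketch below builds WADO-URI style requests (parameter names follow the DICOM WADO-URI convention; the server URL and UIDs are placeholders): a thin client asks for a server-rendered JPEG, while a thick client asks for the native DICOM object and renders it locally.

    ```python
    # Sketch of WADO-URI requests such a PACS Web server answers.
    # Base URL and UIDs below are placeholders, not real identifiers.
    from urllib.parse import urlencode

    def wado_url(base, study_uid, series_uid, object_uid, content_type):
        params = {"requestType": "WADO",
                  "studyUID": study_uid,
                  "seriesUID": series_uid,
                  "objectUID": object_uid,
                  "contentType": content_type}
        return base + "?" + urlencode(params)

    base = "https://pacs.example.org/wado"
    # Thin client: server renders the image and returns JPEG for an HTML page.
    thin = wado_url(base, "1.2.3", "1.2.3.4", "1.2.3.4.5", "image/jpeg")
    # Thick client: fetch the native DICOM object and render it in-browser.
    thick = wado_url(base, "1.2.3", "1.2.3.4", "1.2.3.4.5", "application/dicom")
    print(thin)
    ```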

  2. The ATLAS Tier-3 in Geneva and the Trigger Development Facility

    NASA Astrophysics Data System (ADS)

    Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration

    2011-12-01

    The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition, the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of the latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or through a grid interface provided by NorduGrid ARC middleware [5]. In this article we present the use cases that we support, as well as our experience with the software and hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.

  3. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P.; Dimitriou, D.; Hankin, S.

    2005-12-01

    The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In

  4. Introduction of non-invasive prenatal testing as a first-tier aneuploidy screening test: A survey among Dutch midwives about their role as counsellors.

    PubMed

    Martin, Linda; Gitsels-van der Wal, Janneke T; de Boer, Marjon A; Vanstone, Meredith; Henneman, Lidewij

    2018-01-01

    In 2014, non-invasive prenatal testing (NIPT) for trisomies 21, 18 and 13 was added to the Dutch prenatal screening program as part of the TRIDENT study. Most (85%) pregnant Dutch women are counselled for prenatal aneuploidy screening by primary care midwives. This will remain the case when NIPT is implemented as a first-tier screening test. We therefore investigated midwife counsellors': 1) knowledge about NIPT; 2) attitudes towards NIPT as a first-tier screening test; and 3) experiences with informing clients about NIPT. Between April and June 2015, an online questionnaire assessing knowledge about, attitudes towards, and experiences with NIPT was completed by 436 Dutch primary care midwives. We found that 59% of the midwives answered ≥7 of 8 knowledge questions correctly. Continuing professional education attendance and more positive attitudes towards prenatal screening for Down syndrome were positively associated with the total knowledge score (β = 0.261; p = 0.007 and β = 0.204; p = 0.015, respectively). The majority (67%) were in favor of replacing the first-trimester combined test with NIPT, although 41% preferred to maintain a nuchal translucency measurement alongside NIPT. We conclude that midwives demonstrated solid knowledge about NIPT, though it may still be improved in some areas. Dutch midwives overwhelmingly support the integration of NIPT as a first-tier screening test. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Multi-tiered system of support incorporating the R.E.N.E.W. process and its relationship to perception of school safety and office discipline referrals

    NASA Astrophysics Data System (ADS)

    Flood, Molly M.

    This study examined the relationship between the fidelity of multi-tier school-wide positive behavior interventions and supports (SWPBIS) and staff perception of school safety and office discipline referrals. This research provided a case study on multi-tier supports and interventions, and the RENEW person-centered planning process in an alternative special education center following the implementation of a multi-tier SWPBIS model. Pennsylvania is one of several states looking to adopt an effective Tier III behavioral tool. The research described the results of an analysis of implementation fidelity on a multi-tiered school-wide positive behavior support model developed at a special education center operated by a public school system entity. This research explored the fidelity of SWPBIS implementation; analyzed the relationship of SWPBIS to school climate as measured by staff perceptions and reduction of office discipline referrals (ODR); explored tier III supports incorporating a process Rehabilitation and Empowerment, Natural Supports, Education and Work (RENEW); and investigated the potential sustainability of the RENEW process as a multi-tier system of support. This study investigated staff perceptions on integrated supports between schools and communities and identified the degree of relationship to school risk factors, school protective factors, and office discipline referrals following the building of cooperative partnerships between Systems of Care and Local Education Agencies.

  6. Genetic and economic benefits of selection based on performance recording and genotyping in lower tiers of multi-tiered sheep breeding schemes.

    PubMed

    Santos, Bruno F S; van der Werf, Julius H J; Gibson, John P; Byrne, Timothy J; Amer, Peter R

    2017-01-17

    Performance recording and genotyping in the multiplier tier of multi-tiered sheep breeding schemes could potentially reduce the difference in the average genetic merit between nucleus and commercial flocks, and create additional economic benefits for the breeding structure. The genetic change in a multiple-trait breeding objective was predicted for various selection strategies that included performance recording, parentage testing and genomic selection. A deterministic simulation model was used to predict selection differentials and the flow of genetic superiority through the different tiers. Cumulative discounted economic benefits were calculated based on trait gains achieved in each of the tiers, considering the extra revenue and the associated costs of applying recording, genotyping and selection practices in the multiplier tier of the breeding scheme. Performance recording combined with genomic or parentage information in the multiplier tier reduced the genetic lag between the nucleus and commercial flock by 2 to 3 years. The overall economic benefits of improved performance in the commercial tier offset the costs of recording in the multiplier tier. However, it took more than 18 years before the cumulative net present value of benefits offset the costs at current test prices. Strategies in which recorded multiplier ewes were selected as replacements for the nucleus flock modestly increased profitability compared to a closed nucleus structure. Applying genomic selection is the most beneficial strategy if testing costs can be reduced or if only a proportion of the selection candidates are genotyped. When the cost of genotyping was reduced, scenarios that combine performance recording with genomic selection were more profitable and reached breakeven point about 10 years earlier. Economic benefits can be generated in multiplier flocks by implementing performance recording in conjunction with either DNA pedigree recording or genomic technology. These recording
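
The breakeven reasoning above amounts to a cumulative discounted cash-flow calculation. The sketch below illustrates the mechanics only; the benefit ramp, annual cost, and discount rate are hypothetical figures, not values from the study:

```python
# Cumulative discounted net benefit and breakeven year for a recording
# strategy. All figures are hypothetical, chosen only for illustration.
def breakeven_year(annual_benefit, annual_cost, rate, horizon):
    """Return the first year in which the cumulative NPV of
    (benefit - cost) turns positive, or None within the horizon."""
    npv = 0.0
    for year in range(1, horizon + 1):
        npv += (annual_benefit(year) - annual_cost) / (1 + rate) ** year
        if npv > 0:
            return year
    return None

# Benefits ramp up over 10 years as genetic gain flows to the commercial tier.
year = breakeven_year(lambda t: 120.0 * min(t, 10) / 10,
                      annual_cost=80.0, rate=0.07, horizon=25)
```

With these made-up numbers the early years run at a deficit, so the cumulative NPV only turns positive well after the benefit stream has plateaued.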

  7. A client-treatment matching protocol for therapeutic communities: first report.

    PubMed

    Melnick, G; De Leon, G; Thomas, G; Kressel, D

    2001-10-01

    The present study is the first report on a client-treatment matching protocol (CMP) to guide admissions to residential and outpatient substance abuse treatment settings. Two cohorts, a field test sample (n = 318) and a cross-validation sample (n = 407), were drawn from consecutive admissions to nine geographically distributed multisetting therapeutic communities (TCs). A passive matching design was employed. Clients received the CMP on admission, but agencies were "blind" to the CMP treatment recommendation (i.e., match) and assigned clients to treatment by the usual intake procedures. Bivariate and logistic regression analyses show that positive treatment dispositions (treatment completion or longer retention in treatment) were significantly higher among the CMP-matched clients. The present findings provide the empirical basis for studies assessing the validity and utility of the CMP with controlled designs. Though limited to TC-oriented agencies, the present research supports the use of objective matching criteria to improve treatment.

  8. The EBI SRS server-new features.

    PubMed

    Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure

    2002-08-01

    Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at the EBI, as well as the European public access point to the MEDLINE database provided by the US National Library of Medicine (NLM). It is a reference server for the latest developments in data and application integration. The new additions include: the concept of virtual databases; integration of XML databases such as the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, and metabolic pathways; user-friendly data representation in 'Nice views'; and SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG, freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.

  9. The ATLAS Tier-0: Overview and operational experience

    NASA Astrophysics Data System (ADS)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-04-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for the prompt processing of raw data coming from the online DAQ system, for archiving the raw and derived data on tape, for registering the data with the relevant catalogues, and for distributing them to the associated Tier-1 centres. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article gives an overview of the Tier-0 system, its data and work flows, and its operations model. It reviews the operational experience gained in cosmic, commissioning, and FDR exercises during the past year, and gives an outlook on planned developments and the evolution of the system towards first collision data taking, now expected in late Autumn 2009.

  10. Influence of Client Intelligence Quotient Scores on Placement Recommendations: An Analogue Study.

    ERIC Educational Resources Information Center

    Ursprung, Alex William

    1987-01-01

    Examined the influence of client intelligence quotient (IQ) scores on placement recommendations by rehabilitation students (N=59). Clients' IQ scores were found to significantly influence the number of placement options generated by the participants, but had no significant impact on predictions of vocational success or willingness to work with the client.…

  11. The Legnaro-Padova distributed Tier-2: challenges and results

    NASA Astrophysics Data System (ADS)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and virtual organizations from other disciplines, which can opportunistically harness idle resources when available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread across two sites about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless, these resources are seamlessly integrated and exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible through a Grid-based interface implemented with multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2, and also deployed at other sites, such as the Italian LHC T1. DOCET provides a uniform interface for managing all the information about the physical resources of a computing center. It is also used as a documentation repository available to

  12. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
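
The telephone-system blocking model referred to above is the classic Erlang-B formula. A minimal sketch (with made-up load figures) of the economy-of-scale effect the paper quantifies, comparing a monolithic server image against a partitioned cluster:

```python
# Erlang-B blocking probability, computed with the standard recurrence
# B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).
def erlang_b(offered_load: float, servers: int) -> float:
    """Probability that a stream request is rejected (blocked)."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# A monolithic server with 2N stream slots versus two partitions of N slots,
# each carrying half the offered load (illustrative numbers only):
load, n = 80.0, 50
monolithic = erlang_b(load, 2 * n)
partitioned = erlang_b(load / 2, n)
```

For any positive load, the partitioned configuration blocks more requests at the same total capacity, which is the "cost of partitioning" the paper measures.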

  13. An extensible and lightweight architecture for adaptive server applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2008-07-10

    Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality at which to scale an image based on the load of the server and the network connection speed. The experimental evaluation demonstrates the performance gains possible through adaptive behavior and the low overhead introduced by ASF.
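
The separation ASF advocates can be sketched roughly as follows; the class and method names are invented for illustration and are not the real ASF API:

```python
# Sketch: adaptive control logic kept separate from business logic,
# in the spirit of the image-scaling case study (names are hypothetical).
class ImageService:
    """Business logic only: scales an image at a requested quality."""
    def scale(self, image: str, quality: str) -> str:
        return f"{image}@{quality}"

class QualityController:
    """Control component: observes the environment, picks a quality."""
    def decide(self, cpu_load: float, net_mbps: float) -> str:
        if cpu_load > 0.8 or net_mbps < 1.0:
            return "low"
        if cpu_load > 0.5 or net_mbps < 10.0:
            return "medium"
        return "high"

svc, ctl = ImageService(), QualityController()
# A loaded server degrades quality without the business logic changing.
result = svc.scale("photo.png", ctl.decide(cpu_load=0.9, net_mbps=50.0))
```

The point of the split is that `ImageService` never inspects the environment; swapping or retuning the controller leaves the legacy business code untouched.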

  14. Tier 2 Interventions in Positive Behavior Support: A Survey of School Implementation

    ERIC Educational Resources Information Center

    Rodriguez, Billie Jo; Loman, Sheldon L.; Borgmeier, Christopher

    2016-01-01

    As increasing numbers of schools implement Multi-Tiered Systems of Support (MTSS), schools are looking for and implementing evidence-based practices for students whose needs are not fully met by Tier 1 supports. Although there is relative consistency and clarity in what constitutes Tier 1 behavior support within MTSS, Tier 2 supports may be more…

  15. Improving general practice based epidemiologic surveillance using desktop clients: the French Sentinel Network experience.

    PubMed

    Turbelin, Clément; Boëlle, Pierre-Yves

    2010-01-01

    Web-based applications are a tool of choice for general practice based epidemiological surveillance; however, their use may disrupt general practitioners' (GPs) work process. In this article, we propose an alternative approach based on a desktop client application, developed for use in the French General Practitioners Sentinel Network. We developed a Java application running as a client on the local GP computer. It allows reporting cases to a central server and provides feedback to the participating GPs. XML was used to describe surveillance protocols and questionnaires as well as instances of case descriptions. An evaluation of users' experiences was carried out, and the impact on the timeliness and completeness of surveillance data was measured. Better integration into the work process was reported, especially when the software was used at the time of consultation. Reports were received more frequently with less missing data. This study highlights the potential of allowing multiple modes of interaction with the surveillance system to increase GP participation and the quality of surveillance.
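
As a rough illustration of the XML case descriptions the desktop client exchanges with the central server, the sketch below builds and parses a toy case report; the element names are invented, not taken from the Sentinel Network's actual schema:

```python
# Hypothetical XML case report, round-tripped with the standard library.
import xml.etree.ElementTree as ET

def build_case_report(disease: str, age: int, week: str) -> str:
    """Serialize one case description as XML (invented schema)."""
    case = ET.Element("case", attrib={"protocol": disease})
    ET.SubElement(case, "patientAge").text = str(age)
    ET.SubElement(case, "isoWeek").text = week
    return ET.tostring(case, encoding="unicode")

def parse_case_report(xml_text: str) -> dict:
    """What a central server might extract from an incoming report."""
    root = ET.fromstring(xml_text)
    return {"protocol": root.get("protocol"),
            "age": int(root.findtext("patientAge")),
            "week": root.findtext("isoWeek")}

report = build_case_report("influenza-like-illness", 34, "2010-W03")
```

Describing both the protocol and the case instances in XML is what lets the client download new surveillance protocols without a software update.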

  16. Server-Side Includes Made Simple.

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2002-01-01

    Describes server-side include (SSI) codes which allow Webmasters to insert content into Web pages without programming knowledge. Explains how to enable the codes on a Web server, provides a step-by-step process for implementing them, discusses tags and syntax errors, and includes examples of their use on the Web site for Southern Illinois…

  17. TBI server: a web server for predicting ion effects in RNA folding.

    PubMed

    Zhu, Yuhong; He, Zhaojian; Chen, Shi-Jie

    2015-01-01

    Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can become strongly correlated in the close vicinity of the RNA surface. However, most of the currently available software packages, which have had widespread success in predicting ion effects in biomolecular systems, do not explicitly account for the ion correlation effect. It is therefore important to develop a software package/web server for the prediction of ion electrostatics in RNA folding that includes ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for these effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis of ion effects in RNA folding, including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  18. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for distribution, extensibility and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design approach. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
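
The three-tier layering described can be sketched as follows; the classes are Python stand-ins for the COM/COM+ components, with all names invented for illustration, and each layer calls only the layer below it:

```python
# Toy three-tier split: presentation -> business -> data.
class DataTier:
    """Persistence layer: device status records (hypothetical data)."""
    def __init__(self):
        self._devices = {"sw1": "up"}
    def read_status(self, device: str) -> str:
        return self._devices.get(device, "unknown")

class BusinessTier:
    """Network-management logic: interprets raw status."""
    def __init__(self, data: DataTier):
        self._data = data
    def device_report(self, device: str) -> str:
        status = self._data.read_status(device)
        return f"{device}: {'OK' if status == 'up' else 'ALERT'}"

class PresentationTier:
    """User-facing layer: formats reports for display."""
    def __init__(self, logic: BusinessTier):
        self._logic = logic
    def render(self, device: str) -> str:
        return f"[NMS] {self._logic.device_report(device)}"

ui = PresentationTier(BusinessTier(DataTier()))
```

Because each tier depends only on the interface of the tier beneath it, any layer can be replaced or distributed to another host without touching the others, which is the reusability argument the paper makes for componentization.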

  19. Tier2 Submit Software

    EPA Pesticide Factsheets

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  20. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed-type standby support has been studied. The repair of broken-down machines is governed by a bi-level threshold policy for the activation of the servers: a server returns to repair duty when a pre-specified workload of failed machines has built up, the first (second) repairman turning on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the servers by minimizing the expected cost incurred on the system. A hybrid soft computing method is considered to develop an adaptive neuro-fuzzy inference system (ANFIS), and the numerical results obtained by the Runge-Kutta approach are validated against computational results generated by ANFIS.
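
The Runge-Kutta treatment of the governing (Chapman-Kolmogorov) equations can be illustrated on a much simpler machine-repair chain than the paper's: a single repairman, hypothetical rates, and no thresholds or vacations. The state is the number of failed machines, and dp/dt = pQ is integrated with classical RK4:

```python
# RK4 integration of dp/dt = pQ for a toy machine-repair Markov chain
# (parameters hypothetical; far simpler than the two-server threshold model).
def machine_repair_generator(n_machines, fail_rate, repair_rate):
    """Generator matrix Q; state n = number of failed machines."""
    size = n_machines + 1
    q = [[0.0] * size for _ in range(size)]
    for n in range(size):
        if n < n_machines:                    # one more machine fails
            q[n][n + 1] = (n_machines - n) * fail_rate
        if n > 0:                             # the repairman fixes one
            q[n][n - 1] = repair_rate
        q[n][n] = -sum(q[n])                  # rows of Q sum to zero
    return q

def rk4_step(p, q, dt):
    """One classical Runge-Kutta step for the linear system dp/dt = pQ."""
    def deriv(state):
        return [sum(state[i] * q[i][j] for i in range(len(state)))
                for j in range(len(state))]
    k1 = deriv(p)
    k2 = deriv([p[i] + dt / 2 * k1[i] for i in range(len(p))])
    k3 = deriv([p[i] + dt / 2 * k2[i] for i in range(len(p))])
    k4 = deriv([p[i] + dt * k3[i] for i in range(len(p))])
    return [p[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(p))]

q = machine_repair_generator(n_machines=4, fail_rate=0.3, repair_rate=1.0)
p = [1.0, 0.0, 0.0, 0.0, 0.0]                 # all machines initially up
for _ in range(2000):                          # integrate out to t = 20
    p = rk4_step(p, q, dt=0.01)
mean_queue_length = sum(n * p[n] for n in range(len(p)))
```

Metrics such as the mean queue length fall out of the transient distribution p(t) exactly as in the paper; machine availability and throughput are similar weighted sums over the states.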

  1. Differences between Mutual and Client-Initiated Nonmutual Terminations in a University Counseling Center.

    ERIC Educational Resources Information Center

    Cochran, Sam V.; Stamler, Virginia Lee

    1989-01-01

    Examined differences in satisfaction with counseling between clients (N=52) who initiated termination of treatment without discussing termination with their counselors and clients (N=146) who mutually planned termination with counselors. Results revealed that nonmutual terminators perceived counseling less positively and reported different reasons…

  2. A Five-Tier System for Improving the Categorization of Transplant Program Performance.

    PubMed

    Wey, Andrew; Salkowski, Nicholas; Kasiske, Bertram L; Israni, Ajay K; Snyder, Jon J

    2018-06-01

    To better inform health care consumers by improving the identification of differences in transplant program performance. Adult kidney transplants performed in the United States, January 1, 2012-June 30, 2014. In December 2016, the Scientific Registry of Transplant Recipients instituted a five-tier system for reporting transplant program performance. We compare the differentiation of program performance and the simulated misclassification rate of the five-tier system with those of the previous three-tier system, based on the 95 percent credible interval. Scientific Registry of Transplant Recipients database. The five-tier system improved differentiation and maintained a low misclassification rate of less than 22 percent for programs differing by two tiers. The five-tier system will better inform health care consumers about transplant program performance. © Health Research and Educational Trust.

  3. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits them back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
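
The request/reply loop described (client forwards input events, server returns the screen contents) can be caricatured in a few lines. This is an illustrative toy protocol over a local socket pair, not RFB, the protocol VNC actually uses:

```python
# Toy VNC-style loop: the "server" owns a framebuffer and answers update
# requests; the "client" sends keystrokes and asks for screen updates.
import socket
import threading

def server(conn: socket.socket) -> None:
    framebuffer = b"blank screen"
    while True:
        msg = conn.recv(64)
        if not msg or msg == b"QUIT":
            break
        if msg == b"UPDATE?":            # client asks for the current screen
            conn.sendall(framebuffer)
        elif msg.startswith(b"KEY:"):    # client forwards a keystroke
            framebuffer = b"screen after " + msg[4:]
            conn.sendall(b"OK")

def demo() -> bytes:
    c, s = socket.socketpair()
    t = threading.Thread(target=server, args=(s,))
    t.start()
    c.sendall(b"KEY:x")                  # input travels client -> server
    assert c.recv(64) == b"OK"
    c.sendall(b"UPDATE?")                # screen travels server -> client
    screen = c.recv(64)
    c.sendall(b"QUIT")
    t.join()
    c.close()
    s.close()
    return screen
```

Keeping all display state on the server side is what makes the real system cross-platform: any client that speaks the protocol can render the shared desktop.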

  4. A single, continuous metric to define tiered serum neutralization potency against HIV

    DOE PAGES

    Hraber, Peter Thomas; Korber, Bette Tina Marie; Wagh, Kshitij; ...

    2018-01-19

    HIV-1 Envelope (Env) variants are grouped into tiers by their neutralization-sensitivity phenotype. This helped to recognize that tier 1 neutralization responses can be elicited readily, but do not protect against new infections. Tier 3 viruses are the least sensitive to neutralization. Because most circulating viruses are tier 2, vaccines that elicit neutralization responses against them are needed. While tier classification is widely used for viruses, a way to rate serum or antibody neutralization responses in comparable terms is needed. Logistic regression of neutralization outcomes summarizes serum or antibody potency on a continuous, tier-like scale. It also tests significance of the neutralization score, to indicate cases where serum response does not depend on virus tiers. The method can standardize results from different virus panels, and could lead to high-throughput assays, which evaluate a single serum dilution, rather than a dilution series, for more efficient use of limited resources to screen samples from vaccinees.
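
The idea of summarizing neutralization outcomes on a continuous, tier-like scale can be sketched with a one-parameter logistic likelihood. This is a deliberate simplification of the paper's regression, with made-up outcomes, purely to show the mechanics:

```python
# Score a serum on a tier-like scale by maximizing a logistic likelihood
# of binary neutralization outcomes against viruses of known tier.
import math

def serum_score(outcomes):
    """outcomes: list of (virus_tier, neutralized 0/1) pairs. Returns the
    score s maximizing prod p^y * (1-p)^(1-y) with p = sigmoid(s - tier)."""
    def loglik(s):
        total = 0.0
        for tier, y in outcomes:
            p = 1.0 / (1.0 + math.exp(tier - s))
            total += math.log(p if y else 1.0 - p)
        return total
    grid = [i / 100 for i in range(-500, 801)]   # coarse grid search
    return max(grid, key=loglik)

# A serum that neutralizes tier-1 and tier-2 viruses but not a tier-3 virus
# lands between tiers 2 and 3 on the continuous scale:
score = serum_score([(1, 1), (2, 1), (3, 0)])
```

A grid search stands in here for proper maximum-likelihood fitting; the point is only that a single continuous number summarizes where the serum's potency sits relative to the virus tiers.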

  5. A single, continuous metric to define tiered serum neutralization potency against HIV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hraber, Peter Thomas; Korber, Bette Tina Marie; Wagh, Kshitij

    HIV-1 Envelope (Env) variants are grouped into tiers by their neutralization-sensitivity phenotype. This helped to recognize that tier 1 neutralization responses can be elicited readily, but do not protect against new infections. Tier 3 viruses are the least sensitive to neutralization. Because most circulating viruses are tier 2, vaccines that elicit neutralization responses against them are needed. While tier classification is widely used for viruses, a way to rate serum or antibody neutralization responses in comparable terms is needed. Logistic regression of neutralization outcomes summarizes serum or antibody potency on a continuous, tier-like scale. It also tests significance of the neutralization score, to indicate cases where serum response does not depend on virus tiers. The method can standardize results from different virus panels, and could lead to high-throughput assays, which evaluate a single serum dilution, rather than a dilution series, for more efficient use of limited resources to screen samples from vaccinees.

  6. SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, G.; Kraimer, M.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. The server software under development is available via an open source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Two services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvAccess to transfer data over the network, and pvIOC as the service engine. Performance benchmarks for pvAccess and for both the gather and itemFinder services are presented in this paper, along with a performance comparison between pvAccess and Channel Access. For an ultra-low emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use case study has been performed to establish the requirements, and a theoretical evaluation has been carried out. The analysis shows that model based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model based control, and how to combine numerical methods for modeling a realistic lattice with analytical techniques for analysis of its properties. To meet these requirements and challenges, an adequate system architecture for the software framework for beam commissioning and operation is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation and plotting, and error handling. However, none of the existing approaches satisfies the requirements. A new design has been proposed by introducing

  7. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
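
The context-clustering step, merging consecutive shots with similar annotations into scene groups, can be sketched with a simple threshold rule. The Jaccard similarity below is a hypothetical stand-in for whatever domain knowledge or rules engine supplies the similarity, and the shot annotations are invented:

```python
# Group consecutive shots into scenes when adjacent annotation sets are
# similar enough (illustrative stand-in for the paper's context clustering).
def cluster_shots(shot_labels, similarity, threshold=0.5):
    """shot_labels: list of annotation sets, one per shot, in time order.
    similarity(a, b) -> [0, 1]. Returns lists of shot indices (scenes)."""
    scenes = [[0]]
    for i in range(1, len(shot_labels)):
        if similarity(shot_labels[i - 1], shot_labels[i]) >= threshold:
            scenes[-1].append(i)             # same context: extend the scene
        else:
            scenes.append([i])               # context change: start a scene
    return scenes

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

# Two soccer shots share context; the studio shot starts a new scene.
shots = [{"goal", "field"}, {"goal", "crowd", "field"}, {"anchor", "studio"}]
scenes = cluster_shots(shots, jaccard)
```

The resulting scene groups give the coarse level of the hierarchical summary, with individual shots selected within each scene at the finer level.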

  8. "You've Gotta Keep the Customer Satisfied": Assessing Client Satisfaction.

    ERIC Educational Resources Information Center

    Andert, Jeffery N.; And Others

    To better understand factors contributing to an identified early attrition rate for families referred to a child guidance clinic, a procedure was developed for assessing their satisfaction with clinic services. Brief Client Satisfaction Questionnaires (N=3) were developed to assess clients' attitudes and reactions to an initial screening and…

  9. Which Tier? Effects of Linear Assessment and Student Characteristics on GCSE Entry Decisions

    ERIC Educational Resources Information Center

    Vitello, Sylvia; Crawford, Cara

    2018-01-01

    In England, students obtain General Certificate of Secondary Education (GCSE) qualifications, typically at age 16. Certain GCSEs are tiered; students take either higher-level (higher tier) or lower-level (foundation tier) exams, which may have different educational, career and psychological consequences. In particular, foundation tier entry, if…

  10. 50 CFR 86.53 - What are funding tiers?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 9 2014-10-01 2014-10-01 false What are funding tiers? 86.53 Section 86.53 Wildlife and Fisheries UNITED STATES FISH AND WILDLIFE SERVICE, DEPARTMENT OF THE INTERIOR... (BIG) PROGRAM How States Apply for Grants § 86.53 What are funding tiers? (a) This grant program will...

  11. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.

    2004-12-01

    With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. 
Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System.

  12. 47 CFR 76.920 - Composition of the basic tier.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Composition of the basic tier. 76.920 Section 76.920 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... tier of video programming or to purchase any other video programming. ...

  13. 47 CFR 76.920 - Composition of the basic tier.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Composition of the basic tier. 76.920 Section 76.920 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... tier of video programming or to purchase any other video programming. ...

  14. Essential Features of Tier 2 Social-Behavioral Interventions

    ERIC Educational Resources Information Center

    Yong, Minglee; Cheney, Douglas A.

    2013-01-01

    The purpose of this study is to identify the essential features of Tier 2 interventions conducted within multitier systems of behavior support in schools. A systematic literature search identified 12 empirical studies that were coded and scored according to a list of Tier 2 specific RE-AIM criteria, related to the Reach, Effectiveness, Adoption,…

  15. General Framework for Animal Food Safety Traceability Using GS1 and RFID

    NASA Astrophysics Data System (ADS)

    Cao, Weizhu; Zheng, Limin; Zhu, Hong; Wu, Ping

    GS1 is a global traceability standard composed of an encoding system (EAN/UCC, EPC), automatic-identification data carriers (bar codes, RFID), and electronic data interchange standards (EDI, XML). RFID is a non-contact, multi-object automatic identification technique. Tracing food to its source, standardizing RFID tags, and sharing dynamic data are problems that current traceability systems urgently need to solve. This paper designs a general framework for animal food safety traceability using GS1 and RFID. The framework uses RFID tags encoded with the EPCglobal tag data standards. Each information server has an access tier, a business tier and a resource tier. These servers are heterogeneous and distributed, providing user access interfaces over SOAP or HTTP. For sharing dynamic data, a discovery service and an object name service are used to locate the dynamic, distributed information servers.
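    The discovery step described above, locating the distributed information server that holds the trace records for a given tag, can be sketched minimally. The registry contents, URLs, and EPC prefix handling below are invented for illustration; the abstract does not detail the actual ONS or discovery-service protocol.

```python
# Minimal sketch of resolving an EPC to its information server via an
# object-name-service style registry (all entries hypothetical).

ONS_REGISTRY = {
    # EPC prefix (company + item class) -> information-server base URL
    "urn:epc:id:sgtin:0614141": "http://is.farm-a.example/epcis",
    "urn:epc:id:sgtin:0733737": "http://is.packer-b.example/epcis",
}

def locate_information_server(epc):
    """Strip the item reference and serial, then look up the prefix."""
    prefix = epc.rsplit(".", 2)[0]
    try:
        return ONS_REGISTRY[prefix]
    except KeyError:
        raise LookupError("no information server registered for " + prefix)

url = locate_information_server("urn:epc:id:sgtin:0614141.107346.2017")
```

    A client would then query `url` over SOAP or HTTP for the animal's trace records; in a production system the registry itself would be a distributed ONS lookup rather than an in-memory table.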

  16. Examining the Effects and Feasibility of a Teacher-Implemented Tier 1 and Tier 2 Intervention in Word Reading, Fluency, and Comprehension

    ERIC Educational Resources Information Center

    Solari, Emily J.; Denton, Carolyn A.; Petscher, Yaacov; Haring, Christa

    2018-01-01

    This study investigates the effects and feasibility of an intervention for first-grade students at risk for reading difficulties or disabilities (RD). The intervention was provided by general education classroom teachers and consisted of 15 min whole-class comprehension lessons (Tier 1) and 30 min Tier 2 intervention sessions in word reading,…

  17. Windows Terminal Servers Orchestration

    NASA Astrophysics Data System (ADS)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

    Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows, providing a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.

  18. Between-User Reliability of Tier 1 Exposure Assessment Tools Used Under REACH.

    PubMed

    Lamb, Judith; Galea, Karen S; Miller, Brian G; Hesse, Susanne; Van Tongeren, Martie

    2017-10-01

    When applying simple screening (Tier 1) tools to estimate exposure to chemicals in a given exposure situation under the Registration, Evaluation, Authorisation and restriction of CHemicals Regulation 2006 (REACH), users must select from several possible input parameters. Previous studies have suggested that results from exposure assessments using expert judgement and from the use of modelling tools can vary considerably between assessors. This study aimed to investigate the between-user reliability of Tier 1 tools. A remote-completion exercise and an in-person workshop were used to identify and evaluate tool parameters and factors such as user demographics that may be potentially associated with between-user variability. Participants (N = 146) generated dermal and inhalation exposure estimates (N = 4066) from specified workplace descriptions ('exposure situations') and Tier 1 tool combinations (N = 20). Interactions between users, tools, and situations were investigated and described. Systematic variation associated with individual users was minor compared with random between-user variation. Although variation was observed between choices made for the majority of input parameters, differing choices of Process Category ('PROC') code/activity descriptor and dustiness level had the greatest impact on the resultant exposure estimates. Exposure estimates ranging over several orders of magnitude were generated for the same exposure situation by different tool users. Such unpredictable between-user variation will reduce consistency within REACH processes and could result in under-estimation or over-estimation of exposure, risking worker ill-health or the implementation of unnecessary risk controls, respectively. Implementation of additional support and quality control systems for all tool users is needed to reduce between-assessor variation and so ensure both the protection of worker health and avoidance of unnecessary business risk management expenditure. © The Author 2017.

  19. 25 CFR 542.30 - What is a Tier B gaming operation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier B gaming operation? 542.30 Section 542.30 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.30 What is a Tier B gaming operation? A Tier B gaming operation is one with gross...

  20. 25 CFR 542.40 - What is a Tier C gaming operation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier C gaming operation? 542.40 Section 542.40 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.40 What is a Tier C gaming operation? A Tier C gaming operation is one with annual...

  1. 25 CFR 542.20 - What is a Tier A gaming operation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier A gaming operation? 542.20 Section 542.20 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.20 What is a Tier A gaming operation? A Tier A gaming operation is one with annual...

  2. 7 CFR 1940.327 - Tiering.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... environmental assessments and EISs. Tiering refers to the coverage of general matters in broader environmental... statements or environmental analyses incorporating by reference the broader matters and concentrating on the...

  3. 7 CFR 1940.327 - Tiering.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... environmental assessments and EISs. Tiering refers to the coverage of general matters in broader environmental... statements or environmental analyses incorporating by reference the broader matters and concentrating on the...

  4. 7 CFR 1940.327 - Tiering.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... environmental assessments and EISs. Tiering refers to the coverage of general matters in broader environmental... statements or environmental analyses incorporating by reference the broader matters and concentrating on the...

  5. 7 CFR 1940.327 - Tiering.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... environmental assessments and EISs. Tiering refers to the coverage of general matters in broader environmental... statements or environmental analyses incorporating by reference the broader matters and concentrating on the...

  6. 7 CFR 1940.327 - Tiering.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... environmental assessments and EISs. Tiering refers to the coverage of general matters in broader environmental... statements or environmental analyses incorporating by reference the broader matters and concentrating on the...

  7. MARSIS data and simulation exploited using array databases: PlanetServer/EarthServer for sounding radars

    NASA Astrophysics Data System (ADS)

    Cantini, Federico; Pio Rossi, Angelo; Orosei, Roberto; Baumann, Peter; Misev, Dimitar; Oosthoek, Jelmer; Beccati, Alan; Campalani, Piero; Unnithan, Vikram

    2014-05-01

    parallel computing has been developed and tested on a Tier 0 class HPC cluster computer located at CINECA, Bologna, Italy, to produce accurate simulations for the entire MARSIS dataset. Although the necessary computational resources have not yet been secured, through the HPC cluster at Jacobs University in Bremen it was possible to simulate a significant subset of orbits covering the area of the Medusae Fossae Formation (MFF), a seemingly soft, easily eroded deposit that extends for nearly 1,000 km along the equator of Mars (e.g. Watters et al., 2007; Carter et al., 2009). Besides the MARSIS data, simulations of the MARSIS surface clutter signal are included in the database to further improve its scientific value. Simulations will be available through the project portal to end users/scientists and they will eventually be provided in the PSA/PDS archives. References: Baumann, P. On the management of multidimensional discrete data. VLDB J. 4 (3), 401-444, Special Issue on Spatial Database Systems, 1994. Carter, L. M., Campbell, B. A., Watters, T. R., Phillips, R. J., Putzig, N. E., Safaeinili, A., Plaut, J., Okubo, C., Egan, A. F., Biccari, D., Orosei, R. (2009). Shallow radar (SHARAD) sounding observations of the Medusae Fossae Formation, Mars. Icarus, 199(2), 295-302. Nouvel, J.-F., Herique, A., Kofman, W., Safaeinili, A. 2004. Radar signal simulation: Surface modeling with the Facet Method. Radio Science 39, 1013. Oosthoek, J.H.P, Flahaut, J., Rossi, A. P., Baumann, P., Misev, D., Campalani, P., Unnithan, V. (2013) PlanetServer: Innovative Approaches for the Online Analysis of Hyperspectral Satellite Data from Mars, Advances in Space Research. DOI: 10.1016/j.asr.2013.07.002 Picardi, G., and 33 colleagues 2005. Radar Soundings of the Subsurface of Mars. Science 310, 1925-1928. Rossi, A. P., Baumann, P., Oosthoek, J., Beccati, A., Cantini, F., Misev, D., Orosei, R., Flahaut, J., Campalani, P., Unnithan, V. (2014), Geophys. Res. Abs., Vol. 16, #EGU2014-5149, this meeting.
Watters, T. R

  8. Greenberger-Horne-Zeilinger states-based blind quantum computation with entanglement concentration.

    PubMed

    Zhang, Xiaoqian; Weng, Jian; Lu, Wei; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing

    2017-09-11

    In blind quantum computation (BQC) protocols, the quantum computational capabilities of the servers are complicated and powerful, while those of the clients are not. It remains a challenge for clients to delegate quantum computation to servers while keeping the clients' inputs, outputs and algorithms private. Unfortunately, quantum channel noise is unavoidable in practical transmission. In this paper, a novel BQC protocol based on maximally entangled Greenberger-Horne-Zeilinger (GHZ) states is proposed that does not need a trusted center. The protocol involves a client and two servers, where the client only needs to share quantum channels with the two servers, which have fully capable quantum computers. The two servers perform entanglement concentration to remove the noise, where the success probability can approach 100% in theory. They learn nothing in the process of concentration because of the no-signaling principle, so this BQC protocol is secure and feasible.

  9. LEO Satellite Communication through a LEO Constellation using TCP/IP Over ATM

    NASA Technical Reports Server (NTRS)

    Foore, Lawrence R.; Konangi, Vijay K.; Wallett, Thomas M.

    1999-01-01

    The simulated performance characteristics for communication between a terrestrial client and a Low Earth Orbit (LEO) satellite server are presented. The client and server nodes consist of a Transmission Control Protocol /Internet Protocol (TCP/IP) over ATM configuration. The ATM cells from the client or the server are transmitted to a gateway, packaged with some header information and transferred to a commercial LEO satellite constellation. These cells are then routed through the constellation to a gateway on the globe that allows the client/server communication to take place. Unspecified Bit Rate (UBR) is specified as the quality of service (QoS). Various data rates are considered.
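    At its simplest, the client/server exchange that the simulation above models reduces to a TCP request/response over sockets. The loopback sketch below shows only that pattern; the ATM adaptation, satellite gateways, and UBR quality of service are out of scope.

```python
# Minimal loopback TCP client/server exchange, sketching the pattern
# the LEO simulation models (plain sockets only, no ATM or satellite).
import socket
import threading

def run_server(sock):
    """Accept one connection, echo the payload back with a marker."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack:" + data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port on loopback
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

    In the simulated system, each such TCP segment would additionally be segmented into ATM cells at the gateway and routed through the satellite constellation before reaching the peer.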

  10. Hsp90: Friends, clients and natural foes.

    PubMed

    Verma, Sharad; Goyal, Sukriti; Jamal, Salma; Singh, Aditi; Grover, Abhinav

    2016-08-01

    Hsp90, a homodimeric ATPase, is responsible for the correct folding of a number of newly synthesized polypeptides in addition to the correct folding of denatured/misfolded client proteins. It requires several co-chaperones and other partner proteins for chaperone activity. Due to the involvement of Hsp90-dependent client proteins in a variety of oncogenic signaling pathways, Hsp90 inhibition has emerged as one of the leading strategies for anticancer chemotherapeutics. Most Hsp90 inhibitors block the N-terminal ATP binding pocket and prevent the conformational changes that are essential for the loading of co-chaperones and client proteins. Several other inhibitors have also been reported which disrupt the chaperone cycle in ways other than binding to the N-terminal ATP binding pocket. Hsp90 inhibition is associated with a heat shock response, mediated by HSF-1, to overcome the loss of Hsp90 and sustain cell survival. This review is an attempt to give an overview of all the important players of the chaperone cycle. Copyright © 2016 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  11. The Development of the Puerto Rico Lightning Detection Network for Meteorological Research

    NASA Technical Reports Server (NTRS)

    Legault, Marc D.; Miranda, Carmelo; Medin, J.; Ojeda, L. J.; Blakeslee, Richard J.

    2011-01-01

    A land-based Puerto Rico Lightning Detection Network (PR-LDN) dedicated to academic research on meteorological phenomena has been developed. Five Boltek StormTracker PCI receivers with LTS-2 GPS timestamp cards and lightning detectors were integrated into Pentium III PC workstations running the CentOS Linux operating system. The Boltek detector Linux driver was compiled under CentOS, modified, and thoroughly tested. These PC workstations with integrated lightning detectors were installed at five of the University of Puerto Rico (UPR) campuses distributed around the island of PR. The PC workstations are left on permanently in order to monitor lightning activity at all times. Each is networked to its campus network backbone, permitting quasi-instantaneous data transfer to a central server at the UPR-Bayamón campus. Information generated by each lightning detector is managed by a C program we developed called the LDN-client. The LDN-client maintains an open connection to the central server running the LDN-server program, to which data is sent in real time for analysis and archival. The LDN-client also manages the storing of data on the PC workstation hard disk. The LDN-server software (also an in-house effort) analyses the data from each client and performs event triangulations. Time-of-arrival (TOA) and related hybrid algorithms, as well as lightning-type and event-discriminating routines, are also implemented in the LDN-server software. We have also developed software to visually monitor, in real time, lightning events from all clients and the triangulated events. We are currently monitoring and studying the spatial, temporal, and type distribution of lightning strikes associated with electrical storms and tropical cyclones in the vicinity of Puerto Rico.
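    The time-of-arrival triangulation the LDN-server performs can be illustrated with a toy 2D grid search. The station coordinates, flat geometry, and brute-force minimization are hypothetical simplifications; the network's actual TOA and hybrid algorithms are not specified in the abstract.

```python
# Toy time-of-arrival (TOA) event location by grid search over squared
# time-difference-of-arrival residuals (flat 2D geometry, hypothetical
# station layout; real systems solve this with closed-form or
# least-squares methods on a spherical Earth).
import math

C = 3.0e8  # signal propagation speed, m/s

def toa_locate(stations, arrival_times, grid):
    """Pick the grid point minimizing squared TDOA residuals
    relative to the first station."""
    def residual(p):
        d = [math.dist(p, s) for s in stations]
        r = 0.0
        for i in range(1, len(stations)):
            predicted = (d[i] - d[0]) / C
            measured = arrival_times[i] - arrival_times[0]
            r += (predicted - measured) ** 2
        return r
    return min(grid, key=residual)

stations = [(0.0, 0.0), (40000.0, 0.0), (0.0, 40000.0)]  # meters
true_pos = (10000.0, 20000.0)
times = [math.dist(true_pos, s) / C for s in stations]
grid = [(x * 1000.0, y * 1000.0) for x in range(41) for y in range(41)]
est = toa_locate(stations, times, grid)
```

    With three stations, the two independent TDOAs define two hyperbolas whose intersection is the event location; the grid search simply finds the point where both residuals vanish.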

  12. Passive Detection of Misbehaving Name Servers

    DTIC Science & Technology

    2013-10-01

    Passive Detection of Misbehaving Name Servers. Leigh B. Metcalf and Jonathan M. Spring. Technical Report CMU/SEI-2013-TR-010, October 2013. Funding number: FA8721-05-C-0003.

  13. Use of Deception to Improve Client Honeypot Detection of Drive-by-Download Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovsky, Barbara; Narvaez Suarez, Julia F.; Seifert, Christian

    2009-07-24

    This paper presents the application of deception theory to improve the success of client honeypots at detecting malicious web page attacks from infected servers programmed by online criminals to launch drive-by-download attacks. The design of honeypots faces three main challenges: deception, how to design honeypots that seem to be real systems; counter-deception, techniques used to identify honeypots and hence defeat their deceiving nature; and counter counter-deception, how to design honeypots that deceive attackers. The authors propose the application of a deception model known as the deception planning loop to identify the current status of honeypot research, development and deployment. The analysis leads to a proposal to formulate a landscape of honeypot research and planning of the steps ahead.

  14. A Tiered Model for Linking Students to the Community

    ERIC Educational Resources Information Center

    Meyer, Laura Landry; Gerard, Jean M.; Sturm, Michael R.; Wooldridge, Deborah G.

    2016-01-01

    A tiered practice model (introductory, pre-internship, and internship) embedded in the curriculum facilitates community engagement and creates relevance for students as they pursue a professional identity in Human Development and Family Studies. The tiered model integrates high-impact teaching practices (HIP) and student engagement pedagogies…

  15. Graph and Network for Model Elicitation (GNOME Phase 2)

    DTIC Science & Technology

    2013-02-01

    GNOME UI components for the NOEM web client; sampling in the web client. The server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling.

  16. 40 CFR 86.1861-04 - How do the Tier 2 and interim non-Tier 2 NOX averaging, banking and trading programs work?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 2 NOX averaging, banking and trading programs work? 86.1861-04 Section 86.1861-04 Protection of... work? (a) General provisions for Tier 2 credits and debits. (1) A manufacturer whose Tier 2 fleet... to a full useful life of 100,000 miles, provided that the credits are prorated by a multiplicative...

  17. Vaccine-Elicited Tier 2 HIV-1 Neutralizing Antibodies Bind to Quaternary Epitopes Involving Glycan-Deficient Patches Proximal to the CD4 Binding Site

    PubMed Central

    Crooks, Ema T.; Tong, Tommy; Chakrabarti, Bimal; Narayan, Kristin; Georgiev, Ivelin S.; Menis, Sergey; Huang, Xiaoxing; Kulp, Daniel; Osawa, Keiko; Muranaka, Janelle; Stewart-Jones, Guillaume; Destefano, Joanne; O’Dell, Sijy; LaBranche, Celia; Robinson, James E.; Montefiori, David C.; McKee, Krisha; Du, Sean X.; Doria-Rose, Nicole; Kwong, Peter D.; Mascola, John R.; Zhu, Ping; Schief, William R.; Wyatt, Richard T.; Whalen, Robert G.; Binley, James M.

    2015-01-01

    Eliciting broad tier 2 neutralizing antibodies (nAbs) is a major goal of HIV-1 vaccine research. Here we investigated the ability of native, membrane-expressed JR-FL Env trimers to elicit nAbs. Unusually potent nAb titers developed in 2 of 8 rabbits immunized with virus-like particles (VLPs) expressing trimers (trimer VLP sera) and in 1 of 20 rabbits immunized with DNA expressing native Env trimer, followed by a protein boost (DNA trimer sera). All 3 sera neutralized via quaternary epitopes and exploited natural gaps in the glycan defenses of the second conserved region of JR-FL gp120. Specifically, trimer VLP sera took advantage of the unusual absence of a glycan at residue 197 (present in 98.7% of Envs). Intriguingly, removing the N197 glycan (with no loss of tier 2 phenotype) rendered 50% or 16.7% (n = 18) of clade B tier 2 isolates sensitive to the two trimer VLP sera, showing broad neutralization via the surface masked by the N197 glycan. Neutralizing sera targeted epitopes that overlap with the CD4 binding site, consistent with the role of the N197 glycan in a putative “glycan fence” that limits access to this region. A bioinformatics analysis suggested shared features of one of the trimer VLP sera and monoclonal antibody PG9, consistent with its trimer-dependency. The neutralizing DNA trimer serum took advantage of the absence of a glycan at residue 230, also proximal to the CD4 binding site and suggesting an epitope similar to that of monoclonal antibody 8ANC195, albeit lacking tier 2 breadth. Taken together, our data show for the first time that strain-specific holes in the glycan fence can allow the development of tier 2 neutralizing antibodies to native spikes. Moreover, cross-neutralization can occur in the absence of protecting glycan. Overall, our observations provide new insights that may inform the future development of a neutralizing antibody vaccine. PMID:26023780

  18. Vaccine-Elicited Tier 2 HIV-1 Neutralizing Antibodies Bind to Quaternary Epitopes Involving Glycan-Deficient Patches Proximal to the CD4 Binding Site.

    PubMed

    Crooks, Ema T; Tong, Tommy; Chakrabarti, Bimal; Narayan, Kristin; Georgiev, Ivelin S; Menis, Sergey; Huang, Xiaoxing; Kulp, Daniel; Osawa, Keiko; Muranaka, Janelle; Stewart-Jones, Guillaume; Destefano, Joanne; O'Dell, Sijy; LaBranche, Celia; Robinson, James E; Montefiori, David C; McKee, Krisha; Du, Sean X; Doria-Rose, Nicole; Kwong, Peter D; Mascola, John R; Zhu, Ping; Schief, William R; Wyatt, Richard T; Whalen, Robert G; Binley, James M

    2015-05-01

    Eliciting broad tier 2 neutralizing antibodies (nAbs) is a major goal of HIV-1 vaccine research. Here we investigated the ability of native, membrane-expressed JR-FL Env trimers to elicit nAbs. Unusually potent nAb titers developed in 2 of 8 rabbits immunized with virus-like particles (VLPs) expressing trimers (trimer VLP sera) and in 1 of 20 rabbits immunized with DNA expressing native Env trimer, followed by a protein boost (DNA trimer sera). All 3 sera neutralized via quaternary epitopes and exploited natural gaps in the glycan defenses of the second conserved region of JR-FL gp120. Specifically, trimer VLP sera took advantage of the unusual absence of a glycan at residue 197 (present in 98.7% of Envs). Intriguingly, removing the N197 glycan (with no loss of tier 2 phenotype) rendered 50% or 16.7% (n = 18) of clade B tier 2 isolates sensitive to the two trimer VLP sera, showing broad neutralization via the surface masked by the N197 glycan. Neutralizing sera targeted epitopes that overlap with the CD4 binding site, consistent with the role of the N197 glycan in a putative "glycan fence" that limits access to this region. A bioinformatics analysis suggested shared features of one of the trimer VLP sera and monoclonal antibody PG9, consistent with its trimer-dependency. The neutralizing DNA trimer serum took advantage of the absence of a glycan at residue 230, also proximal to the CD4 binding site and suggesting an epitope similar to that of monoclonal antibody 8ANC195, albeit lacking tier 2 breadth. Taken together, our data show for the first time that strain-specific holes in the glycan fence can allow the development of tier 2 neutralizing antibodies to native spikes. Moreover, cross-neutralization can occur in the absence of protecting glycan. Overall, our observations provide new insights that may inform the future development of a neutralizing antibody vaccine.

  19. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML descriptions. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed to serve the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the available data sets to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area or data within a particular range of pixel values can be selected.
Queries on multiple surfaces can be
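    The SQL-style WCPS queries mentioned in this record can be sketched as a simple request builder. The endpoint path and coverage name below are hypothetical; the query string follows the general shape of OGC WCPS syntax.

```python
# Sketch of building a WCPS (Web Coverage Processing Service) GET
# request. Endpoint path and coverage name are hypothetical examples.
from urllib.parse import urlencode

def wcps_request_url(endpoint, query):
    """Encode a WCPS ProcessCoverages query as a GET request URL."""
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    }
    return endpoint + "?" + urlencode(params)

# Select a spatial subset of one coverage and encode it as PNG.
query = ('for c in (glasgow_dtm) '
         'return encode(c[Lat(55.8:55.9), Long(-4.3:-4.2)], "png")')
url = wcps_request_url("http://earthserver.bgs.ac.uk/rasdaman/ows", query)
```

    The same `for ... return` form extends to multiple coverages (e.g. `for c in (a), d in (b) return ...`), which is how queries across several surfaces are expressed.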

  20. Interception and modification of network authentication packets with the purpose of allowing alternative authentication modes

    DOEpatents

    Kent, Alexander Dale [Los Alamos, NM

    2008-09-02

    Methods and systems in a data/computer network for authenticating identifying data transmitted from a client to a server through use of a gateway interface system which are communicately coupled to each other are disclosed. An authentication packet transmitted from a client to a server of the data network is intercepted by the interface, wherein the authentication packet is encrypted with a one-time password for transmission from the client to the server. The one-time password associated with the authentication packet can be verified utilizing a one-time password token system. The authentication packet can then be modified for acceptance by the server, wherein the response packet generated by the server is thereafter intercepted, verified and modified for transmission back to the client in a similar but reverse process.
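    The gateway flow in the patent above (intercept the authentication packet, verify its one-time password against a token system, then rewrite the packet into the credential the server expects) can be sketched as follows. The packet layout, the HOTP parameters, and the credential store are invented for illustration; only the one-time-password algorithm itself (RFC 4226 HOTP) is standard.

```python
# Illustrative gateway sketch: verify a one-time password, then rewrite
# the packet with the static server-side credential. Packet format and
# credential store are hypothetical.
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = b"12345678901234567890"     # per-user token seed (example)
SERVER_PASSWORD = "static-password"  # credential the real server expects

def gateway_rewrite(packet, counter):
    """Verify the OTP field, then swap in the server-side credential."""
    user, otp = packet.split(":", 1)
    if not hmac.compare_digest(otp, hotp(SECRET, counter)):
        raise PermissionError("one-time password rejected")
    return user + ":" + SERVER_PASSWORD  # modified packet, forwarded on

client_packet = "alice:" + hotp(SECRET, counter=7)
forwarded = gateway_rewrite(client_packet, counter=7)
```

    The patented system additionally intercepts and rewrites the server's response on the way back to the client; that reverse pass mirrors the forward one shown here.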

  1. The effect of a three-tier formulary on antidepressant utilization and expenditures.

    PubMed

    Hodgkin, Dominic; Parks Thomas, Cindy; Simoni-Wastila, Linda; Ritter, Grant A; Lee, Sue

    2008-06-01

    Health plans in the United States are struggling to contain rapid growth in their spending on medications. They have responded by implementing multi-tiered formularies, which label certain brand medications 'non-preferred' and require higher patient copayments for those medications. This multi-tier policy relies on patients' willingness to switch medications in response to copayment differentials. The antidepressant class has certain characteristics that may pose problems for implementation of three-tier formularies, such as differences in which medication works for which patient, and high rates of medication discontinuation. To measure the effect of a three-tier formulary on antidepressant utilization and spending, including decomposing spending allocations between patient and plan. We use claims and eligibility files for a large, mature nonprofit managed care organization that started introducing its three-tier formulary on January 1, 2000, with a staggered implementation across employer groups. The sample includes 109,686 individuals who were continuously enrolled members during the study period. We use a pretest-posttest quasi-experimental design that includes a comparison group, comprising members whose employer had not adopted three-tier as of March 1, 2000. This permits some control for potentially confounding changes that could have coincided with three-tier implementation. For the antidepressants that became nonpreferred, prescriptions per enrollee decreased 11% in the three-tier group and increased 5% in the comparison group. The own-copay elasticity of demand for nonpreferred drugs can be approximated as -0.11. Difference-in-differences regression finds that the three-tier formulary slowed the growth in the probability of using antidepressants in the post-period, which was 0.3 percentage points lower than it would have been without three-tier. The three-tier formulary also increased out-of-pocket payments while reducing plan payments and total spending.
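    The difference-in-differences logic in the abstract above can be sketched numerically. The utilization figures come from the abstract; the index values (1.00 baselines) are illustrative normalizations, not data from the study:

    ```python
    def did_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """Difference-in-differences: the change in the treated group
        minus the change in the comparison group."""
        return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

    # Per the abstract: prescriptions per enrollee for nonpreferred drugs
    # fell 11% in the three-tier group and rose 5% in the comparison group.
    effect = did_effect(1.00, 0.89, 1.00, 1.05)
    print(round(effect, 2))  # -0.16: a ~16-point relative reduction attributable to three-tier
    ```

    The subtraction of the comparison group's change is what controls for confounding trends that coincided with three-tier implementation.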

  2. Clients' collaboration in therapy: Self-perceptions and relationships with client psychological functioning, interpersonal relations, and motivation.

    PubMed

    Bachelor, Alexandra; Laverdière, Olivier; Gamache, Dominick; Bordeleau, Vincent

    2007-06-01

    To gain a closer understanding of client collaboration and its determinants, the first goal of this study involved the investigation of clients' perceptions of collaboration using a discovery-oriented methodology. Content analysis of 30 clients' written descriptions revealed three different modes of client collaboration, labeled active, mutual, and therapist-dependent, which emphasized client initiative and active participation, joint participation, and reliance on therapists' contributions to the work and change process, respectively. The majority of clients valued the therapist's active involvement and also emphasized the helpfulness of their collaborative experiences. In general, the therapist actions and attitudes involved in clients' views of good collaboration varied among clients. A second goal was to examine the relationships between client psychological functioning, quality of interpersonal relationships, and motivation, and clients' collaborative contributions, as rated by clients and therapists. Of these, only motivation was significantly associated with client collaboration, particularly in the perceptions of therapists. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  3. Predicting caregiver burden in general veterinary clients: Contribution of companion animal clinical signs and problem behaviors.

    PubMed

    Spitznagel, M B; Jacobson, D M; Cox, M D; Carlson, M D

    2018-06-01

    Caregiver burden, found in many clients with a chronically or terminally ill companion animal, has been linked to poorer psychosocial function in the client and greater utilization of non-billable veterinary services. To reduce client caregiver burden, its determinants must first be identified. This study examined if companion animal clinical signs and problem behaviors predict veterinary client burden within broader client- and patient-based risk factor models. Data were collected in two phases. Phase 1 included 238 companion animal owners, including those with a sick companion animal (n=119) and matched healthy controls (n=119) recruited online. Phase 2 was comprised of 602 small animal general veterinary hospital clients (n=95 with a sick dog or cat). Participants completed cross-sectional online assessments of caregiver burden, psychosocial resources (social support, active coping, self-mastery), and an item pool of companion animal clinical signs and problem behaviors. Several signs/behaviors correlated with burden, most prominently: weakness, appearing sad/depressed or anxious, appearing to have pain/discomfort, change in personality, frequent urination, and excessive sleeping/lethargy. Within patient-based risk factors, caregiver burden was predicted by frequency of the companion animal's signs/behaviors (P<.01). Within client-based factors, potentially modifiable factors of client reaction to the animal's signs/behaviors (P=.01), and client sense of control (P<.04) predicted burden. Understanding burden may enhance veterinarian-client communication, and is important due to potential downstream effects of client burden, such as higher workload for the veterinarian. Supporting the client's sense of control may help alleviate burden when amelioration of the companion animal's presentation is not feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model focuses on available server management in network environments. The (remote) backup servers are connected by a VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network is a way to use a public network infrastructure to link long-distance servers within a single network. The servers can be represented as "machines," so the system deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike existing models, in this enhanced model the availability of the auxiliary machines changes with each activation. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
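    The availability idea described above can be sketched with a toy simulation: a main server that sometimes fails, covered by a remote backup whose availability varies per activation. This is an illustrative sketch only, not the paper's stochastic model; all parameter names are invented:

    ```python
    import random

    def serve_requests(n_requests, p_main_fail, p_backup_available, seed=0):
        """Count requests served when a failed main server can be covered
        by a remote backup whose availability is drawn per activation."""
        rng = random.Random(seed)
        served = 0
        for _ in range(n_requests):
            if rng.random() >= p_main_fail:          # main server is up
                served += 1
            elif rng.random() < p_backup_available:  # backup takes over
                served += 1
        return served

    # With reliable backups, most requests survive main-server failures.
    print(serve_requests(10_000, p_main_fail=0.1, p_backup_available=0.9))
    ```

    Varying `p_backup_available` per activation (rather than fixing it) is the distinguishing feature the abstract attributes to the enhanced model.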

  5. The Therapeutic Alliance: Clients' Categorization of Client-Identified Factors

    ERIC Educational Resources Information Center

    Simpson, Arlene J.; Bedi, Robinder P.

    2012-01-01

    Clients' perspectives on the therapeutic alliance were examined using written descriptions of factors that clients believed to be helpful in developing a strong alliance. Fifty participants sorted previously collected statements into thematically similar piles and then gave each set of statements a title. Multivariate concept mapping statistical…

  6. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804

  7. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology.

    PubMed

    Zao, John K; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-the-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system.

  8. Effects of a Tier 3 Self-Management Intervention Implemented with and without Treatment Integrity

    ERIC Educational Resources Information Center

    Lower, Ashley; Young, K. Richard; Christensen, Lynnette; Caldarella, Paul; Williams, Leslie; Wills, Howard

    2016-01-01

    This study investigated the effects of a Tier 3 peer-matching self-management intervention on two elementary school students who had previously been less responsive to Tier 1 and Tier 2 interventions. The Tier 3 self-management intervention, which was implemented in the general education classrooms, included daily electronic communication between…

  9. Four peer reviews in support of the Tier 3 rulemaking ...

    EPA Pesticide Factsheets

    Peer review of ERG's KenCaryl (CO) estimated summer hot-soak distributions report in support of the Tier 3 rulemaking.

  10. An Internet supported workflow for the publication process in UMVF (French Virtual Medical University).

    PubMed

    Renard, Jean-Marie; Bourde, Annabel; Cuggia, Marc; Garcelon, Nicolas; Souf, Nathalie; Darmoni, Stephan; Beuscart, Régis; Brunetaud, Jean-Marc

    2007-01-01

    The " Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system, which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters it provides a mechanism to easily locate and validate the resources. For librarian it provide a way to improve the efficiency of indexation. For all, the utility provides a workflow system to control the publication process. On the students side, the application improves the value of the UMVF repository by facilitating the publication of new resources and by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application is used to implement the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search and check functionalities. A unique signature for each resource, was needed to provide security functionality and is implemented using a Digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.

  11. Effects of a Tier 3 Phonological Awareness Intervention on Preschoolers' Emergent Literacy

    ERIC Educational Resources Information Center

    Noe, Sean; Spencer, Trina D.; Kruse, Lydia; Goldstein, Howard

    2014-01-01

    This multiple baseline design study examined the effects of a Tier 3 early literacy intervention on low-income preschool children's phonological awareness (PA). Seven preschool children who did not make progress on identifying first sounds in words during a previous Tier 2 intervention participated in a more intensive Tier 3 intervention. Children…

  12. Use of Self-Monitoring to Maintain Program Fidelity of Multi-Tiered Interventions

    ERIC Educational Resources Information Center

    Nelson, J. Ron; Oliver, Regina M.; Hebert, Michael A.; Bohaty, Janet

    2015-01-01

    Multi-tiered system of supports represents one of the most significant advancements in improving the outcomes of students for whom typical instruction is not effective. While many practices need to be in place to make multi-tiered systems of support effective, accurate implementation of evidence-based practices by individuals at all tiers is…

  13. The Most Popular Astronomical Web Server in China

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Zhao, Yongheng

    Affected by the continuing downturn of the IT economy, free homepage space is becoming scarcer, and it is increasingly difficult for amateur astronomers who cannot pay for commercial hosting to build websites. Last May, with the support of the Chinese National Astronomical Observatory and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope project, we set up a dedicated web server (amateur.lamost.org) to provide free, large, stable and advertisement-free homepage space to Chinese amateur astronomers and non-professional organizations. After only one year, more than 80 websites are hosted on the server. More than 10,000 visitors from nearly 40 countries visit the server, and the data they download exceeds 4 gigabytes per day. The server has become the most popular amateur astronomical web server in China and stores the most abundant Chinese amateur astronomical resources. Because of this success, the service has drawn considerable attention from related institutions; recently, the Chinese National Natural Science Foundation has shown great interest in supporting it. In this paper, the origin of the idea, the construction of the server, its present utilization and our future plans are described.

  14. Evaluation of Fourth-Year Veterinary Students' Client Communication Skills: Recommendations for Scaffolded Instruction and Practice.

    PubMed

    Stevens, Brenda J; Kedrowicz, April A

    Effective client communication is important for success in veterinary practice. The purpose of this project was to describe one approach to communication training and explore fourth-year veterinary students' communication skills through an evaluation of their interactions with clients during a general practice rotation. Two raters coded 20 random videotaped interactions simultaneously to assess students' communication, including their ability to initiate the session, incorporate open-ended questions, listen reflectively, express empathy, incorporate appropriate nonverbal communication, and attend to organization and sequencing. We provide baseline data that will guide future instruction in client communication. Results showed that students' communication skills require development. Half of the students sampled excelled at open-ended inquiry (n=10), and 40% (n=8) excelled at nonverbal communication. Students needed improvement on greeting clients by name and introducing themselves and their role (n=15), reflective listening (n=18), empathy (n=17), and organization and sequencing (n=18). These findings suggest that more focused instruction and practice are necessary in maintaining an organized structure, reflective listening, and empathy to create a relationship-centered approach to care.

  15. Positive Behavior Supports: Tier 2 Interventions in Middle Schools

    ERIC Educational Resources Information Center

    Hoyle, Carol G.; Marshall, Kathleen J.; Yell, Mitchell L.

    2011-01-01

    School personnel are using Schoolwide Positive Behavior Supports in public schools throughout the United States. A number of studies have evaluated the universal level, or Tier 1, of Schoolwide Positive Behavior Supports. In this study, the authors describe and analyze the interventions offered as options for use for Tier 2 in middle schools…

  16. CIVET: Continuous Integration, Verification, Enhancement, and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alger, Brian; Gaston, Derek R.; Permann, Cody J

    A Git server (GitHub, GitLab, BitBucket) sends event notifications to the Civet server, either a "Pull Request" or a "Push" notification. Civet then checks the database to determine which tests need to be run and marks them as ready to run. Civet clients, running on dedicated machines, query the server for available jobs that are ready to run. When a client gets a job, it executes the scripts attached to the job and reports the output and exit status back to the server. When the client updates the server, the server also updates the Git server with the result of the job, as well as updating the main web page.
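    The client/server cycle described above amounts to a polling loop: fetch a ready job, run its scripts, report the result. The sketch below is illustrative only; the job format and callback names are invented, not Civet's actual API:

    ```python
    def run_jobs(fetch_job, execute, report):
        """Poll for ready jobs, run each job's script, and report the
        output and exit status back, as in the Civet client cycle."""
        while (job := fetch_job()) is not None:
            output, exit_status = execute(job["script"])
            report(job["id"], output, exit_status)

    # Tiny in-memory stand-ins for the server round-trips:
    queue = [{"id": 1, "script": "echo ok"}]
    results = {}
    run_jobs(
        fetch_job=lambda: queue.pop() if queue else None,
        execute=lambda script: ("ok", 0),
        report=lambda job_id, out, status: results.update({job_id: (out, status)}),
    )
    print(results)  # {1: ('ok', 0)}
    ```

    In the real system, `fetch_job` and `report` would be HTTP calls to the Civet server, and the server would in turn push each job's status to the Git server.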

  17. A Lightweight TwiddleNet Portal

    DTIC Science & Technology

    2008-03-01

    [Figure residue: diagram of a Media Independent Handover (MIH) architecture, showing the MIH Function on the client station and in the network (MIH Server/Controller), the Event, Command, and Information Services carried over an L2 transport, and handover across 802 networks, 3GPP, and 3GPP2 networks. No abstract text is recoverable.]

  18. An Evaluation of Army Wellness Center Clients' Health-Related Outcomes.

    PubMed

    Rivera, L Omar; Ford, Jessica Danielle; Hartzell, Meredith Marie; Hoover, Todd Allan

    2018-01-01

    To examine whether Army community members participating in a best-practice based workplace health promotion program (WHPP) experience goal-moderated improvements in health-related outcomes. Pretest/posttest outcome evaluation examining an autonomously participating client cohort over 1 year. Army Wellness Center facilities on 19 Army installations. Army community members sample (N = 5703), mostly Active Duty Soldiers (64%). Assessment of health risks with feedback, health assessments, health education classes, and health coaching sessions conducted by health educators at a recommended frequency of once a month for 3 to 12 months. Initial and follow-up outcome assessments of body mass index (BMI), body fat, cardiorespiratory fitness, blood pressure, and perceived stress. Mixed model linear regression testing for goal-moderated improvements in outcomes. Clients experienced significant improvements in body fat (-2% change), perceived stress (-6% to -12% change), cardiorespiratory fitness (+6% change), and blood pressure (-1% change) regardless of health-related goal. Only clients with a weight loss goal experienced BMI improvement (-1% change). Follow-up outcome assessment rates ranged from 44% (N = 2509) for BMI to 6% (N = 342) for perceived stress. Army Wellness Center clients with at least 1 follow-up outcome assessment experienced improvements in military readiness correlates and chronic disease risk factors. Evaluation design and follow-up-related limitations notwithstanding, results suggest that best practices in WHPPs can effectively serve a globally distributed military force.

  19. To Wait in Tier 1 or Intervene Immediately: A Randomized Experiment Examining First Grade Response to Intervention (RTI) in Reading.

    PubMed

    Al Otaiba, Stephanie; Connor, Carol M; Folsom, Jessica S; Wanzek, Jeanne; Greulich, Luana; Schatschneider, Christopher; Wagner, Richard K

    2014-10-01

    This randomized controlled experiment compared the efficacy of two Response to Intervention (RTI) models - Typical RTI and Dynamic RTI - and included 34 first-grade classrooms (n = 522 students) across 10 socio-economically and culturally diverse schools. Typical RTI was designed to follow the two-stage RTI decision rules that wait to assess response to Tier 1 in many districts, whereas Dynamic RTI provided Tier 2 or Tier 3 interventions immediately according to students' initial screening results. Interventions were identical across conditions except for when intervention began. Reading assessments included letter-sound, word, and passage reading, and teacher-reported severity of reading difficulties. An intent-to-treat analysis using multi-level modeling indicated an overall effect favoring the Dynamic RTI condition (d = .36); growth curve analyses demonstrated that students in Dynamic RTI showed an immediate score advantage, and effects accumulated across the year. Analyses of standard score outcomes confirmed that students in the Dynamic condition who received Tier 2 and Tier 3 ended the study with significantly higher reading performance than students in the Typical condition. Implications for RTI implementation practice and for future research are discussed.

  20. To Wait in Tier 1 or Intervene Immediately: A Randomized Experiment Examining First Grade Response to Intervention (RTI) in Reading

    PubMed Central

    Al Otaiba, Stephanie; Connor, Carol M.; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Schatschneider, Christopher; Wagner, Richard K.

    2014-01-01

    This randomized controlled experiment compared the efficacy of two Response to Intervention (RTI) models – Typical RTI and Dynamic RTI - and included 34 first-grade classrooms (n = 522 students) across 10 socio-economically and culturally diverse schools. Typical RTI was designed to follow the two-stage RTI decision rules that wait to assess response to Tier 1 in many districts, whereas Dynamic RTI provided Tier 2 or Tier 3 interventions immediately according to students’ initial screening results. Interventions were identical across conditions except for when intervention began. Reading assessments included letter-sound, word, and passage reading, and teacher-reported severity of reading difficulties. An intent-to-treat analysis using multi-level modeling indicated an overall effect favoring the Dynamic RTI condition (d = .36); growth curve analyses demonstrated that students in Dynamic RTI showed an immediate score advantage, and effects accumulated across the year. Analyses of standard score outcomes confirmed that students in the Dynamic condition who received Tier 2 and Tier 3 ended the study with significantly higher reading performance than students in the Typical condition. Implications for RTI implementation practice and for future research are discussed. PMID:25530622

  1. The real relationship inventory: development and psychometric investigation of the client form.

    PubMed

    Kelley, Frances A; Gelso, Charles J; Fuertes, Jairo N; Marmarosh, Cheri; Lanier, Stacey Holmes

    2010-12-01

    The development and validation of a client version of the Real Relationship Inventory (RRI-C) is reported. Using a sample of clients (n = 94) who were currently in psychotherapy, a 24-item measure was developed consisting of two subscales (Realism and Genuineness) and a total score. This 24-item version and other measures used for validation were completed by 93 additional clients. Results of the present study offer initial support for the validity and reliability of the RRI-C. The RRI-C correlated significantly in theoretically expected ways with measures of the client-rated working alliance and therapists' congruence, clients' observing ego, and client ratings of client and therapist real relationship on an earlier measure of the real relationship (Eugster & Wampold, 1996). A nonsignificant relation was found between the RRI-C and a measure of social desirability, providing support for discriminant validity. A confirmatory factor analysis supported the two theorized factors of the RRI-C. The authors discuss the importance of measuring clients' perceptions of the real relationship. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  2. Clients' and workers' perceptions on clients' functional ability and need for help: home care in municipalities.

    PubMed

    Hammar, Teija; Perälä, Marja-Leena; Rissanen, Pekka

    2009-03-01

    The aim of the study was to compare clients' and named home care (HC) workers' perceptions of clients' functional ability (FA) and need for help, and to analyse which client- and municipality-related factors are associated with perceptions of a client's FA. A total of 686 Finnish HC clients were interviewed in 2001, and a questionnaire was sent to 686 HC workers. FA was assessed by activities of daily living (ADL), which included both basic/physical (PADL) and instrumental (IADL) activities. The association between a client's FA and municipality-related variables was analysed using hierarchical logistic regression models. The findings indicated that clients' and HC workers' perceptions of what the clients were able to do were similar for the PADL functions, but differed for the IADL functions of mobility and climbing stairs. A smaller proportion of clients, compared with HC workers, assessed themselves to be in need of help in all ADL functions. Use of home help and bathing services increased the probability of belonging to the 'poor' FA class, while living alone and small municipality size decreased the probability. The study indicates that although clients and workers assessed a client's FA fairly similarly, there were major differences in perceptions of clients' needs for help in ADL functions. Clients' and workers' shared view of the need for help forms a basis for high-quality care; therefore, the perceptions of both clients and workers must be taken into account when planning care and services. There was also variation in clients' FA between municipalities, although only municipality size showed some association with the variation. The probability that clients with a lower FA are cared for in HC is higher if the clients live in large- rather than small-sized municipalities. This may reflect a better mix of services and resources in large-sized municipalities.

  3. Developing the Capacity to Implement Tier 2 and Tier 3 Supports: How Do We Support Our Faculty and Staff in Preparing for Sustainability?

    ERIC Educational Resources Information Center

    Oakes, Wendy Peia; Lane, Kathleen Lynne; Germer, Kathryn A.

    2014-01-01

    School-site and district-level leadership teams rely on the existing knowledge base to select, implement, and evaluate evidence-based practices meeting students' multiple needs within the context of multitiered systems of support. The authors focus on the stages of implementation science as applied to Tier 2 and Tier 3 supports; the…

  4. Enhancing Clients' Communication Regarding Goals for Using Psychiatric Medications.

    PubMed

    Deegan, Patricia E; Carpenter-Song, Elizabeth; Drake, Robert E; Naslund, John A; Luciano, Alison; Hutchison, Shari L

    2017-08-01

    Discordance between psychiatric care providers' and clients' goals for medication treatment is prevalent and is a barrier to person-centered care. Power statements-short self-advocacy statements prepared by clients in response to a two-part template-offer a novel approach to help clients clarify and communicate their personal goals for using psychiatric medications. This study described the power statement method and examined a sample of power statements to understand clients' goals for medication treatment. More than 17,000 adults with serious mental illness at 69 public mental health clinics had the option to develop power statements by using a Web application located in the clinic waiting areas. A database query determined the percentage of clients who entered power statements into the Web application. The authors examined textual data from a random sample of 300 power statements by using content analysis. Nearly 14,000 (79%) clients developed power statements. Of the 277 statements in the sample deemed appropriate for content analysis, 272 statements had responses to the first part of the template and 230 had responses to the second part. Clients wanted psychiatric medications to help control symptoms in the service of improving functioning. Common goals for taking psychiatric medications (N=230 statements) were to enhance relationships (51%), well-being (32%), self-sufficiency (23%), employment (19%), hobbies (15%), and self-improvement (10%). People with serious mental illness typically viewed medications as a means to pursue meaningful life goals. Power statements appear to be a simple and scalable technique to enhance clients' communication of their goals for psychiatric medication treatment.

  5. 40 CFR 1043.50 - Approval of methods to meet Tier 1 retrofit NOX standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Approval of methods to meet Tier 1... SUBJECT TO THE MARPOL PROTOCOL § 1043.50 Approval of methods to meet Tier 1 retrofit NOX standards... enable Pre-Tier 1 engines to meet the Tier 1 NOX standard of regulation 13 of Annex VI. Any person may...

  6. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  7. CMS tier structure and operation of the experiment-specific tasks in Germany

    NASA Astrophysics Data System (ADS)

    Nowack, A.

    2008-07-01

    In Germany, several university institutes and research centres take part in the CMS experiment. For data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY at Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. The two parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY at Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the building of a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation covers physics topics as well as technical and operational issues. All available communication channels, such as email, phone, monthly video conferences, and regular personal meetings, are used. For example, the distribution of data sets is coordinated globally within Germany. The CMS-specific services, such as the data transfer tool PhEDEx and the Monte Carlo production, are also operated by people from different sites in order to spread the knowledge widely and increase redundancy in terms of operators.

  8. mtDNA-Server: next-generation sequencing data analysis of human mitochondrial DNA in the cloud.

    PubMed

    Weissensteiner, Hansi; Forer, Lukas; Fuchsberger, Christian; Schöpf, Bernd; Kloss-Brandstätter, Anita; Specht, Günther; Kronenberg, Florian; Schönherr, Sebastian

    2016-07-08

    Next generation sequencing (NGS) allows mitochondrial DNA (mtDNA) characteristics such as heteroplasmy (i.e. intra-individual sequence variation) to be investigated in greater detail. While several pipelines for analyzing heteroplasmies exist, issues in usability, accuracy of results and interpretation of the final data limit their use. Here we present mtDNA-Server, a scalable web server for the analysis of mtDNA studies of any size, with a special focus on usability as well as reliable identification and quantification of heteroplasmic variants. The mtDNA-Server workflow includes parallel read alignment, heteroplasmy detection, artefact or contamination identification, variant annotation, and several quality control metrics that are often neglected in current mtDNA NGS studies. All computational steps are parallelized with Hadoop MapReduce and executed graphically with Cloudgene. We validated the underlying heteroplasmy and contamination detection model by generating four artificial sample mix-ups on two different NGS devices. Our evaluation data show that mtDNA-Server detects heteroplasmies and artificial recombinations down to the 1% level with perfect specificity and outperforms existing approaches regarding sensitivity. mtDNA-Server is currently able to analyze the 1000G Phase 3 data (n = 2,504) in less than 5 h and is freely accessible at https://mtdna-server.uibk.ac.at. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
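    As a rough illustration of the kind of per-site check such a pipeline performs (a simplified sketch, not mtDNA-Server's actual detection model), a site can be flagged as heteroplasmic when the minor allele fraction of the aligned bases exceeds the reported 1% detection level:

```python
from collections import Counter

# Simplified sketch (illustrative only, not mtDNA-Server's model): call a
# site heteroplasmic when the minor allele fraction of aligned bases at
# that position exceeds a detection threshold, here 1%.
def heteroplasmy_level(bases, threshold=0.01):
    """Return the minor allele fraction if it exceeds threshold, else 0.0."""
    counts = Counter(bases)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0                      # no coverage or fully homoplasmic
    minor = total - counts.most_common(1)[0][1]
    frac = minor / total
    return frac if frac >= threshold else 0.0

# 3 'G' bases among 100 aligned reads -> 3% heteroplasmy
print(heteroplasmy_level("A" * 97 + "G" * 3))  # 0.03
```

    A real caller additionally models sequencing error and strand bias per base, which is what lets the published tool keep specificity at the 1% level.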

  9. Varying the Order in which Positive and Negative Information Is Presented: Effects on Counselors' Judgments of Clients' Mental Health.

    ERIC Educational Resources Information Center

    Pain, Michelle D.; Sharpley, Christopher F.

    1989-01-01

    Studied ratings by Australian counseling psychology graduate students (N=38) and practicing counseling psychologists (N=40) of four hypothetical clients' mental health. Presented "good," "bad," and "neutral" information on each client and asked subjects to rate clients on the Global Assessment Scale. Found bad information…

  10. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of a unified terminology, the development of environmental geodata access, processing and visualization services, as well as client applications, turns out to be quite a sophisticated task. According to the general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, as well as display of map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on the SOA and might be considered as complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system is presented, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client comprises three basic tiers:
    1. A metadata tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
    2. A middleware tier of JavaScript objects implementing methods for handling: NetCDF metadata; a Task XML object for configuring user calculations, input and output formats; and OGC WMS/WFS cartographical services.
    3. A graphical user interface (GUI) tier of JavaScript objects implementing the web application business logic.
    The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects implementing methods for handling geospatial
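    The metadata tier described above can be pictured as a set of dataset-description JSON objects. The sketch below is hypothetical; the field names are illustrative assumptions, not the system's actual schema:

```python
import json

# Hypothetical dataset-description object for the JSON metadata tier.
# Field names are illustrative, not the actual repository schema.
dataset_meta = {
    "id": "reanalysis_t2m",                       # assumed dataset identifier
    "format": "NetCDF",
    "spatial_resolution_deg": 0.75,
    "temporal_resolution": "6h",
    "parameters": ["air_temperature_2m"],         # meteorological parameters
    "valid_methods": ["mean", "anomaly", "trend"] # valid processing methods
}

# The client would retrieve such objects from the server as JSON:
print(json.dumps(dataset_meta, indent=2))
```

    The GUI tier can then populate selection menus (parameters, processing methods) directly from such objects without hard-coding dataset knowledge into the client.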

  11. Impact of exchanges and client-therapist alliance in online-text psychotherapy.

    PubMed

    Reynolds, D'Arcy J; Stiles, William B; Bailer, A John; Hughes, Michael R

    2013-05-01

    The impact of exchanges and the client-therapist alliance in online text therapy were compared to previously published results for face-to-face therapy, and the moderating effects of four participant factors found significant in previously published face-to-face studies were investigated using mixed-effects modeling. Therapists (N=30) and clients (N=30) engaged in online therapy were recruited from private practitioner sites, e-clinics, online counseling centers, and mental-health-related discussion boards. In a naturalistic design, they each visited an online site weekly and completed the standard impact and alliance questionnaires for at least 6 weeks. Results indicated that the impact of exchanges and client-therapist alliance in text therapy was similar to, but in some respects more positive than, previous evaluations of face-to-face therapy. The significance of participant factors previously found to influence impact and alliance in face-to-face therapy (client symptom severity, social support, therapist theoretical orientation, and therapist experience) was not replicated, except that therapists with the more symptomatic clients rated their text exchanges as less smooth and comfortable. Although its small size and naturalistic design impose limitations on sensitivity and generalizability, this study provides some insights into treatment impact and the alliance in online therapy.

  12. Achieving Tier 4 Emissions in Biomass Cookstoves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchese, Anthony; DeFoort, Morgan; Gao, Xinfeng

    Previous literature on top-lit updraft (TLUD) gasifier cookstoves suggested that these stoves have the potential to be the lowest emitting biomass cookstove. However, the previous literature also demonstrated a high degree of variability in TLUD emissions and performance, and a lack of general understanding of the TLUD combustion process. The objective of this study was to improve understanding of the combustion process in TLUD cookstoves. In a TLUD, biomass is gasified and the resulting producer gas is burned in a secondary flame located just above the fuel bed. The goal of this project is to enable the design of a more robust TLUD that consistently meets Tier 4 performance targets through a better understanding of the underlying combustion physics. The project featured a combined modeling, experimental and product design/development effort comprising: development of a model of the gasification process in the biomass fuel bed; development of a CFD model of the secondary combustion zone; experiments with a modular TLUD test bed to provide information on how stove design, fuel properties, and operating mode influence performance and to provide data needed to validate the fuel bed model; planar laser-induced fluorescence (PLIF) experiments with a two-dimensional optical test bed to provide insight into the flame dynamics in the secondary combustion zone and data to validate the CFD model; and design, development and field testing of a market-ready TLUD prototype. Over 180 tests of 40 different configurations of the modular TLUD test bed were performed to demonstrate how stove design, fuel properties and operating mode influence performance, and the conditions under which Tier 4 emissions are obtainable. Images of OH and acetone PLIF were collected at 10 kHz with the optical test bed. The modeling and experimental results informed the design of a TLUD prototype that met Tier 3 to Tier 4 specifications in emissions and Tier 2 in efficiency.

  13. You're a What? Process Server

    ERIC Educational Resources Information Center

    Torpey, Elka

    2012-01-01

    In this article, the author talks about the role and functions of a process server. The job of a process server is to hand deliver legal documents to the people involved in court cases. These legal documents range from a summons to appear in court to a subpoena for producing evidence. Process serving can involve risk, as some people take out their…

  14. Tier 2 Team Processes and Decision-Making in a Comprehensive Three-Tiered Model

    ERIC Educational Resources Information Center

    Pool, Juli L.; Carter, Deborah Russell; Johnson, Evelyn S.

    2013-01-01

    Three-tiered models of academic and behavioral support are being increasingly adopted across the nation, and with that adoption has come an increasing message that designing and implementing effective practices alone is not enough. Systems are needed to help staff to collectively implement best practices. These systems, as well as effective…

  15. Pooling the resources of the CMS Tier-1 sites

    DOE PAGES

    Apyan, A.; Badillo, J.; Cruz, J. Diaz; ...

    2015-12-23

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service.
We detail the

  16. Pooling the resources of the CMS Tier-1 sites

    NASA Astrophysics Data System (ADS)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.

    2015-12-01

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the

  17. Tier II Forms and Instructions

    EPA Pesticide Factsheets

    Facilities must comply with the new requirements on the Tier II emergency and hazardous chemical inventory form starting reporting year 2013, which is due by March 1, 2014. Some states may have specific requirements for reporting and submission.

  18. Effects of Client Violence on Social Work Students: A National Study

    ERIC Educational Resources Information Center

    Criss, Pam

    2010-01-01

    This study uses a work stress theoretical framework to examine the effects of direct and indirect client violence on a randomly selected national sample of MSW and BSW social work students from the National Association of Social Workers (N=595). Client violence variables were analyzed in relationship to fear of future violence and occupational…

  19. GRAMM-X public web server for protein–protein docking

    PubMed Central

    Tovchigrechko, Andrey; Vakser, Ilya A.

    2006-01-01

    Protein docking software GRAMM-X and its web interface extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016

  20. Counselor Responsiveness to Client Religiousness.

    ERIC Educational Resources Information Center

    Kelly, Eugene W., Jr.

    1990-01-01

    Presents eight categories of client attitudes toward religion and suggests opportunities for religiously oriented counselor responses. Uses four categories to describe how religion may be associated with specific client issues. Contends that an informed appreciation of clients' religiousness and the religious dimensions of many client issues can…

  1. Institutional policy changes aimed at addressing obesity among mental health clients.

    PubMed

    Knol, Linda L; Pritchett, Kelly; Dunkin, Jeri

    2010-05-01

    People with mental illness often experience unique barriers to healthy eating and physical activity. For these clients, interventions should focus on changes in the immediate environment to change behaviors. The purpose of this project was to implement and evaluate policy changes that would limit calorie intake and increase calorie expenditure of clients receiving mental health services. This intervention was implemented in a rural mental health system in the southeastern United States. Clients live in small group homes, where they are served breakfast, dinner, and a snack, and attend outpatient day treatment programs, where they are served lunch and can purchase snacks from vending machines. This intervention included institutional policy changes that altered menus and vending machine options and implemented group walking programs. Primary outcome measures were changes in clients' weight at 3 and 6 months after policy implementation. At the 3-month follow-up, the median weight loss for overweight/obese clients (n = 45) was 1.4 kg. The 33 overweight/obese clients who were still in the group homes at the 6-month follow-up either maintained or continued to lose weight. Institutional policy changes aimed at improving dietary intake and physical activity levels among clients receiving mental health services can promote weight loss in overweight clients.

  2. 40 CFR 158.510 - Tiered testing options for nonfood pesticides.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Tiered testing options for nonfood pesticides. 158.510 Section 158.510 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Toxicology § 158.510 Tiered testing options for nonfood...

  3. Professional development to differentiate kindergarten Tier 1 instruction: Can already effective teachers improve student outcomes by differentiating Tier 1 instruction?

    PubMed Central

    Otaiba, Stephanie Al; Folsom, Jessica S.; Wanzek, Jeannie; Greulich, Luana; Wasche, Jessica; Schatschneider, Christopher; Connor, Carol

    2015-01-01

    Two primary purposes guided this quasi-experimental within-teacher study: (1) to examine changes from baseline through two years of professional development (Individualizing Student Instruction) in kindergarten teachers’ differentiation of Tier 1 literacy instruction; (2) to examine changes in reading and vocabulary of three cohorts of the teachers’ students (n = 416). Teachers’ instruction was observed and students were assessed on standardized measures of vocabulary and word reading. Results suggested that teachers significantly increased their differentiation and students showed significantly greater word reading outcomes relative to baseline. No change was observed for vocabulary. Results have implications for supporting teacher effectiveness through technology-supported professional development. PMID:27346927

  4. Professional development to differentiate kindergarten Tier 1 instruction: Can already effective teachers improve student outcomes by differentiating Tier 1 instruction?

    PubMed

    Otaiba, Stephanie Al; Folsom, Jessica S; Wanzek, Jeannie; Greulich, Luana; Wasche, Jessica; Schatschneider, Christopher; Connor, Carol

    Two primary purposes guided this quasi-experimental within-teacher study: (1) to examine changes from baseline through two years of professional development (Individualizing Student Instruction) in kindergarten teachers' differentiation of Tier 1 literacy instruction; (2) to examine changes in reading and vocabulary of three cohorts of the teachers' students (n = 416). Teachers' instruction was observed and students were assessed on standardized measures of vocabulary and word reading. Results suggested that teachers significantly increased their differentiation and students showed significantly greater word reading outcomes relative to baseline. No change was observed for vocabulary. Results have implications for supporting teacher effectiveness through technology-supported professional development.

  5. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

    A multi-tier architecture consists of numerous craft organized into orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet, or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure their significance for being passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and the MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with the data collection and assessment tasks that are required to validate or correct elements of its model. A model of the expected conditions is sent to the lower-level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities.
When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (v

  6. Progress in landslide susceptibility mapping over Europe using Tier-based approaches

    NASA Astrophysics Data System (ADS)

    Günther, Andreas; Hervás, Javier; Reichenbach, Paola; Malet, Jean-Philippe

    2010-05-01

    The European Thematic Strategy for Soil Protection aims, among other objectives, to ensure a sustainable use of soil. The legal instrument of the strategy, the proposed Framework Directive, suggests identifying priority areas for several soil threats, including landslides, using a coherent and compatible approach based on the use of common thematic data. In a first stage, this can be achieved through landslide susceptibility mapping using geographically nested, multi-step tiered approaches, where areas identified as highly susceptible by a first, synoptic-scale Tier ("Tier 1") can then be further assessed and mapped at larger scale by successive Tiers. In order to identify areas prone to landslides at European scale ("Tier 1"), a number of thematic terrain and environmental data sets already available for the whole of Europe can be used as input for a continental-scale susceptibility model. However, since no coherent landslide inventory data is available at the moment for the whole continent, qualitative heuristic zonation approaches are proposed. For "Tier 1" a preliminary, simplified model has been developed. It consists of an equally weighted combination of a reduced, continent-wide common dataset of landslide conditioning factors, including soil parent material, slope angle and land cover, to derive a landslide susceptibility index using raster mapping units consisting of 1 x 1 km pixels. A preliminary European-wide susceptibility map has thus been produced at 1:1 Million scale, since this is compatible with that of the datasets used. The map has been validated by means of a ratio of effectiveness using samples from landslide inventories in Italy, Austria, Hungary and the United Kingdom. Although not differentiated for specific geomorphological environments or specific landslide types, the experimental model reveals a relatively good performance in many European regions at a 1:1 Million scale.
An additional "Tier 1" susceptibility map at the same scale and using

  7. MODBUS APPLICATION AT JEFFERSON LAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Jianxun; Seaton, Chad; Philip, Sarin

    Modbus is a client/server communication model. In our applications, the embedded Ethernet device XPort is designed as the server and a SoftIOC running EPICS Modbus support is the client. The SoftIOC builds a Modbus request from parameters contained in a demand sent by the EPICS application to the Modbus client interface. On reception of the Modbus request, the Modbus server activates a local action to read, write, or achieve some other action. The main Modbus server functions are therefore to wait for a Modbus request on TCP port 502, process the request, and then build a Modbus response.
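    The request/response exchange described above can be sketched by building a standard Modbus/TCP frame. The example below follows the published Modbus/TCP framing (MBAP header plus PDU) for a "Read Holding Registers" (function 0x03) request; it is a generic illustration, not the SoftIOC's actual framing code:

```python
import struct

# Sketch of a Modbus/TCP "Read Holding Registers" (function code 0x03)
# request, as a client like the SoftIOC would send to a server such as
# the XPort on TCP port 502. Layout follows the Modbus/TCP specification.
def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code, starting address, quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length (PDU + unit id), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x01, 0x0000, 2)
print(frame.hex())  # "000100000006010300000002"
```

    The server's response reuses the same MBAP framing, echoing the transaction id so the client can match responses to outstanding requests.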

  8. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  9. Interactive Machine Learning at Scale with CHISSL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Grace, Emily A.; Volkova, Svitlana

    We demonstrate CHISSL, a scalable client-server system for real-time interactive machine learning. Our system is capable of incorporating user feedback incrementally and immediately without a structured or pre-defined prediction task. Computation is partitioned between a lightweight web client and a heavyweight server. The server relies on representation learning and agglomerative clustering to learn a dendrogram, a hierarchical approximation of a representation space. The client uses only this dendrogram to incorporate user feedback into the model via transduction. Distances and predictions for each unlabeled instance are updated incrementally and deterministically, with O(n) space and time complexity. Our algorithm is implemented in a functional prototype, designed to be easy to use by non-experts. The prototype organizes the large amounts of data into recommendations. This allows the user to interact with actual instances by dragging and dropping to provide feedback in an intuitive manner. We applied CHISSL to several domains including cyber, social media, and geo-temporal analysis.
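    The incremental transduction step can be pictured with a simplified sketch (an assumed simplification, not the authors' exact algorithm): when the user labels one instance, every unlabeled instance adopts the label of its nearest labeled neighbor, and only distances to the newly labeled point need to be checked, giving the O(n) update per feedback event:

```python
# Simplified CHISSL-style transduction sketch (illustrative assumption):
# pred[i] follows the nearest labeled instance, and one user label is
# absorbed with a single O(n) pass over the unlabeled instances.
def update_predictions(X, dist, pred, labeled_idx, label, metric):
    """Incorporate one user label; dist[i] tracks distance to the nearest labeled point."""
    for i, x in enumerate(X):
        d = metric(x, X[labeled_idx])
        if d < dist[i]:          # the new labeled point is now the closest
            dist[i] = d
            pred[i] = label
    return dist, pred

X = [0.0, 0.1, 0.9, 1.0]                 # toy 1-D "representation space"
dist = [float("inf")] * len(X)
pred = [None] * len(X)
metric = lambda a, b: abs(a - b)

dist, pred = update_predictions(X, dist, pred, 0, "A", metric)
dist, pred = update_predictions(X, dist, pred, 3, "B", metric)
print(pred)  # ['A', 'A', 'B', 'B']
```

    A real deployment would measure distances in the learned representation space and exploit the dendrogram to organize instances into recommendations; the flat scan here only keeps the O(n), deterministic character of the update while staying self-contained.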

  10. 7 CFR 1794.16 - Tiering.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... where better decision making will be fostered (40 CFR 1502.20). ... 7 Agriculture 12 2010-01-01 2010-01-01 false Tiering. 1794.16 Section 1794.16 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE...

  11. Interactive client side data visualization with d3.js

    NASA Astrophysics Data System (ADS)

    Rodzianko, A.; Versteeg, R.; Johnson, D. V.; Soltanian, M. R.; Versteeg, O. J.; Girouard, M.

    2015-12-01

    Geoscience data associated with near surface research and operational sites is increasingly voluminous and heterogeneous, both in terms of providers and data types (e.g. geochemical, hydrological, geophysical and modeling data of varying spatiotemporal characteristics). Such data allow scientists to investigate fundamental hydrological and geochemical processes relevant to agriculture, water resources and climate change. For scientists to easily share, model and interpret such data requires novel tools with capabilities for interactive data visualization. Under sponsorship of the US Department of Energy, Subsurface Insights is developing the Predictive Assimilative Framework (PAF): a cloud based subsurface monitoring platform which can manage, process and visualize large heterogeneous datasets. Over the last year we transitioned our visualization method from a server-side approach (in which images and animations were generated using JFreeChart and VisIt) to a client-side one that utilizes the D3 JavaScript library. Datasets are retrieved using web service calls to the server, returned as JSON objects and visualized within the browser. Users can interactively explore primary and secondary datasets from various field locations. Our current capabilities include interactive data contouring and heterogeneous time series data visualization. While this approach is powerful and not necessarily unique, special attention needs to be paid to latency and responsiveness issues, as well as to cross-browser code compatibility, so that users have an identical, fluid and frustration-free experience across different computational platforms. We gratefully acknowledge support from the US Department of Energy under SBIR Award DOE DE-SC0009732, the use of data from the Lawrence Berkeley National Laboratory (LBNL) Sustainable Systems SFA Rifle field site, and collaboration with LBNL SFA scientists.
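    The server-to-client contract in such a design is simply a JSON payload that the browser can hand to D3. A hypothetical sketch of the server-side packaging (names and payload structure are illustrative assumptions, not the actual PAF API):

```python
import json

# Hypothetical sketch: instead of rendering images server-side, the server
# returns raw data as JSON for client-side (e.g. D3) rendering. The function
# name and payload structure are illustrative, not the actual PAF web service.
def time_series_response(parameter, times, values):
    """Package one time series as a JSON payload for a web service response."""
    payload = {
        "parameter": parameter,
        "points": [{"t": t, "v": v} for t, v in zip(times, values)],
    }
    return json.dumps(payload)

resp = time_series_response("soil_moisture", ["2015-01-01", "2015-01-02"], [3.2, 3.5])
print(resp)
```

    Shifting rendering to the client in this way trades server CPU for network latency and browser compatibility concerns, which is exactly the trade-off the abstract highlights.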

  12. Distributed Digital Survey Logbook Built on GeoServer and PostGIS

    NASA Astrophysics Data System (ADS)

    Jovicic, Aleksandar; Castelli, Ana; Kljajic, Zoran

    2013-04-01

    Keeping track of events that happen during a survey (e.g. the position and time when instruments go into the water or come back on board, the depths from which samples are taken, or notes about equipment malfunctions and repairs) is essential for efficient post-processing and quality control of the collected data, especially in the case of suspicious measurements. Most scientists still use the good old paper method for such tasks and later transform the records into digital form using spreadsheet applications. This approach looks "safer" (if a person is not confident in their computer skills) but in reality it turns out to be more error-prone (especially when it comes to position recording and variations of sexagesimal representations, or when there are no hints about which timezone was used for time recording). As cruises usually involve various teams, not all of which perform their own measurements at each station, keeping an eye on the current position is essential, especially if the cruise plan changes (due to bad weather, or to the discovery of underwater features that require more attention than originally planned). Also, the position is usually displayed on only one monitor (as most GPS receivers provide just serial connectivity, and distributing such a signal to multiple clients requires devices that are not widespread on the computer equipment market), which can lead to a messy situation in the control room when everybody tries to write down the current position and time. To overcome all of the mentioned obstacles, the Distributed Digital Survey Logbook was implemented. It is built on the Open Geospatial Consortium (OGC) compliant GeoServer, using a PostGIS database. It can handle geospatial content (charts and cruise plans) and record the vessel track and all kinds of events that any member of the team wants to record. As GeoServer allows the distribution of position data to an unlimited number of clients (from traditional PCs and laptops to tablets and smartphones), it can decrease pressure on the control room no matter if all features are used or just as distant

  13. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has recently imposed itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a large number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and to add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
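As a rough illustration of the provisioning decision such a self-optimizing controller makes, the sketch below sizes a cluster from measured load; the setpoint, limits, and names are invented for illustration and are not the paper's WSDM-based logic:

```python
import math

def target_instances(total_load, capacity_per_server, setpoint=0.7,
                     min_servers=1, max_servers=16):
    """Return the number of server instances (real or virtual) needed so
    that average utilization stays near the setpoint. A simplified
    stand-in for the autonomic provisioning decision described above."""
    needed = math.ceil(total_load / (capacity_per_server * setpoint))
    # Clamp to the cluster's configured bounds.
    return max(min_servers, min(max_servers, needed))
```

A controller would call this periodically and provision or deprovision virtual machines until the running count matches the target.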

  14. IPG Job Manager v2.0 Design Documentation

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2003-01-01

    This viewgraph presentation provides a high-level design of the IPG Job Manager, and satisfies its Master Requirement Specification v2.0 Revision 1.0, 01/29/2003. The presentation includes a Software Architecture/Functional Overview with the following: Job Model; Job Manager Client/Server Architecture; Job Manager Client (Job Manager Client Class Diagram and Job Manager Client Activity Diagram); Job Manager Server (Job Manager Server Class Diagram and Job Manager Server Activity Diagram); Development Environment; Project Plan; Requirement Traceability.

  15. Employment-related information for clients receiving mental health services and clinicians.

    PubMed

    King, Joanne; Cleary, Catherine; Harris, Meredith G; Lloyd, Chris; Waghorn, Geoff

    2011-01-01

    Clients receiving public mental health services and clinicians require information to facilitate client access to suitable employment services. However, little is known about the specific employment-related information needs of these groups. This study aimed to identify employment-related information needs among clients, clinicians and employment specialists, with a view to developing a new vocational information resource. Employment-related information needs were identified via a series of focus group consultations with clients, clinicians, and employment specialists (n=23). Focus group discussions were guided by a common semi-structured interview schedule. Several categories of information need were identified: countering incorrect beliefs about work; benefits of work; disclosure and managing personal information; impact of earnings on welfare entitlements; employment service pathways; job preparation, planning and selection; and managing illness once working. Clear preferences were expressed about effective means of communicating the key messages in written material. This investigation confirmed the need for information tailored to clients and clinicians in order to activate clients' employment journey and to help them make informed decisions about vocational assistance.

  16. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster; And Others

    1992-01-01

    Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…

  17. XENOENDOCRINE DISRUPTERS-TIERED SCREENING AND TESTING: FILLING KEY DATA GAPS

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) is developing a screening and testing program for endocrine disrupting chemicals (EDCs). High priority chemicals would be evaluated in the Tier 1 Screening (T1S) battery. Chemicals positive in T1S would then be tested (Tier 2). T1S...

  18. Self-Regulated Strategy Development as a Tier 2 Writing Intervention

    ERIC Educational Resources Information Center

    Johnson, Evelyn S.; Hancock, Christine; Carter, Deborah R.; Pool, Juli L.

    2013-01-01

    In a response to intervention framework, the implication of limited writing instruction suggests an immediate need for Tier 2 interventions to support struggling writers while at the same time addressing instructional gaps in Tier 1. Many schools struggle with implementing writing intervention, partly because of the limited number of…

  19. Smoking and its treatment in addiction services: clients' and staff behaviour and attitudes.

    PubMed

    Cookson, Camilla; Strang, John; Ratschen, Elena; Sutherland, Gay; Finch, Emily; McNeill, Ann

    2014-07-14

    High smoking prevalence has been observed among those misusing other substances. This study aimed to establish smoking behaviours and attitudes towards nicotine dependence treatment among clients and staff in substance abuse treatment settings. Cross-sectional questionnaire survey of staff and clients in a convenience sample of seven community and residential addiction services in, or with links to, Europe's largest provider of mental health care, the South London and Maudsley NHS Foundation Trust. Survey items assessed smoking behaviour, motivation to quit, and receipt of and attitudes towards nicotine dependence treatment. Response rates of 85% (n = 163) for clients and 97% (n = 145) for staff were achieved. A high smoking prevalence was observed in clients (88%) and staff (45%); of current smokers, nearly all clients were daily smokers, while 42% of staff were occasional smokers. Despite 79% of clients who smoked expressing a desire to quit and 46% being interested in receiving advice, only 15% had been offered support to stop smoking during their current treatment episode, and 56% reported never having been offered support. Staff rated smoking treatment significantly less important than treatment of other substances (p < 0.001), and only 29% of staff thought it should be addressed early in a client's primary addiction treatment, compared with 48% of clients. A large unmet clinical need is evident, with a widespread failure to deliver smoking cessation interventions to an extraordinarily high-prevalence population of smokers in addiction services. This is despite the majority of smokers reporting motivation to quit. Staff smoking and attitudes may be a contributory factor in these findings.

  20. Prospective Environmental Risk Assessment for Sediment-Bound Organic Chemicals: A Proposal for Tiered Effect Assessment.

    PubMed

    Diepens, Noël J; Koelmans, Albert A; Baveco, Hans; van den Brink, Paul J; van den Heuvel-Greve, Martine J; Brock, Theo C M

    A broadly accepted framework for prospective environmental risk assessment (ERA) of sediment-bound organic chemicals is currently lacking. Such a framework requires clear protection goals, evidence-based concepts that link exposure to effects, and a transparent tiered effect assessment. In this paper, we provide a tiered prospective sediment ERA procedure for organic chemicals in sediment, with a focus on the applicable European regulations and the underlying data requirements. Using the ecosystem services concept, we derived specific protection goals for ecosystem service providing units: microorganisms, benthic algae, sediment-rooted macrophytes, benthic invertebrates and benthic vertebrates. Triggers for sediment toxicity testing are discussed. We recommend a tiered approach (Tier 0 through Tier 3). Tier 0 is a cost-effective screening based on chronic water-exposure toxicity data for pelagic species and equilibrium partitioning. Tier 1 is based on spiked-sediment laboratory toxicity tests with standard benthic test species and standardised test methods. If comparable chronic toxicity data for both standard and additional benthic test species are available, the Species Sensitivity Distribution (SSD) approach is a more viable Tier 2 option than the geometric mean approach. This paper includes criteria for accepting results of sediment-spiked single-species toxicity tests in prospective ERA, and for the application of the SSD approach. We propose micro/mesocosm experiments with spiked sediment, to study colonisation success by benthic organisms, as a Tier 3 option. Ecological effect models can be used to supplement the experimental tiers. A strategy for unifying information from the various tiers through experimental work and exposure and effect modelling is provided.

  1. A Responsive Tier 2 Process for a Middle School Student with Behavior Problems

    ERIC Educational Resources Information Center

    McDaniel, Sara C.; Bruhn, Allison L.; Mitchell, Barbara S.

    2017-01-01

    Students requiring Tier 2 behavioral supports frequently display behavioral deficits in multiple domains (e.g., emotional symptoms and peer problems). The Tier 2 framework developed by McDaniel, Bruhn, & Mitchell (2015a) is a responsive structure for identifying and intervening at Tier 2. This process is described with a practical case example…

  2. Towards a theory of tiered testing.

    PubMed

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
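The expected-utility comparison at the heart of such an optimization can be sketched as follows; the cost and loss terms below are illustrative assumptions, not the authors' model:

```python
def expected_utility(p_toxic, sens, spec, cost_test,
                     loss_false_neg, loss_false_pos):
    """Expected utility (negative expected cost) of running one
    screening tier on a chemical with prior toxicity probability
    p_toxic, given the tier's sensitivity and specificity."""
    # Expected loss from toxic chemicals the screen misses.
    e_fn = p_toxic * (1 - sens) * loss_false_neg
    # Expected loss from benign chemicals wrongly flagged.
    e_fp = (1 - p_toxic) * (1 - spec) * loss_false_pos
    return -(cost_test + e_fn + e_fp)

# A cheap, less sensitive tier can outperform an expensive, near-perfect
# test when toxic chemicals are rare (all numbers invented).
cheap = expected_utility(0.05, 0.80, 0.90, cost_test=1,
                         loss_false_neg=100, loss_false_pos=5)
exact = expected_utility(0.05, 0.99, 0.99, cost_test=20,
                         loss_false_neg=100, loss_false_pos=5)
```

Chaining such terms across tiers, with positives from one tier feeding the next, is the kind of system-level optimization the outlined theory addresses.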

  3. Patterns of client behavior with their most recent male escort: an application of latent class analysis.

    PubMed

    Grov, Christian; Starks, Tyrel J; Wolff, Margaret; Smith, Michael D; Koken, Juline A; Parsons, Jeffrey T

    2015-05-01

    Research examining interactions between male escorts and clients has relied heavily on data from escorts, men working on the street, and behavioral data aggregated over time. In the current study, 495 clients of male escorts answered questions about sexual behavior with their last hire. Latent class analysis identified four client sets based on these variables. The largest (n = 200, 40.4 %, labeled Typical Escort Encounter) included men endorsing behavior prior research found typical of paid encounters (e.g., oral sex and kissing). The second largest class (n = 157, 31.7 %, Typical Escort Encounter + Erotic Touching) included men reporting similar behaviors, but with greater variety along a spectrum of touching (e.g., mutual masturbation and body worship). Those classed BD/SM and Kink (n = 76, 15.4 %) reported activity along the kink spectrum (BD/SM and role play). Finally, men classed Erotic Massage Encounters (n = 58, 11.7 %) primarily engaged in erotic touch. Clients reporting condomless anal sex were in the minority (12.2 % overall). Escorts who engage in anal sex with clients might be appropriate to train in HIV prevention and other harm reduction practices-adopting the perspective of "sex workers as sex educators."

  4. Design and implementation of streaming media server cluster based on FFMpeg.

    PubMed

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
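A minimal sketch of the least-loaded selection underlying such feedback-driven balancing (server names and the load metric are invented; the paper's algorithm additionally weighs client location and applies service redirection on top of this):

```python
def least_loaded(servers):
    """Pick the server with the smallest reported load figure.
    Each server actively feeds back its current load to the balancer,
    which is the 'active feedback' part of the scheme."""
    return min(servers, key=servers.get)

# Load reports collected over the balancer's feedback channel.
loads = {"edge-1": 0.82, "edge-2": 0.35, "edge-3": 0.61}
target = least_loaded(loads)
```

A new streaming client would then be redirected to `target` rather than to the single origin server.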

  5. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    PubMed Central

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187

  6. 78 FR 71039 - Publication of the Tier 2 Tax Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ...Publication of the tier 2 tax rates for calendar year 2014 as required by section 3241(d) of the Internal Revenue Code (26 U.S.C. section 3241). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding for benefits under the Railroad Retirement Act.

  7. 7 CFR 1710.114 - TIER, DSC, OTIER and ODSC requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false TIER, DSC, OTIER and ODSC requirements. 1710.114... AND GUARANTEES Loan Purposes and Basic Policies § 1710.114 TIER, DSC, OTIER and ODSC requirements. (a) General. Requirements for coverage ratios are set forth in the borrower's mortgage, loan contract, or...

  8. 77 FR 71481 - Publication of the Tier 2 Tax Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-30

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY... tax rates for calendar year 2013 as required by section 3241(d) of the Internal Revenue Code (26 U.S.C. 3241). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of...

  9. Ethnic matching of clients and clinicians and use of mental health services by ethnic minority clients.

    PubMed

    Ziguras, Stephen; Klimidis, Steven; Lewis, James; Stuart, Geoff

    2003-04-01

    Research in the United States has indicated that matching clients from a minority group with clinicians from the same ethnic background increases use of community mental health services and reduces use of emergency services. This study assessed the effects of matching clients from a non-English-speaking background with bilingual, bicultural clinicians in a mental health system in Australia that emphasizes community-based psychiatric case management. In an overall sample of 2,935 clients served in the western region of Melbourne from 1997 to 1999, ethnic minority clients from a non-English-speaking background who received services from a bilingual, bicultural case manager were compared with ethnic minority clients who did not receive such services and with clients from an English-speaking background. The clients' engagement with three types of services-community care teams, psychiatric crisis teams, and psychiatric inpatient services-was assessed. Compared with ethnic minority clients who were not matched with a bilingual clinician, those who were matched generally had a longer duration and greater frequency of contact with community care teams and a shorter duration and lower frequency of contact with crisis teams. Clients born in Vietnam who were matched with a bilingual clinician had a shorter annual mean length of hospital stay and a lower annual mean frequency of hospital admission than Australian-born clients. The benefits of matching clients with psychiatric case managers on the basis of ethnic background include a lower level of need for crisis intervention and, for clients from some ethnic groups, fewer inpatient interventions. These Australian results support findings of the effectiveness of client-clinician ethnic matching in the United States.

  10. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    NASA Astrophysics Data System (ADS)

    Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston

    2007-06-01

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on nuclide concentrations measured by accelerator mass spectrometry. WebCN for 10Be and 26Al has been completed and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses open-source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both the spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
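The zero-erosion special case behind such calculators can be sketched directly from the radioactive buildup equation; the half-life used is a commonly quoted value for 10Be, and WebCN's full treatment (erosion, spatial and temporal flux corrections) is omitted here:

```python
import math

def exposure_age(N, P, half_life_yr):
    """Zero-erosion surface exposure age (years) from a measured nuclide
    concentration N (atoms/g) and local production rate P (atoms/g/yr):

        N = (P / lam) * (1 - exp(-lam * t))
        =>  t = -ln(1 - N * lam / P) / lam

    This is the textbook special case only; a full calculator like
    WebCN also handles erosion and time-varying production."""
    lam = math.log(2) / half_life_yr  # decay constant (1/yr)
    return -math.log(1.0 - N * lam / P) / lam
```

For young surfaces (N·lam/P « 1) the age reduces to roughly N/P, a useful sanity check on any implementation.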

  11. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    PubMed

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently, computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Elicitation of Robust Tier 2 Neutralizing Antibody Responses in Nonhuman Primates by HIV Envelope Trimer Immunization Using Optimized Approaches.

    PubMed

    Pauthner, Matthias; Havenar-Daughton, Colin; Sok, Devin; Nkolola, Joseph P; Bastidas, Raiza; Boopathy, Archana V; Carnathan, Diane G; Chandrashekar, Abishek; Cirelli, Kimberly M; Cottrell, Christopher A; Eroshkin, Alexey M; Guenaga, Javier; Kaushik, Kirti; Kulp, Daniel W; Liu, Jinyan; McCoy, Laura E; Oom, Aaron L; Ozorowski, Gabriel; Post, Kai W; Sharma, Shailendra K; Steichen, Jon M; de Taeye, Steven W; Tokatlian, Talar; Torrents de la Peña, Alba; Butera, Salvatore T; LaBranche, Celia C; Montefiori, David C; Silvestri, Guido; Wilson, Ian A; Irvine, Darrell J; Sanders, Rogier W; Schief, William R; Ward, Andrew B; Wyatt, Richard T; Barouch, Dan H; Crotty, Shane; Burton, Dennis R

    2017-06-20

    The development of stabilized recombinant HIV envelope trimers that mimic the virion surface molecule has increased enthusiasm for a neutralizing antibody (nAb)-based HIV vaccine. However, there is limited experience with recombinant trimers as immunogens in nonhuman primates, which are typically used as a model for humans. Here, we tested multiple immunogens and immunization strategies head-to-head to determine their impact on the quantity, quality, and kinetics of autologous tier 2 nAb development. A bilateral, adjuvanted, subcutaneous immunization protocol induced reproducible tier 2 nAb responses after only two immunizations 8 weeks apart, and these were further enhanced by a third immunization with BG505 SOSIP trimer. We identified immunogens that minimized non-neutralizing V3 responses and demonstrated that continuous immunogen delivery could enhance nAb responses. nAb responses were strongly associated with germinal center reactions, as assessed by lymph node fine needle aspiration. This study provides a framework for preclinical and clinical vaccine studies targeting nAb elicitation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  13. A socio-technical critique of tiered services: implications for interprofessional care.

    PubMed

    Hood, Rick

    2015-01-01

    In the health and social care sector, tiered services have become an increasingly influential way of organising professional expertise to address the needs of vulnerable people. Drawing on its application to UK child welfare services, this paper discusses the merits of the tiered model from a socio-technical perspective - an approach that has emerged from the fields of accident analysis and systems design. The main elements of a socio-technical critique are outlined and used to explore how tiered services provide support to families and prevent harm to children. Attention is drawn to the distribution of expertise and resources in a tiered system, and to the role of referral and gate-keeping procedures in dispersing accountability for outcomes. An argument is made for designing systems "against demand", and the paper concludes by discussing some alternative models of multi-agency provision.

  14. Dscam1 web server: online prediction of Dscam1 self- and hetero-affinity.

    PubMed

    Marini, Simone; Nazzicari, Nelson; Biscarini, Filippo; Wang, Guang-Zhong

    2017-06-15

    Formation of homodimers by identical Dscam1 protein isomers on the cell surface is the key factor for the self-avoidance of growing neurites. Dscam1's immense diversity has a critical role in the formation of the arthropod neuronal circuit, showing unique evolutionary properties when compared to other cell surface proteins. Experimental measures are available for 89 self-binding and 1722 hetero-binding protein samples, out of more than 19 thousand (self-binding) and 350 million (hetero-binding) possible isomer combinations. We developed the Dscam1 Web Server to quickly predict Dscam1 self- and hetero-binding affinity for batches of Dscam1 isomers. The server can help the study of Dscam1 affinity and help researchers navigate through the tens of millions of possible isomer combinations to isolate the strong-binding ones. Dscam1 Web Server is freely available at: http://bioinformatics.tecnoparco.org/Dscam1-webserver . Web server code is available at https://gitlab.com/ne1s0n/Dscam1-binding . simone.marini@unipv.it or guangzhong.wang@picb.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  15. 47 CFR 76.921 - Buy-through of other tiers prohibited.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Buy-through of other tiers prohibited. 76.921... video programming offered on a per channel or per program charge basis. A cable operator may, however, require the subscription to one or more tiers of cable programming services as a condition of access to...

  16. Using Brief Experimental Analysis to Intensify Tier 3 Reading Interventions

    ERIC Educational Resources Information Center

    Coolong-Chaffin, Melissa; Wagner, Dana

    2015-01-01

    As implementation of multi-tiered systems of support becomes common practice across the nation, practitioners continue to need strategies for intensifying interventions and supports for the subset of students who fail to make adequate progress despite strong programs at Tiers 1 and 2. Experts recommend making several changes to the structure and…

  17. Security mechanism based on Hospital Authentication Server for secure application of implantable medical devices.

    PubMed

    Park, Chang-Seop

    2014-01-01

    After two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for the security of IMDs is proposed, based on a 3-Tier security model, where the programmer interacts with a Hospital Authentication Server to get permission to access IMDs. The proposed security architecture greatly simplifies key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works in terms of security and performance.
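One way to realize server-mediated access control of this general shape is a MAC-based ticket that the authentication server issues and the device verifies offline; the key layout, field names, and ticket format below are assumptions for illustration, not the paper's actual protocol:

```python
import hmac
import hashlib

# Long-term key assumed to be shared between the IMD and the Hospital
# Authentication Server (HAS). Illustrative placeholder only.
IMD_KEY = b"imd-long-term-key"

def issue_ticket(programmer_id, permissions, expires):
    """HAS side: bind a programmer's identity, permissions, and expiry
    into a MAC the IMD can verify without contacting the server."""
    msg = f"{programmer_id}|{permissions}|{expires}".encode()
    tag = hmac.new(IMD_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

def imd_verify(msg, tag):
    """IMD side: accept the programmer only if the MAC checks out."""
    expected = hmac.new(IMD_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The IMD never talks to the server directly; it only needs the shared key and a clock to check the expiry field, which keeps the on-device logic minimal.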

  18. Security Mechanism Based on Hospital Authentication Server for Secure Application of Implantable Medical Devices

    PubMed Central

    2014-01-01

    After two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for the security of IMDs is proposed, based on a 3-Tier security model, where the programmer interacts with a Hospital Authentication Server to get permission to access IMDs. The proposed security architecture greatly simplifies key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works in terms of security and performance. PMID:25276797

  19. LiveBench-1: continuous benchmarking of protein structure prediction servers.

    PubMed

    Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L

    2001-02-01

    We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.
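The "ideally combined consensus" described above can be expressed directly as a union over per-server results; the server and target names in the example are illustrative, not LiveBench data:

```python
def solved_by_consensus(per_server):
    """per_server maps a server name to the set of targets it modelled
    correctly. The 'ideally combined consensus' counts a target as
    solved when at least one server solved it."""
    solved = set()
    for correct in per_server.values():
        solved |= correct
    return solved

# Illustrative results: each server contributes some unique successes.
servers = {"FFAS": {"t1", "t3"}, "3D-PSSM": {"t2", "t3"}, "INBGU": {"t3"}}
```

Here no single server solves more than two targets, but the consensus covers all three, which is the effect behind the reported 50% increase in correct assignments.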

  20. Interrater Agreement on the Visual Analysis of Individual Tiers and Functional Relations in Multiple Baseline Designs.

    PubMed

    Wolfe, Katie; Seaman, Michael A; Drasgow, Erik

    2016-11-01

    Previous research on visual analysis has reported low levels of interrater agreement. However, many of these studies have methodological limitations (e.g., use of AB designs, undefined judgment task) that may have negatively influenced agreement. Our primary purpose was to evaluate whether agreement would be higher than previously reported if we addressed these weaknesses. Our secondary purposes were to investigate agreement at the tier level (i.e., the AB comparison) and at the functional relation level in multiple baseline designs and to examine the relationship between raters' decisions at each of these levels. We asked experts (N = 52) to make judgments about changes in the dependent variable in individual tiers and about the presence of an overall functional relation in 31 multiple baseline graphs. Our results indicate that interrater agreement was just at or just below minimally adequate levels for both types of decisions and that agreement at the individual tier level often resulted in agreement about the overall functional relation. We report additional findings and discuss implications for practice and future research. © The Author(s) 2016.

  1. ProteMiner-SSM: a web server for efficient analysis of similar protein tertiary substructures.

    PubMed

    Chang, Darby Tien-Hau; Chen, Chien-Yu; Chung, Wen-Chin; Oyang, Yen-Jen; Juan, Hsueh-Fen; Huang, Hsuan-Cheng

    2004-07-01

    Analysis of protein-ligand interactions is a fundamental issue in drug design. Because a detailed and accurate analysis of protein-ligand interactions involves calculating binding free energy based on thermodynamics and even quantum mechanics, which is highly expensive in computing time, conformational and structural analysis of proteins and ligands has been widely employed as a screening process in computer-aided drug design. In this paper, a web server called ProteMiner-SSM, designed for efficient analysis of similar protein tertiary substructures, is presented. In one experiment reported in this paper, the web server was used to obtain clues about a biochemical hypothesis. The main distinction in the software design of the web server is the filtering process incorporated to expedite the analysis. The filtering process extracts the residues located in the caves of the protein tertiary structure for analysis and operates with O(n log n) time complexity, where n is the number of residues in the protein. In comparison, the alpha-hull algorithm, which is widely used in computer graphics for identifying the instances on the contour of a three-dimensional object, features O(n²) time complexity. Experimental results show that the filtering process presented in this paper is able to speed up the analysis by a factor ranging from 3.15 to 9.37. The ProteMiner-SSM web server can be found at http://proteminer.csie.ntu.edu.tw/. There is a mirror site at http://p4.sbl.bc.sinica.edu.tw/proteminer/.
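    The abstract does not spell out the cave-extraction algorithm itself, so the following is only a toy stand-in for the general idea of an O(n log n) filter: score each residue with a cheap geometric criterion and keep the most exposed ones with a single sort, instead of an O(n²) pairwise surface computation. The function name and the centroid-distance criterion are illustrative assumptions, not the paper's method.

```python
import math

def filter_outer_residues(coords, keep_fraction=0.3):
    """Toy stand-in for an O(n log n) filtering step: keep only the
    residues farthest from the centroid, using one sort rather than
    an O(n^2) pairwise-surface computation."""
    n = len(coords)
    cx = sum(x for x, _, _ in coords) / n
    cy = sum(y for _, y, _ in coords) / n
    cz = sum(z for _, _, z in coords) / n
    # Rank residues by distance from the centroid (farther = more exposed).
    ranked = sorted(
        range(n),
        key=lambda i: -math.dist(coords[i], (cx, cy, cz)),
    )
    keep = max(1, int(n * keep_fraction))
    return sorted(ranked[:keep])  # indices of the retained residues

# Eight corners of a cube plus its center: the interior center point
# should be filtered out.
cube = [(x, y, z) for x in (0, 2) for y in (0, 2) for z in (0, 2)]
residues = cube + [(1.0, 1.0, 1.0)]
print(filter_outer_residues(residues, keep_fraction=0.9))
# → [0, 1, 2, 3, 4, 5, 6, 7]
```

    A real cave/pocket detector needs far more geometry than this, but the sorting pattern is what keeps the screening step cheap relative to the alignment that follows.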

  2. Relationships between public health nurse-delivered physical activity interventions and client physical activity behavior.

    PubMed

    Olsen, Jeanette M; Horning, Melissa L; Thorson, Diane; Monsen, Karen A

    2018-04-01

    The purpose of this study was to identify physical activity interventions delivered by public health nurses (PHNs) and examine their association with physical activity behavior change among adult clients. Physical activity is a public health priority, yet little is known about nurse-delivered physical activity interventions in day-to-day practice or their outcomes. This quantitative retrospective evaluation examined de-identified electronic-health-record data. Adult clients with at least two Omaha System Physical activity Knowledge, Behavior, and Status (KBS) ratings documented by PHNs between October 2010 and June 2016 (N=419) were included. Omaha System baseline and follow-up Physical activity KBS ratings, interventions, and demographics were examined. Younger clients typically receiving maternal-child/family services were more likely to receive interventions than older clients (p<0.001). A total of 2869 Physical activity interventions were documented among 197 clients. Most were from the categories of Teaching, Guidance, Counseling (n=1639) or Surveillance (n=1183); few were Case Management (n=46). Hierarchical regression modeling explained 15.4% of the variance in change in Physical activity Behavior rating, with significant influence from intervention dose (p=0.03) and change in Physical activity Knowledge (p<0.001). This study identified and described physical activity interventions delivered by PHNs. Implementation of a department-wide policy requiring documentation of Physical activity assessment for all clients enabled the evaluation. A higher dose of physical activity interventions and increased Physical activity Knowledge were associated with increased Physical activity Behavior. More research is needed to identify factors influencing who receives interventions and how interventions are selected. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Using the Domain Name System to Thwart Automated Client-Based Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Curtis R; Shue, Craig A

    2011-09-01

    On the Internet, attackers can compromise systems owned by other people and then use these systems to launch attacks automatically. When attacks such as phishing or SQL injection are successful, they can have negative consequences, including server downtime and the loss of sensitive information. Current methods to prevent such attacks are limited in that they are application-specific or fail to block attackers. Phishing attempts can be stopped with email filters, but if the attacker manages to bypass these filters, the user must determine whether the email is legitimate, and users often are unable to do so. Since attackers have a low success rate, they compensate in volume. To achieve this high throughput, attackers take shortcuts and break protocols. We use this knowledge to detect malicious activity and block attacks: if a client fails to follow proper procedure, it can be classified as an attacker, and once discovered, it is isolated and monitored. This can be accomplished using existing software in Ubuntu Linux, along with our custom wrapper application. After running the system against three popular Web browsers (Chromium, Firefox, and Internet Explorer) and two popular email clients (Thunderbird and Evolution), we found that the system is not only feasible but also effective, with low overhead.
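    The report's actual implementation combines existing Ubuntu software with a custom wrapper; the sketch below only illustrates the underlying heuristic in a simplified form: a well-behaved client resolves a hostname through the local DNS resolver before connecting, so a connection to an address the resolver never handed out breaks protocol and is suspect. The class name, the fixed TTL, and the IP addresses are hypothetical.

```python
import time

class DNSGatekeeper:
    """Toy illustration of the idea in the report: clients that skip
    the DNS step (connecting straight to an IP address) break normal
    protocol and are treated as likely automated attackers."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._handed_out = {}  # ip -> expiry time of the DNS answer

    def record_dns_answer(self, ip, now=None):
        now = time.monotonic() if now is None else now
        self._handed_out[ip] = now + self.ttl

    def classify_connection(self, ip, now=None):
        now = time.monotonic() if now is None else now
        expiry = self._handed_out.get(ip)
        if expiry is not None and now <= expiry:
            return "legitimate"
        return "suspect"  # no fresh DNS lookup preceded this connection

gate = DNSGatekeeper(ttl=30.0)
gate.record_dns_answer("93.184.216.34", now=0.0)
print(gate.classify_connection("93.184.216.34", now=5.0))   # fresh lookup
print(gate.classify_connection("203.0.113.9", now=5.0))     # never resolved
```

    A production version would hook the resolver and the firewall rather than keep an in-memory table, and would route suspects to monitoring instead of simply labeling them.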

  4. Get the Word Out with List Servers

    ERIC Educational Resources Information Center

    Goldberg, Laurence

    2006-01-01

    In this article, the author details the use of electronic mail list servers in his school. In his school district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…

  5. Database architectures for Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
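    The abstract describes the three-process design but not the STDB/NET calls themselves, so the following is a minimal, hypothetical sketch of the middle tier's role: accept a generic query request from the application client, translate it into the vendor DBMS's dialect, and pass results back. Class and method names are invented for illustration, with sqlite3 standing in for the vendor DBMS server.

```python
import sqlite3

class STDBNetBroker:
    """Hypothetical sketch of an STDB/NET-style middle tier: the
    application client issues generic requests and never sees the
    vendor-specific SQL generated here."""

    def __init__(self, conn):
        self.conn = conn  # connection to the vendor DBMS server

    def generic_query(self, table, columns, where=None, params=()):
        # Translate the generic request into the vendor's SQL dialect.
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        if where:
            sql += f" WHERE {where}"
        return self.conn.execute(sql, params).fetchall()

# The application client works only with generic requests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exposures (target TEXT, seconds REAL)")
conn.executemany("INSERT INTO exposures VALUES (?, ?)",
                 [("M31", 1200.0), ("M51", 900.0)])
broker = STDBNetBroker(conn)
print(broker.generic_query("exposures", ["target"], "seconds > ?", (1000.0,)))
```

    Keeping the translation in a separate process, as the architecture described above does, also gives a natural place for deadlock restart and performance monitoring, since every query passes through the broker.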

  6. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.
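    As a rough in-memory model of the scheme (not the ISIS implementation), the sketch below shows why a primary switch can be masked from the client: every replica applies the same FIFO message stream, so after a migration the new primary already holds the full state. All names are illustrative.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []  # state machine: ordered message stream

class MobileChannel:
    """Toy model of the paper's combination of schemes: the mobile
    client sees one reliable server (the primary), while every replica
    applies the same multicast FIFO stream, so switching primaries
    loses nothing."""

    def __init__(self, replicas):
        self.replicas = replicas
        self.primary = replicas[0]

    def send(self, msg):
        # The client's message is multicast to the whole replica group.
        for r in self.replicas:
            r.log.append(msg)

    def migrate(self, new_primary_name):
        # Intentional primary switch, e.g. when the user moves cells.
        self.primary = next(r for r in self.replicas
                            if r.name == new_primary_name)

group = [Replica("cell-A"), Replica("cell-B")]
chan = MobileChannel(group)
chan.send("m1")
chan.migrate("cell-B")   # hand-off: masked from the client
chan.send("m2")
print(chan.primary.name, group[0].log, group[1].log)
```

    The real system adds the pieces this toy omits: reliable multicast ordering, failure detection, and the sliding-window replication mentioned in the abstract.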

  7. Server-Controlled Identity-Based Authenticated Key Exchange

    NASA Astrophysics Data System (ADS)

    Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun

    We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses the semantic security and demonstrates several highly-desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.

  8. The new ALICE DQM client: a web access to ROOT-based objects

    NASA Astrophysics Data System (ADS)

    von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.

    2015-12-01

    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and overcome problems. Immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user-friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We also describe the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which have been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments.

  9. Client predictors of employment outcomes in high-fidelity supported employment: a regression analysis.

    PubMed

    Campbell, Kikuko; Bond, Gary R; Drake, Robert E; McHugo, Gregory J; Xie, Haiyi

    2010-08-01

    Research on vocational rehabilitation for clients with severe mental illness over the past 2 decades has yielded inconsistent findings regarding client factors statistically related to employment. The present study aimed to elucidate the relationship between baseline client characteristics and competitive employment outcomes (job acquisition and total weeks worked during an 18-month follow-up) in Individual Placement and Support (IPS). Data from 4 recent randomized controlled trials of IPS were aggregated for within-group regression analyses. In the IPS sample (N = 307), work history was the only significant predictor for job acquisition, but receiving Supplemental Security Income (with or without Social Security Disability Insurance) was associated with fewer total weeks worked (2.0%-2.8% of the variance). In the comparison sample (N = 374), clients with a diagnosis of mood disorder or with less severe thought disorder symptoms were more likely to obtain competitive employment. The findings confirm that clients with severe mental illness interested in competitive work best benefit from high-fidelity supported employment regardless of their work history and sociodemographic and clinical background, and highlight the need for changes in federal policies for disability income support and insurance regulations.

  10. Comparison of tiered formularies and reference pricing policies: a systematic review

    PubMed Central

    Morgan, Steve; Hanley, Gillian; Greyson, Devon

    2009-01-01

    Objectives To synthesize methodologically comparable evidence from the published literature regarding the outcomes of tiered formularies and therapeutic reference pricing of prescription drugs. Methods We searched the following electronic databases: ABI/Inform, CINAHL, Clinical Evidence, Digital Dissertations & Theses, Evidence-Based Medicine Reviews (which incorporates ACP Journal Club, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effectiveness, Health Technology Assessments and NHS Economic Evaluation Database), EconLit, EMBASE, International Pharmaceutical Abstracts, MEDLINE, PAIS International and PAIS Archive, and the Web of Science. We also searched the reference lists of relevant articles and several grey literature sources. We sought English-language studies published from 1986 to 2007 that examined the effects of either therapeutic reference pricing or tiered formularies, reported on outcomes relevant to patient care and cost-effectiveness, and employed quantitative study designs that included concurrent or historical comparison groups. We abstracted and assessed potentially appropriate articles using a modified version of the data abstraction form developed by the Cochrane Effective Practice and Organisation of Care Group. Results From an initial list of 2964 citations, 12 citations (representing 11 studies) were deemed eligible for inclusion in our review: 3 studies (reported in 4 articles) of reference pricing and 8 studies of tiered formularies. The introduction of reference pricing was associated with reduced plan spending, switching to preferred medicines, reduced overall drug utilization and short-term increases in the use of physician services. Reference pricing was not associated with adverse health impacts. The introduction of tiered formularies was associated with reduced plan expenditures, greater patient costs and increased rates of

  11. ORBIT: an integrated environment for user-customized bioinformatics tools.

    PubMed

    Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M

    1999-10-01

    There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an HTML form page) cannot be customized from the client side, as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set advanced program parameters as defaults on the form, or even customize the interface to his or her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) in which individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independently of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min), and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.

  12. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    Abstract An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  13. Experimental parametric study of servers cooling management in data centers buildings

    NASA Astrophysics Data System (ADS)

    Nada, S. A.; Elfeky, K. E.; Attia, Ali M. A.; Alshaer, W. G.

    2017-06-01

    A parametric study of air flow and cooling management for data center servers is experimentally conducted under different design conditions. A physical scale model of a data center accommodating one rack of four servers was designed and constructed for testing purposes. Front and rear rack and server temperature distributions and the supply/return heat indices (SHI/RHI) are used to evaluate data center thermal performance. Experiments were conducted to parametrically study the effects of perforated-tile opening ratio, server power load variation, and rack power density. The results showed that (1) a perforated tile with a 25% opening ratio provides the best results among the opening ratios tested, (2) the optimum benefit of cold air for server cooling is obtained with uniform power loading of the servers, and (3) increasing power density decreases air recirculation but increases air bypass and server temperatures. The present results are compared with previous experimental and CFD results, and fair agreement was found.
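    The abstract does not restate the SHI/RHI definitions; the temperature-based forms commonly used in the data-center literature are sketched below, on the assumption that the paper follows the same convention. SHI measures the fraction of rack-inlet heating caused by hot-air recirculation rather than by the cooling supply, and RHI is its complement.

```python
def supply_heat_index(t_supply, t_rack_in, t_rack_out):
    """SHI as commonly defined in the data-center literature:
    SHI = (T_rack_in - T_supply) / (T_rack_out - T_supply).
    0 means no recirculated heat reaches the rack inlet; values
    approaching 1 mean the inlet air is mostly recirculated exhaust."""
    return (t_rack_in - t_supply) / (t_rack_out - t_supply)

def return_heat_index(t_supply, t_rack_in, t_rack_out):
    # RHI = 1 - SHI: the fraction of rack heat returned to the CRAC.
    return 1.0 - supply_heat_index(t_supply, t_rack_in, t_rack_out)

# Supply air at 15 C, rack inlet at 20 C (5 C of recirculated heating),
# rack outlet at 35 C.
shi = supply_heat_index(15.0, 20.0, 35.0)
rhi = return_heat_index(15.0, 20.0, 35.0)
print(round(shi, 2), round(rhi, 2))
# → 0.25 0.75
```

    Lower SHI (higher RHI) indicates better separation of cold and hot air streams, which is the behavior the tile-opening and power-loading experiments above are probing.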

  14. Chair rise capacity and associated factors in older home-care clients.

    PubMed

    Tiihonen, Miia; Hartikainen, Sirpa; Nykänen, Irma

    2017-07-01

    The aim of this study was to investigate the ability of older home-care clients to perform the five times chair rise test and associated personal characteristics, nutritional status and functioning. The study sample included 267 home-care clients aged ≥75 years living in Eastern and Central Finland. The home-care clients were interviewed at home by home-care nurses, nutritionists and pharmacists. The collected data contained sociodemographic factors, functional ability (Barthel Index, IADL), cognitive functioning (MMSE), nutritional status (MNA), depressive symptoms (GDS-15), medical diagnoses and drug use. The primary outcome was the ability to perform the five times chair rise test. Fifty-one per cent (n = 135) of the home-care clients were unable to complete the five times chair rise test. Twenty-three per cent (n = 64) of the home-care clients had good chair rise capacity (≤17 seconds). In a multivariate logistic regression analysis, fewer years of education (odds ratio [OR] = 1.11, 95% confidence interval [CI] 1.04-1.18), lower ADL (OR = 1.54, 95% CI 1.34-1.78), low MNA scores (OR = 1.12, 95% CI 1.04-1.20) and a higher number of co-morbidities (OR = 1.21, 95% CI 1.02-1.43) were associated with inability to complete the five times chair rise test. Poor functional mobility, which was associated with less education, a high number of co-morbidities and poor nutritional status, was common among older home-care clients. To maintain functional mobility and prevent further decline, physical training and nutritional services are needed. (NutOrMed, ClinicalTrials.gov Identifier: NCT02214758).

  15. Predictors for Unplanned Hospitalization of New Home Care Clients.

    PubMed

    Rönneikkö, Jukka K; Mäkelä, Matti; Jämsen, Esa R; Huhtala, Heini; Finne-Soveri, Harriet; Noro, Anja; Valvanne, Jaakko N

    2017-02-01

    To identify factors predicting unplanned hospitalization of new home care clients using the Resident Assessment Instrument for Home Care (RAI-HC). A register-based study based on RAI-HC assessments and nationwide hospital discharge records. Municipal home care services in Finland. New Finnish home care clients aged 63 and older (N = 15,700). Information from home care clients' first RAI-HC assessment was connected to information regarding their first hospitalization over 1 year of follow-up. Multivariate regression analyses were used to evaluate the independent risk factors for hospitalization. Forty-three percent (n = 6,812) of participants were hospitalized at least once. The strongest independent risk factors were hospitalization during the year preceding the RAI-HC assessment (odds ratio (OR) = 2.01, 95% confidence interval (CI) = 1.87-2.16), aged 90 and older (OR = 1.69, 95% CI = 1.48-1.92), renal insufficiency (OR = 1.44, 95% CI = 1.22-1.69) and using 10 or more drugs (OR = 1.41, 95% CI = 1.26-1.58). Other independent risk factors were male sex, previous emergency department visits or other acute outpatient care use, daily urinary incontinence, fecal incontinence, history of falls, cognitive impairment, chronic skin ulcer, pain, unstable health status, housing-related problems, and poor self-rated health. Parkinson's disease, coronary artery disease, congestive heart failure, chronic obstructive pulmonary disease, and cancer were independent prognostic indicators. A body mass index of 24 kg/m 2 or greater and the client's own belief that functional capacity could improve had a protective role. Assessing new home care clients using the RAI-HC reveals modifiable risk factors for unplanned hospitalization. Systematic assessment by a multidisciplinary team at the beginning of the service and targeting modifiable risk factors could reduce the risk of unplanned hospitalization. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics

  16. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    NASA Astrophysics Data System (ADS)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated from storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
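    FlowScaling itself reaches O(m log m) through a max-flow formulation and a scaling argument; the sketch below only illustrates the surrounding search pattern: a monotone feasibility predicate ("can the file be transferred within time T?") combined with bisection on T. The feasibility test here (an even split of the time budget between the upload and download phases) is a deliberate simplification and an assumption, not the paper's model.

```python
def feasible(T, file_size, up, down):
    """Simplified feasibility test: splitting the time budget T evenly
    between the upload phase and the download phase, can fragments
    summing to file_size fit within each server's bandwidth limits?"""
    cap = sum(min(u * T / 2, d * T / 2) for u, d in zip(up, down))
    return cap >= file_size

def min_transfer_time(file_size, up, down, eps=1e-6):
    # Bisection over the monotone predicate, standing in for the
    # paper's faster scaling argument over the max-flow formulation.
    lo, hi = 0.0, 1.0
    while not feasible(hi, file_size, up, down):
        hi *= 2.0  # exponential search for an upper bound
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if feasible(mid, file_size, up, down):
            hi = mid
        else:
            lo = mid
    return hi

# Three servers with asymmetric upload/download bandwidths (MB/s),
# distributing a 100 MB file.
t = min_transfer_time(100.0, [10.0, 5.0, 2.0], [8.0, 6.0, 4.0])
print(round(t, 3))
# → 13.333
```

    Replacing the bisection with the scaling argument over the flow network is exactly what removes the dependence on the numeric precision eps and yields the stated O(m log m) bound.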

  17. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally, but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts

  18. Building Tier 3 Intervention for Long-Term Slow Growers in Grades 3-4: A Pilot Study

    ERIC Educational Resources Information Center

    Sanchez, Victoria; O'Connor, Rollanda E.

    2015-01-01

    Tier 3 interventions are necessary for students who fail to respond adequately to Tier 1 general education instruction and Tier 2 supplemental reading intervention instruction. We identified 8 students in 3rd and 4th grade who had demonstrated a slow response to Tier 2 reading interventions for three years. Students participated in a…

  19. Tiered Systems of Support: Practical Considerations for School Districts. Issue Focus

    ERIC Educational Resources Information Center

    MDRC, 2017

    2017-01-01

    Students learn or progress at their own paces. How can schools make sure that they get the help they need--and only the help they need? Many are turning to multi-tiered systems of support. This brief provides some practical considerations for schools contemplating tiered approaches.

  20. ASPEN--A Web-Based Application for Managing Student Server Accounts

    ERIC Educational Resources Information Center

    Sandvig, J. Christopher

    2004-01-01

    The growth of the Internet has greatly increased the demand for server-side programming courses at colleges and universities. Students enrolled in such courses must be provided with server-based accounts that support the technologies that they are learning. The process of creating, managing and removing large numbers of student server accounts is…