Sample records for reliable server pooling

  1. Geographic information systems - transportation ISTEA management systems server net prototype pooled fund study: phase B - summary

    DOT National Transportation Integrated Search

    1997-06-01

    The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems ...

  2. Prophinder: a computational tool for prophage prediction in prokaryotic genomes.

    PubMed

    Lima-Mendez, Gipsi; Van Helden, Jacques; Toussaint, Ariane; Leplae, Raphaël

    2008-03-15

    Prophinder is a prophage prediction tool coupled with a prediction database, a web server and web service. Predicted prophages will help to fill the gaps in the current sparse phage sequence space, which should cover an estimated 100 million species. Systematic and reliable predictions will enable further studies of the contribution of prophages to the bacteriophage gene pool and a better understanding of gene shuffling between prophages and phages infecting the same host. Software is available at http://aclame.ulb.ac.be/prophinder

  3. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive pricing that minimizes the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately, and the results are combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
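
    The abstract does not give the combination rule, but if the three layers are assumed to fail independently and in series, the layer reliabilities simply multiply. A minimal sketch under that assumption (all names and numbers illustrative):

    ```python
    # Illustrative sketch (not the paper's algorithm): combining per-layer
    # reliabilities of a multi-cloud application, assuming the layers fail
    # independently and are arranged in series, so reliabilities multiply.

    def system_reliability(app: float, virtualization: float, server: float) -> float:
        """Reliability of the layered stack under the independence assumption."""
        for r in (app, virtualization, server):
            if not 0.0 <= r <= 1.0:
                raise ValueError("reliability must lie in [0, 1]")
        return app * virtualization * server

    # Example: a highly reliable application layer cannot compensate for a
    # weak server layer; the product is dominated by the weakest layer.
    print(system_reliability(0.999, 0.995, 0.97))  # ~0.964
    ```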

  4. Improvements in multimedia data buffering using master/slave architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheikh, S.; Ganesan, R.

    1996-12-31

    Advances in networking technology and multimedia technology have necessitated multimedia servers that are robust and reliable. Existing solutions have direct limitations such as I/O bottlenecks and the reliability of data retrieval. The system can store the stream of incoming data if enough buffer space is available or if the mass storage clears the buffer faster than the queue input arrives. A single buffer queue is not sufficient to handle large frames. Queue sizes are normally several megabytes in length and will in turn introduce a state of overflow. The system should also keep track of rewind, fast-forward, and pause requests, otherwise queue management will become intricate. In this paper, we present a master/slave approach (a server is designated to monitor the workflow of the complete system; this server holds information on every slave by maintaining a dynamic table, and it controls the workload on each of the systems by redistributing requests to others or handling requests itself) which will overcome the limitations of today's storage and also satisfy tomorrow's storage needs. This approach will maintain system reliability and yield faster response by using more storage units in parallel. A network of master/slave servers can handle many requests and synchronize them at all times. Using a dedicated CPU and a common pool of queues, we explain how queues can be controlled and buffer overflow can be avoided. We propose a layered approach to the buffering problem and provide a read-ahead solution to ensure continuous storage and retrieval of multimedia data.
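
    A minimal sketch of the master/slave dispatching idea described above, with assumed data structures: the master keeps a dynamic table of slave load and either forwards each request to the least-loaded slave or, when every slave is saturated, handles it itself. The capacity threshold and names are illustrative, not from the paper:

    ```python
    # Sketch of a master that maintains a dynamic table of slave load and
    # redistributes requests, serving them itself only when all slaves are full.

    from dataclasses import dataclass, field

    @dataclass
    class Master:
        capacity: int = 100                                   # requests a slave can hold
        slaves: dict[str, int] = field(default_factory=dict)  # dynamic table: name -> load
        own_load: int = 0

        def register(self, name: str) -> None:
            self.slaves[name] = 0

        def dispatch(self, request_id: str) -> str:
            # Pick the least-loaded slave that still has spare capacity.
            candidates = [(load, name) for name, load in self.slaves.items()
                          if load < self.capacity]
            if candidates:
                _, name = min(candidates)
                self.slaves[name] += 1
                return f"{request_id} -> {name}"
            self.own_load += 1                                # master serves it directly
            return f"{request_id} -> master"

    m = Master()
    m.register("slave-a"); m.register("slave-b")
    print(m.dispatch("req-1"))  # req-1 -> slave-a
    ```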

  5. Volume serving and media management in a networked, distributed client/server environment

    NASA Technical Reports Server (NTRS)

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  6. Model of load balancing using reliable algorithm with multi-agent system

    NASA Astrophysics Data System (ADS)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development is linear with the growth of internet users, which increases network traffic activity and the load on the system. Using a reliable algorithm and mobile agents for distributed load balancing is a viable solution to handle the load issue on a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consists of defining the identification system, the specification requirements, the network topology and the design of the system infrastructure. The simulation sent 1800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the performed simulation show that the LFB method with mobile agents can perform load balancing efficiently across all backend servers without bottlenecks, with low risk of server overload, and reliably.
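
    A hedged sketch of a least-time-first-byte balancer in the spirit of this abstract: route each request to the backend whose recent time-to-first-byte (TTFB), as reported by the agents, is lowest. The smoothing factor and reported values are assumptions for illustration:

    ```python
    # Toy LFB balancer: agents report observed TTFB per backend; requests go
    # to the backend with the lowest smoothed TTFB ("least first byte" wins).

    class LFBBalancer:
        def __init__(self, backends, alpha=0.3):
            self.ttfb = {b: 0.0 for b in backends}  # smoothed TTFB per backend
            self.alpha = alpha

        def report(self, backend: str, observed_ttfb: float) -> None:
            # Exponential smoothing of agent measurements.
            old = self.ttfb[backend]
            self.ttfb[backend] = (1 - self.alpha) * old + self.alpha * observed_ttfb

        def choose(self) -> str:
            return min(self.ttfb, key=self.ttfb.get)

    lb = LFBBalancer(["s1", "s2", "s3"])
    lb.report("s1", 0.04); lb.report("s2", 0.12); lb.report("s3", 0.07)
    print(lb.choose())  # s1
    ```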

  7. POOL server: machine learning application for functional site prediction in proteins.

    PubMed

    Somarowthu, Srinivas; Ondrechen, Mary Jo

    2012-08-01

    We present an automated web server for partial order optimum likelihood (POOL), a machine learning application that combines computed electrostatic and geometric information for high-performance prediction of catalytic residues from 3D structures. Input features consist of THEMATICS electrostatics data and pocket information from ConCavity. THEMATICS measures deviation from typical, sigmoidal titration behavior to identify functionally important residues, and ConCavity identifies binding pockets by analyzing the surface geometry of protein structures. Neither THEMATICS nor ConCavity (structure only) requires the query protein to have any sequence or structure similarity to other proteins. Hence, POOL is applicable to proteins with novel folds and engineered proteins. As an additional option for cases where sequence homologues are available, users can include evolutionary information from INTREPID for enhanced accuracy in site prediction. The web site is free and open to all users with no login requirement at http://www.pool.neu.edu. Contact: m.ondrechen@neu.edu. Supplementary data are available at Bioinformatics online.

  8. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS/DODS) server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of the data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has an additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their client systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS have the option to use other available graphics viewers such as IDL, Matlab or GrADS. WMS is based on OPeNDAP for serving geospatial information. WMS supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access point to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an environment physically close to the data source. NADM benefits users with data mining or data reduction algorithms by reducing large volumes of data before transmission over the network to the user.

  9. Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.

    PubMed

    Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg

    2004-01-01

    The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
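
    The majority-voting logic of a TMR system can be sketched in a few lines; this toy version (not the authors' implementation) returns the majority result and flags the disagreeing module for removal from service:

    ```python
    # Triple modular redundancy (TMR) voting sketch: three modules process the
    # same request; a simple majority vote masks one faulty module and
    # identifies it so it can be removed from service.

    def tmr_vote(results: list) -> tuple:
        """Return (majority_value, faulty_module_indices)."""
        assert len(results) == 3, "TMR requires exactly three modules"
        for value in results:
            if results.count(value) >= 2:          # a majority exists
                faulty = [i for i, r in enumerate(results) if r != value]
                return value, faulty
        raise RuntimeError("no majority: more than one module failed")

    value, faulty = tmr_vote(["stored", "stored", "io-error"])
    print(value, faulty)  # stored [2] -> module 2 is removed from service
    ```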

  10. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report

    DTIC Science & Technology

    2007-02-05

    * Created new SQL Server database for "PC Configuration" web application. Added roles for security, closed 4235, and posted application to production. * Wrote...and ran SQL Server scripts to migrate production databases to new server. * Created backup jobs for new SQL Server databases. * Continued...second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about

  11. DPM — efficient storage in diverse environments

    NASA Astrophysics Data System (ADS)

    Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio

    2014-06-01

    Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), largely reducing the head node's hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, and also supports the ARM architecture besides i386 and x86_64, allowing it to run on small low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the ability to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with only 700 MHz single-core CPUs and 100 Mbps network connections, over conventional multi-core servers to typical virtual machine instances in cloud settings. We evaluate the combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.

  12. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.

  13. Evaluation of 3D-Jury on CASP7 models.

    PubMed

    Kaján, László; Rychlewski, Leszek

    2007-08-21

    3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
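
    The consensus idea can be illustrated with a toy scorer: each model is rated by its average structural similarity to all other gathered models, and the best-rated model is the jury pick. The similarity function below is a stand-in; 3D-Jury itself uses a MaxSub-style superposition measure:

    ```python
    # Toy 3D-Jury-style consensus: score each model by its mean similarity to
    # the other collected models and select the highest-scoring one.

    def jury_scores(models, similarity):
        """similarity(a, b) -> float; higher means more alike."""
        scores = {}
        for m in models:
            others = [n for n in models if n != m]
            scores[m] = sum(similarity(m, n) for n in others) / max(len(others), 1)
        return scores

    # Hypothetical models represented by identifiers and a precomputed
    # symmetric similarity table.
    table = {("a", "b"): 0.8, ("a", "c"): 0.7, ("b", "c"): 0.9}
    sim = lambda x, y: table.get((x, y)) or table.get((y, x), 0.0)
    scores = jury_scores(["a", "b", "c"], sim)
    print(max(scores, key=scores.get))  # the consensus (jury) pick: "b"
    ```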

  14. Evaluation of 3D-Jury on CASP7 models

    PubMed Central

    Kaján, László; Rychlewski, Leszek

    2007-01-01

    Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature available in the Meta Server. PMID:17711571

  15. Building a Library Web Server on a Budget.

    ERIC Educational Resources Information Center

    Orr, Giles

    1998-01-01

    Presents a method for libraries with limited budgets to create reliable Web servers with existing hardware and free software available via the Internet. Discusses staff, hardware and software requirements, and security; outlines the assembly process. (PEN)

  16. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR.

    PubMed

    van der Schot, Gijs; Bonvin, Alexandre M J J

    2015-08-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  17. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    PubMed

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures, selects the most likely candidates and estimates the reliability of the obtained lowest-energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test, TMFoldRec correctly predicts the fold class for about 77% of the given transmembrane protein sequences. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and a systematic sequence-to-structure alignment (threading). The resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. TMFoldWeb is currently the only web server able to predict the fold class of transmembrane proteins while assigning reliability scores to the prediction. The method is prepared for genome-wide analysis with its easy-to-use interface, informative result page and programmatic access. Reflecting the spread of mobile devices in recent years, the web server, as well as the molecule viewer, is responsive and fully compatible with prevalent tablets and mobile devices.

  18. Load Balancing in Distributed Web Caching: A Novel Clustering Approach

    NASA Astrophysics Data System (ADS)

    Tiwari, R.; Kumar, K.; Khan, G.

    2010-11-01

    The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to half of requests; the remaining requests are still sent to the original remote origin servers. In this paper we develop an algorithm for a Distributed Web Cache, which incorporates cooperation among the proxy servers of one cluster. This algorithm uses Distributed Web Cache concepts along with static hierarchies of geographically based clusters of level-one proxy servers, with a dynamic mechanism for enlisting proxy servers during the congestion of one cluster. Congestion and scalability problems are dealt with by the clustering concept used in our approach. This results in a higher cache hit ratio, with less latency delay for requested pages. The algorithm also guarantees data consistency between the original server objects and the proxy cache objects.
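
    A rough sketch of the clustered routing idea, with an assumed cluster layout and congestion test: a request is hashed to a proxy within its local geographic cluster, and spills over to another cluster when the local one is congested:

    ```python
    # Toy clustered cache routing: hash each URL to a proxy in the home
    # cluster for consistent placement; on congestion, try the next cluster.

    import hashlib

    CLUSTERS = {"west": ["w1", "w2"], "east": ["e1", "e2", "e3"]}
    LOAD = {p: 0 for ps in CLUSTERS.values() for p in ps}
    CAPACITY = 2  # toy congestion threshold per proxy

    def pick_proxy(url: str, home: str) -> str:
        order = [home] + [c for c in CLUSTERS if c != home]
        for cluster in order:
            proxies = CLUSTERS[cluster]
            h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
            proxy = proxies[h % len(proxies)]      # consistent placement
            if LOAD[proxy] < CAPACITY:
                LOAD[proxy] += 1
                return proxy
        raise RuntimeError("all clusters congested")

    print(pick_proxy("http://example.com/a", home="west"))
    ```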

  19. GalaxyTBM: template-based modeling by building a reliable core and refining unreliable local regions.

    PubMed

    Ko, Junsu; Park, Hahnbeom; Seok, Chaok

    2012-08-10

    Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.

  20. Mfold web server for nucleic acid folding and hybridization prediction.

    PubMed

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

  1. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model focuses on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main unreliable machine, spare machines, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; the information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.
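
    The N-policy restriction can be sketched as a simple rule (parameters illustrative, not the paper's stochastic model): auxiliary spares are engaged only once the number of failures reaches the threshold N:

    ```python
    # N-policy sketch: hot spares replace failed mains immediately, while
    # auxiliary machines are withheld until the failure count reaches N.

    class DefenseModel:
        def __init__(self, spares: int, aux: int, n_policy: int):
            self.failed = 0
            self.spares = spares    # hot spares, used immediately
            self.aux = aux          # auxiliaries held back by the N-policy
            self.n = n_policy

        def on_failure(self) -> str:
            self.failed += 1
            if self.spares > 0:
                self.spares -= 1
                return "spare activated"
            if self.failed >= self.n and self.aux > 0:
                self.aux -= 1       # N-policy threshold met
                return "auxiliary activated"
            return "degraded: N-policy withholds auxiliaries"

    m = DefenseModel(spares=1, aux=2, n_policy=3)
    for _ in range(3):
        print(m.on_failure())  # spare, degraded, auxiliary
    ```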

  2. Assessment of feasibility of running RSNA's MIRC on a Raspberry Pi: a cost-effective solution for teaching files in radiology.

    PubMed

    Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E

    2015-11-01

    The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant, and cost-effective solutions are desirable. We chose the Medical Imaging Resource Center (MIRC) to build our teaching file; it is offered by the RSNA for free. For the hardware, we chose the Raspberry Pi, a small control board developed by the Raspberry Pi Foundation as a low-cost computer for schools, also used in alternative projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of the software was smooth. The Raspberry Pi handled the task of hosting the teaching file repository for our division very well. Uptime was logged at 100%, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost control board (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.

  3. SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments

    PubMed Central

    Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric

    2014-01-01

    This article introduces the SARA-Coffee web server, a service allowing the online computation of 3D-structure-based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences, SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner, with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data is available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee. PMID:24972831

  4. Filmless PACS in a multiple facility environment

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.

    1996-05-01

    A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi- facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.

  5. Southern California Seismic Network: New Design and Implementation of Redundant and Reliable Real-time Data Acquisition Systems

    NASA Astrophysics Data System (ADS)

    Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.

    2005-12-01

    The Southern California Seismic Network (SCSN) handles more than 2500 high-data-rate channels from more than 380 seismic stations distributed across southern California. These data are imported in real time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate, because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best-practice methods from computer science when implementing the software and hardware configurations for import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently amongst the network nodes, and state-of-the-art networking redundancy technologies. To facilitate maintenance and daily operations we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. The data import is handled differently, using one primary server and a live backup server. These data import servers run failover software that allows automatic role switching from primary to shadow in case of failure. Similar to the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export away from acquisition, we are also working on new approaches to separate real-time processing and rapid, reliable archiving of real-time data. Further, improved network security is an integral part of the new design. Redundant firewalls will provide secure data imports, exports, and acquisition as well as DMZ zones for web servers and other publicly available servers. We will present the detailed design of this new configuration that is currently being implemented by the SCSN at Caltech. The design principles are general enough to be of use to most regional seismic networks.
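
    The primary/shadow role switching described above can be sketched with a heartbeat timeout; the timings and class names are assumptions, not the SCSN implementation:

    ```python
    # Heartbeat-based failover sketch: the shadow import server watches the
    # primary's heartbeat and promotes itself when heartbeats stop arriving.

    import time

    class ImportServer:
        def __init__(self, role: str, timeout: float = 5.0):
            self.role = role                    # "primary" or "shadow"
            self.timeout = timeout
            self.last_heartbeat = time.monotonic()

        def heartbeat_received(self) -> None:
            self.last_heartbeat = time.monotonic()

        def check_peer(self) -> None:
            # Called periodically on the shadow.
            silent = time.monotonic() - self.last_heartbeat
            if self.role == "shadow" and silent > self.timeout:
                self.role = "primary"           # automatic role switch
                print("shadow promoted: taking over data import")

    shadow = ImportServer("shadow", timeout=0.1)
    time.sleep(0.2)                             # the primary goes silent
    shadow.check_peer()
    ```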

  6. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    PubMed

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to go to a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely a server and a patient. Recent research includes a patient's biometric information as well as a password to design remote user authentication schemes that enhance the security level. In a single-server environment, one server is responsible for providing services to all the authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for the multi-server environment. In this paper, we show that in their scheme, a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in the multi-server environment where patients register once with a root telecare server called the registration center (RC) to get services from all the telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.

  7. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use; it ensures compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resources and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs (the Hadoop/Big Data case); B. transactional applications (any application that processes continuous request/response transactions). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add and delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use metrics on such a queue to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to get to the level of load (CPU/data) on each server to characterize the size requirement? How do we learn that based on job type?
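
    As a toy rendition of the sizing question posed above, a quasi-stationary M/M/c-style heuristic picks the smallest (and hence cheapest) server count that keeps utilization below a target; this heuristic is an illustration under assumed parameters, not the authors' model:

    ```python
    # Queue-driven sizing sketch: choose the smallest server count c such
    # that utilization rho = lambda / (c * mu) stays below a target, then
    # price the pool.

    import math

    def servers_needed(arrival_rate: float, service_rate: float,
                       max_utilization: float = 0.8) -> int:
        """Smallest c with rho = arrival_rate / (c * service_rate) <= target."""
        c = math.ceil(arrival_rate / (service_rate * max_utilization))
        return max(c, 1)

    def hourly_cost(c: int, price_per_server: float) -> float:
        return c * price_per_server

    c = servers_needed(arrival_rate=120.0, service_rate=10.0)
    print(c, hourly_cost(c, price_per_server=0.25))  # 15 servers, $3.75/h
    ```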

  8. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  9. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

    The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, the Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture; it offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.

  10. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  11. CPHmodels-3.0--remote homology modeling using structure-guided sequence profiles.

    PubMed

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole; Petersen, Thomas Nordahl

    2010-07-01

    CPHmodels-3.0 is a web server predicting protein 3D structure by use of single-template homology modeling. The server employs a hybrid of the scoring functions of CPHmodels-2.0 and a novel remote homology-modeling algorithm. An attempt is first made to model a query sequence using the fast CPHmodels-2.0 profile-profile scoring function suitable for close homology modeling. The new, computationally costly remote homology-modeling algorithm is engaged only if no suitable PDB template is identified in the initial search. CPHmodels-3.0 was benchmarked in the CASP8 competition and produced models for 94% of the targets (117 out of 128); 74% were predicted as high-reliability models (87 out of 117). These achieved an average RMSD of 4.6 Å when superimposed onto the 3D structure. The remaining 26% low-reliability models (30 out of 117) superimposed onto the true 3D structure with an average RMSD of 9.3 Å. These performance values place the CPHmodels-3.0 method in the group of high-performing 3D prediction tools. Besides its accuracy, one of the important features of the method is its speed. For most queries, the response time of the server is <20 min. The web server is available at http://www.cbs.dtu.dk/services/CPHmodels/.

  12. Mfold web server for nucleic acid folding and hybridization prediction

    PubMed Central

    Zuker, Michael

    2003-01-01

    The abbreviated name, ‘mfold web server’, describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and ‘energy dot plots’, are available for the folding of single sequences. A variety of ‘bulk’ servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as ‘MFOLDROOT’. PMID:12824337

  13. Architecture-Based Reliability Analysis of Web Services

    ERIC Educational Resources Information Center

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  14. State dependent arrival in bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations and threshold

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.

    2017-11-01

    The objective of this paper is to analyse state-dependent arrival in a bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations, a threshold and a constant retrial policy. Primary customers arrive into the system in bulk with different arrival rates λa and λb. If arriving customers find the server busy, the entire batch joins the orbit. Customers from the orbit request service one by one with constant retrial rate γ. On the other hand, if arriving customers find the server idle, they are served in batches according to the general bulk service rule. After service completion, customers may request service again with probability δ (feedback) or leave the system with probability 1 - δ. At a service completion epoch, if the orbit size is zero, the server leaves for multiple vacations. The server continues the vacations until the orbit size reaches the value N (N > b). At vacation completion, if the orbit size is N, the server becomes ready to provide service for customers from the main pool or from the orbit. For the designed queueing model, the probability generating function of the queue size at an arbitrary time is obtained by using the supplementary variable technique. Various performance measures are derived, with suitable numerical illustrations.

  15. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs) to collect and summarize key information from system-wide data sources could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of collection server failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from the high-throughput data streams in real time in a dashboard viewer and compared the pre- and post-MoMs phases to evaluate data collection performance as availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.
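
    The merge step of a triple-redundant collector can be sketched as a set union over per-server coverage reports, flagging beds that no server currently covers; the data shapes are assumptions for illustration:

    ```python
    # MoMs-style merge sketch: three collection servers each report which bed
    # units they cover; a bed counts as collected if at least one server sees
    # it, and uncovered beds are surfaced as gaps for the dashboard.

    def merge_coverage(reports: dict[str, set[str]], all_beds: set[str]):
        covered = set().union(*reports.values()) if reports else set()
        gaps = all_beds - covered
        rate = 100.0 * len(covered) / len(all_beds)
        return rate, gaps

    reports = {
        "server1": {"bed01", "bed02"},
        "server2": {"bed02", "bed03"},
        "server3": {"bed01", "bed03", "bed04"},
    }
    rate, gaps = merge_coverage(reports, {f"bed{i:02d}" for i in range(1, 6)})
    print(f"{rate:.1f}% covered, gaps: {sorted(gaps)}")  # 80.0%, ['bed05']
    ```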

  16. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    PubMed

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  17. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    PubMed Central

    Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580

  18. Saguaro: a distributed operating system based on pools of servers. Annual report, 1 January 1984-31 December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, G.R.

    1986-03-03

    Prototypes of components of the Saguaro distributed operating system were implemented, and the design of the entire system was refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.

  19. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  20. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Cyber-T web server: differential analysis of high-throughput data.

    PubMed

    Kayala, Matthew A; Baldi, Pierre

    2012-07-01

    The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001; 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
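
    The heart of the method above is the regularized variance estimate. Following Baldi and Long (2001), the empirical variance of the n replicates for a probe is blended with a background variance estimated from probes in the same intensity neighbourhood, weighted by a pseudo-count nu0. The following is a minimal sketch with illustrative parameter names and a simple degrees-of-freedom choice, not Cyber-T's exact implementation:

      import numpy as np
      from scipy import stats

      def regularized_ttest(x, y, bg_var_x, bg_var_y, nu0=10):
          """Bayesian regularized t-test in the spirit of Baldi & Long (2001).

          x, y         : replicate measurements of one probe in two conditions
          bg_var_x/_y  : background variances pooled from neighbouring probes
          nu0          : pseudo-count weighting the prior against the data
          """
          nx, ny = len(x), len(y)
          # Regularized variances: prior and empirical estimates combined.
          vx = (nu0 * bg_var_x + (nx - 1) * np.var(x, ddof=1)) / (nu0 + nx - 2)
          vy = (nu0 * bg_var_y + (ny - 1) * np.var(y, ddof=1)) / (nu0 + ny - 2)
          t = (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)
          df = nx + ny - 2  # simple illustrative choice; the prior effectively adds df
          return t, 2 * stats.t.sf(abs(t), df)

      # Three replicates per condition -- exactly where regularization helps.
      print(regularized_ttest([5.1, 5.3, 5.0], [6.0, 6.4, 6.1], 0.04, 0.04))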

  2. PTB's Time and Frequency Activities in 2006: New DCF77 Electronics, New NTP Servers, and Calibration Activities

    DTIC Science & Technology

    2007-01-01

    ...(PTTI) Meeting... (TWSTFT) is being routinely performed with several European and US stations. On the initiative of NICT, a TWSTFT link was... During the last 2 years, PTB has upgraded its TWSTFT and GPS capabilities in order to achieve better reliability and robustness against system failures... NTP server, and, briefly, the calibration of the international time links, i.e. the result of the latest calibration of the TWSTFT links to the USNO...

  3. mtDNA-Server: next-generation sequencing data analysis of human mitochondrial DNA in the cloud.

    PubMed

    Weissensteiner, Hansi; Forer, Lukas; Fuchsberger, Christian; Schöpf, Bernd; Kloss-Brandstätter, Anita; Specht, Günther; Kronenberg, Florian; Schönherr, Sebastian

    2016-07-08

    Next generation sequencing (NGS) allows investigating mitochondrial DNA (mtDNA) characteristics such as heteroplasmy (i.e. intra-individual sequence variation) to a higher level of detail. While several pipelines for analyzing heteroplasmies exist, issues in usability, accuracy of results and interpreting final data limit their usage. Here we present mtDNA-Server, a scalable web server for the analysis of mtDNA studies of any size with a special focus on usability as well as reliable identification and quantification of heteroplasmic variants. The mtDNA-Server workflow includes parallel read alignment, heteroplasmy detection, artefact or contamination identification, variant annotation as well as several quality control metrics, often neglected in current mtDNA NGS studies. All computational steps are parallelized with Hadoop MapReduce and executed graphically with Cloudgene. We validated the underlying heteroplasmy and contamination detection model by generating four artificial sample mix-ups on two different NGS devices. Our evaluation data shows that mtDNA-Server detects heteroplasmies and artificial recombinations down to the 1% level with perfect specificity and outperforms existing approaches regarding sensitivity. mtDNA-Server is currently able to analyze the 1000G Phase 3 data (n = 2,504) in less than 5 h and is freely accessible at https://mtdna-server.uibk.ac.at. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
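
    To make the heteroplasmy detection concrete: at each mtDNA position the question is whether the minor-allele read fraction exceeds a level (here 1%) that sequencing error alone is unlikely to produce. The sketch below uses a plain binomial test against an assumed per-base error rate; it is a simplification for illustration, not mtDNA-Server's actual detection model.

      from scipy.stats import binomtest

      def heteroplasmy_call(minor_count, depth, error_rate=0.002, min_level=0.01):
          """Flag a site as heteroplasmic if the minor-allele fraction exceeds
          min_level and is unlikely to arise from sequencing error alone."""
          frac = minor_count / depth
          if frac < min_level:
              return False, frac
          # One-sided binomial test against the assumed per-base error rate.
          p = binomtest(minor_count, depth, error_rate, alternative="greater").pvalue
          return p < 1e-6, frac

      print(heteroplasmy_call(minor_count=60, depth=4000))  # 1.5% level -> (True, 0.015)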

  4. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    PubMed

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate impulse noise on DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution in which the burden of packet-loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error-protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
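
    The repair path described above can be pictured as a small client-side decision procedure: on detecting a lost packet, a set-top box first issues redundant requests to a few peers and only falls back to the edge Retransmission Server if the peers cannot repair in time. The objects and method names below are hypothetical stubs sketching that control flow under stated assumptions, not the PAR wire protocol.

      import random

      class Node:
          """Stub standing in for a peer set-top box or the Retransmission Server."""
          def __init__(self, cache):
              self.cache = cache
          def fetch(self, seq_no, timeout_ms):
              return self.cache.get(seq_no)  # None models a miss or timeout

      def request_repair(seq_no, peers, retx_server, deadline_ms=200):
          """Try a couple of randomly chosen peers before the unicast server."""
          for peer in random.sample(peers, min(2, len(peers))):
              packet = peer.fetch(seq_no, timeout_ms=deadline_ms // 2)
              if packet is not None:      # a peer still held the multicast packet
                  return packet
          # Peers missed it too (e.g. correlated loss): fall back to the edge.
          return retx_server.fetch(seq_no, timeout_ms=deadline_ms)

      peers = [Node({7: b"pkt7"}), Node({})]
      print(request_repair(7, peers, retx_server=Node({7: b"pkt7"})))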

  6. A PDA-based flexible telecommunication system for telemedicine applications.

    PubMed

    Nazeran, Homer; Setty, Sunil; Haltiwanger, Emily; Gonzalez, Virgilio

    2004-01-01

    Technology has been used to deliver health care at a distance for many years. Telemedicine is a rapidly growing area, and recent studies have addressed prehospital care of patients in emergency cases. In this work we developed a compact, reliable and low-cost PDA-based telecommunication device for telemedicine applications that transmits audio, still images and vital signs from a remote site to a fixed station, such as a clinic or a hospital, in real time. This was achieved with a client-server architecture. A Pocket PC, a miniature camera and a hands-free microphone were used at the client site, and a desktop computer running the Windows XP operating system was used as the server, located at the fixed station. The system was implemented on the TCP/IP and HTTP protocols. Field tests have shown that the system can reliably transmit still images, audio and sample vital signs from a simulated remote site to a fixed station over either a wired or wireless network in real time. The Pocket PC was used at the client site because of its compact size, low cost and processing capabilities.
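
    The client-server split described here amounts to the client opening a TCP connection to the fixed station and streaming encoded payloads over it. Below is a minimal sketch of the vital-signs direction, with an assumed JSON-over-TCP message format rather than the paper's actual wire protocol; the host name is a placeholder.

      import json
      import socket

      def send_vitals(host, port, vitals):
          """Open a TCP connection to the fixed station and send one
          newline-delimited JSON record of vital signs."""
          with socket.create_connection((host, port)) as sock:
              sock.sendall(json.dumps(vitals).encode("utf-8") + b"\n")

      send_vitals("clinic.example.org", 5000,
                  {"heart_rate": 72, "spo2": 98, "lead_II_mv": [0.1, 0.4, 1.1]})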

  7. Interventions in the alcohol server setting for preventing injuries.

    PubMed

    Ker, K; Chinnock, P

    2006-04-19

    Injuries are a significant public health burden and alcohol intoxication is recognised as a risk factor for injuries. There is increasing attention on supply-side interventions, which aim to modify the environment and context within which alcohol is supplied and consumed. The objective was to quantify the effectiveness of interventions implemented in the server setting for reducing injuries. We searched the Cochrane Injuries Group Specialised Register (September 2004), Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 3, 2004), MEDLINE (January 1966 to September 2004), EMBASE (1980 to 2004, wk 36), other specialised databases and reference lists of articles. We also contacted experts in the field. We included randomised controlled trials (RCTs) and non-randomised controlled studies (NRS) of the effectiveness of interventions administered in the server setting which attempted to modify the conditions under which alcohol is served and consumed, to facilitate sensible alcohol consumption and reduce the occurrence of alcohol-related harm. Two authors independently screened search results and assessed the full texts of potentially relevant studies for inclusion. Data were extracted and methodological quality was examined. Due to variability in the intervention types investigated, a pooled analysis was not appropriate. Twenty studies met the inclusion criteria. Overall methodological quality was poor. Five studies used an injury outcome measure; only one of these studies was randomised. The studies were grouped into broad categories according to intervention type. One NRS investigated server training and estimated a reduction of 23% in single-vehicle night-time crashes in the experimental area (controlled for crashes in the control area). Another NRS examined the impact of a drink-driving service and reported a reduction in injury road crashes of 15% in the experimental area, with no change in the control; no difference was found for fatal crashes. One NRS investigating the impact of a policy intervention reported that, pre-intervention, the serious assault rate in the experimental area was 52% higher than the rate in the control area; after the intervention, the serious assault rate in the experimental area was 37% lower than in the control. The only RCT targeting the server-setting environment with an injury outcome compared toughened glassware (experimental) to annealed glassware (control) on the number of bar staff injuries; a greater number of injuries were detected in the experimental group (relative risk 1.72, 95% CI 1.15 to 2.59). An NRS investigating the impact of an intervention aiming to reduce crime experienced by drinking premises found a lower rate of all crime in the experimental premises (rate ratio 4.6, 95% CI 1.7 to 12, P = 0.01); no difference was found for injury (rate ratio 1.1, 95% CI 0.1 to 10, P = 0.093). The effectiveness of the interventions on patron alcohol consumption is inconclusive. One randomised trial found a statistically significant reduction in observed severe aggression exhibited by patrons. There is some indication of improved server behaviour, but it is difficult to predict what effect this might have on injury risk. There is no reliable evidence that interventions in the alcohol server setting are effective in reducing injury. Compliance with interventions appears to be a problem; hence mandated interventions may be more likely to show an effect. Randomised controlled trials with adequate allocation concealment and blinding are required to improve the evidence base. Further well-conducted non-randomised trials are also needed where random allocation is not feasible.

  8. Interventions in the alcohol server setting for preventing injuries.

    PubMed

    Ker, Katharine; Chinnock, Paul

    2008-07-16

    Injuries are a significant public health burden and alcohol intoxication is recognised as a risk factor for injuries. There is increasing attention on supply-side interventions, which aim to modify the environment and context within which alcohol is supplied and consumed. The objective was to quantify the effectiveness of interventions implemented in the server setting for reducing injuries. We searched the Cochrane Injuries Group Specialised Register (September 2004), Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 3, 2004), MEDLINE (January 1966 to September 2004), EMBASE (1980 to 2004, wk 36), other specialised databases and reference lists of articles. We also contacted experts in the field. We included randomised controlled trials (RCTs) and non-randomised controlled studies (NRS) of the effectiveness of interventions administered in the server setting which attempted to modify the conditions under which alcohol is served and consumed, to facilitate sensible alcohol consumption and reduce the occurrence of alcohol-related harm. Two authors independently screened search results and assessed the full texts of potentially relevant studies for inclusion. Data were extracted and methodological quality was examined. Due to variability in the intervention types investigated, a pooled analysis was not appropriate. Twenty studies met the inclusion criteria. Overall methodological quality was poor. Five studies used an injury outcome measure; only one of these studies was randomised. The studies were grouped into broad categories according to intervention type. One NRS investigated server training and estimated a reduction of 23% in single-vehicle night-time crashes in the experimental area (controlled for crashes in the control area). Another NRS examined the impact of a drink-driving service and reported a reduction in injury road crashes of 15% in the experimental area, with no change in the control; no difference was found for fatal crashes. One NRS investigating the impact of a policy intervention reported that, pre-intervention, the serious assault rate in the experimental area was 52% higher than the rate in the control area; after the intervention, the serious assault rate in the experimental area was 37% lower than in the control. The only RCT targeting the server-setting environment with an injury outcome compared toughened glassware (experimental) to annealed glassware (control) on the number of bar staff injuries; a greater number of injuries were detected in the experimental group (relative risk 1.72, 95% CI 1.15 to 2.59). An NRS investigating the impact of an intervention aiming to reduce crime experienced by drinking premises found a lower rate of all crime in the experimental premises (rate ratio 4.6, 95% CI 1.7 to 12, P = 0.01); no difference was found for injury (rate ratio 1.1, 95% CI 0.1 to 10, P = 0.093). The effectiveness of the interventions on patron alcohol consumption is inconclusive. One randomised trial found a statistically significant reduction in observed severe aggression exhibited by patrons. There is some indication of improved server behaviour, but it is difficult to predict what effect this might have on injury risk. There is no reliable evidence that interventions in the alcohol server setting are effective in reducing injury. Compliance with interventions appears to be a problem; hence mandated interventions may be more likely to show an effect. Randomised controlled trials with adequate allocation concealment and blinding are required to improve the evidence base. Further well-conducted non-randomised trials are also needed where random allocation is not feasible.

  9. A study on M/G/1 retrial G-queue with two phases of service, immediate feedback and working vacations

    NASA Astrophysics Data System (ADS)

    Varalakshmi, M.; Chandrasekaran, V. M.; Saravanarajan, M. C.

    2017-11-01

    In this paper, we discuss the steady-state behaviour of an M/G/1 retrial queueing system with two phases of service and immediate feedback under a working vacation policy, where the regular busy server is affected by the arrival of negative customers. Upon arrival, if a customer finds the server busy, broken down or on working vacation, it enters an orbit; otherwise the customer enters the service area immediately. After service completion, the customer is allowed to make a finite number of immediate feedback attempts. The feedback service also consists of two phases. At the service completion epoch of a positive customer, if the orbit is empty the server goes on a working vacation, during which it works at a lower service rate (the WV period). Using the supplementary variable technique, we derive the steady-state probability generating functions for the system and for the orbit. System performance measures and reliability measures are discussed. Finally, some numerical examples are presented to validate the analytical results.
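
    A discrete-event simulation makes the orbit dynamics concrete. The sketch below models only the bare retrial mechanism (Poisson arrivals, general service times, exponential retrial intervals); negative customers, the two service phases, feedback and working vacations from the model above are all omitted, so it illustrates the queue's skeleton rather than the paper's analysis.

      import heapq
      import random

      def mg1_retrial(lam=0.5, retrial_rate=1.0,
                      service=lambda: random.gammavariate(2, 0.4), horizon=10_000):
          """Toy M/G/1 retrial queue: a customer finding the server busy joins
          the orbit and retries after an exponential delay."""
          busy_until, orbit, served = 0.0, 0, 0
          events = [(random.expovariate(lam), "arrival")]
          while events:
              t, kind = heapq.heappop(events)
              if t > horizon:
                  break
              if kind == "arrival":   # schedule the next Poisson arrival
                  heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
              else:                   # a retrial: the customer leaves the orbit
                  orbit -= 1
              if t >= busy_until:     # server idle: service starts at once
                  busy_until = t + service()
                  served += 1
              else:                   # server busy: (re)join the orbit
                  orbit += 1
                  heapq.heappush(events,
                                 (t + random.expovariate(retrial_rate), "retrial"))
          return served, orbit

      random.seed(1)
      print(mg1_retrial())  # (customers served, customers left in orbit)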

  10. Saguaro: A Distributed Operating System Based on Pools of Servers.

    DTIC Science & Technology

    1988-03-25

    ...asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The... variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring... data bits in a raw communication channel to help keep the source and destination synchronized, Psync explicitly embeds timing information drawn from the...

  11. Outcomes of using wet pooling to detect STEC and Salmonella

    USDA-ARS?s Scientific Manuscript database

    Objective: The objective of this work was to examine the reliability of wet pooling sample broths. Experimental Design & Analysis: Fresh sample enrichment broths (n=737) were used to prepare 148 wet pools of 5 broths each. The initial broths and the pools were screened for STEC and Salmonella. ...

  12. The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned during the COTS selection and integration into SAFS, and the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  13. TFmiR: a web server for constructing and analyzing disease-specific transcription factor and miRNA co-regulatory networks.

    PubMed

    Hamed, Mohamed; Spaniol, Christian; Nazarieh, Maryam; Helms, Volkhard

    2015-07-01

    TFmiR is a freely available web server for deep and integrative analysis of combinatorial regulatory interactions between transcription factors, microRNAs and target genes that are involved in disease pathogenesis. Since the inner workings of cells rely on the correct functioning of an enormously complex system of activating and repressing interactions that can be perturbed in many ways, TFmiR helps to better elucidate cellular mechanisms at the molecular level from a network perspective. The provided topological and functional analyses promote TFmiR as a reliable systems biology tool for researchers across the life science communities. TFmiR web server is accessible through the following URL: http://service.bioinformatik.uni-saarland.de/tfmir. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Data grid: a distributed solution to PACS

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

    In a hospital, various kinds of medical images acquired from different modalities are used and stored in different departments, and each modality usually has several attached workstations to display or process images. For better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious: besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach, developing a storage grid for PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirroring, clustering and grid storage.

  15. Reliability of muscle strength assessment in chronic post-stroke hemiparesis: a systematic review and meta-analysis.

    PubMed

    Rabelo, Michelle; Nunes, Guilherme S; da Costa Amante, Natália Menezes; de Noronha, Marcos; Fachin-Martins, Emerson

    2016-02-01

    Muscle weakness is the main cause of motor impairment among stroke survivors and is associated with reduced peak muscle torque. The aim was to systematically investigate and organize the evidence on the reliability of muscle strength evaluation measures in post-stroke survivors with chronic hemiparesis. Two assessors independently searched four electronic databases in January 2014 (Medline, Scielo, CINAHL, Embase). Inclusion criteria comprised studies on the reliability of muscle strength assessment in adult post-stroke patients with chronic hemiparesis. From the included studies we extracted reliability data, measured by the intraclass correlation coefficient (ICC) or similar. Meta-analyses were conducted only with isokinetic data. Of 450 articles, eight were included in this review. After quality analysis, two studies were considered of high quality. Five different joints were analyzed within the included studies (knee, hip, ankle, shoulder and elbow), with reliability results varying from low to very high (ICCs from 0.48 to 0.99). Meta-analysis results varied from high to very high reliability for knee extension (pooled ICCs from 0.89 to 0.97) and knee flexion (pooled ICCs from 0.84 to 0.91), and showed high reliability for ankle plantar flexion (pooled ICC = 0.85). Objective muscle strength assessment can be reliably used in the lower and upper extremities of post-stroke patients with chronic hemiparesis.
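
    For reference, the pooled coefficients above are typically intraclass correlations computed from a two-way ANOVA decomposition. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures, after Shrout and Fleiss) applied to a subjects-by-sessions score matrix; the torque values are illustrative, not data from the review.

      import numpy as np

      def icc_2_1(scores):
          """ICC(2,1) from an (n subjects) x (k raters/sessions) array."""
          scores = np.asarray(scores, dtype=float)
          n, k = scores.shape
          grand = scores.mean()
          ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
          resid = (scores - scores.mean(axis=1, keepdims=True)
                   - scores.mean(axis=0, keepdims=True) + grand)
          ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                       + k * (ms_cols - ms_err) / n)

      torques = [[98, 101], [120, 118], [76, 80], [135, 131], [110, 112]]
      print(round(icc_2_1(torques), 2))  # high test-retest reliability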

  16. Embedded controller for GEM detector readout system

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dominik, Wojciech; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek

    2013-10-01

    This paper describes the embedded controller used for the multichannel readout system of the GEM detector. The controller is based on an embedded Mini ITX mainboard running the GNU/Linux operating system and offers two interfaces to communicate with the FPGA-based readout system: FPGA configuration and diagnostics are controlled via a low-speed USB-based interface, while high-speed setup of the readout parameters and reception of the measured data are handled by the PCI Express (PCIe) interface. Hardware access is synchronized by a dedicated server written in C. Multiple clients may connect to this server via the TCP/IP network, and different priorities are assigned to individual clients. Specialized protocols have been implemented both for low-level access at the register level and for high-level access with transfer of structured data using the "msgpack" protocol. High-level functionalities have been split between multiple TCP/IP servers for parallel operation. The status of the system may be checked, and basic maintenance performed, via a web interface, while expert access is possible via an SSH server. The system was designed with reliability and flexibility in mind.
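
    The low-level protocol described above (structured messages over TCP, encoded with msgpack) can be sketched as a tiny register-access server. The message fields ("op", "addr", "value") are assumptions for illustration, not the actual protocol of the GEM readout controller.

      import socket
      import msgpack  # pip install msgpack

      def serve_registers(host="0.0.0.0", port=5555):
          """Answer msgpack-encoded read/write requests for a register map."""
          registers = {}
          with socket.create_server((host, port)) as srv:
              conn, _addr = srv.accept()
              unpacker = msgpack.Unpacker()
              while data := conn.recv(4096):
                  unpacker.feed(data)           # stream-decode partial input
                  for msg in unpacker:
                      if msg.get("op") == "write":
                          registers[msg["addr"]] = msg["value"]
                      reply = {"addr": msg["addr"],
                               "value": registers.get(msg["addr"], 0)}
                      conn.sendall(msgpack.packb(reply))

    In the same spirit, a client would send msgpack.packb({"op": "read", "addr": 0x10}) and decode the reply with msgpack.unpackb().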

  17. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool: residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat; the differences were negligible with respect to hydrostatic weighing error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute intervals between tests. Multiple correlations yielded reliability coefficients for body density and per cent body fat of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is a valid, reliable and favourable method of hydrostatic weighing. PMID:3697596
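
    The underlying computation is classic two-compartment densitometry: body density from the weights in air and in water corrected for lung gas, then per cent fat from density via, for example, the Siri equation. Below is a sketch using the standard textbook formulas; the constants and sample numbers are illustrative, not the study's data. In the TLC methods above, total lung capacity (measured vital capacity plus residual volume) replaces residual volume, and the net underwater weight can then be negative because the subject floats at full inhalation.

      def body_density(wt_air_kg, wt_water_kg, water_density, lung_gas_l,
                       gi_gas_l=0.1):
          """Hydrostatic weighing: Db = Wa / ((Wa - Ww)/Dw - V_lungs - V_gi)."""
          body_volume_l = ((wt_air_kg - wt_water_kg) / water_density
                           - lung_gas_l - gi_gas_l)
          return wt_air_kg / body_volume_l

      def percent_fat_siri(db):
          return 495.0 / db - 450.0  # Siri (1961) two-compartment equation

      db = body_density(70.0, 3.4, 0.9957, lung_gas_l=1.2)  # RV-style numbers
      print(round(percent_fat_siri(db), 1))  # ~13.8 % body fat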

  18. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool: residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat; the differences were negligible with respect to hydrostatic weighing error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute intervals between tests. Multiple correlations yielded reliability coefficients for body density and per cent body fat of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is a valid, reliable and favourable method of hydrostatic weighing.

  19. FireProt: web server for automated design of thermostable proteins

    PubMed Central

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    Abstract There is continuous interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for predicting the effect of mutations on protein stability have been developed recently. However, the existing tools typically predict only single-point mutations with a small effect on protein stability, and predictions must be followed by laborious protein expression, purification and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  20. Prokaryotic Contig Annotation Pipeline Server: Web Application for a Prokaryotic Genome Annotation Pipeline Based on the Shiny App Package.

    PubMed

    Park, Byeonghyeok; Baek, Min-Jeong; Min, Byoungnam; Choi, In-Geol

    2017-09-01

    Genome annotation is a primary step in genomic research. To establish a lightweight and portable prokaryotic genome annotation pipeline for use in individual laboratories, we developed a Shiny app package designated "P-CAPS" (Prokaryotic Contig Annotation Pipeline Server). The package is composed of R and Python scripts that integrate publicly available annotation programs into a server application. P-CAPS is not only a browser-based interactive application but also a distributable Shiny app package that can be installed on any personal computer. The final annotation is provided in various standard formats and is summarized in an R markdown document. Annotation can be visualized and examined with a public genome browser. A benchmark test showed that the annotation quality and completeness of P-CAPS are reliable and comparable to those of currently available public pipelines.

  1. FRODOCK 2.0: fast protein-protein docking server.

    PubMed

    Ramírez-Aportela, Erney; López-Blanco, José Ramón; Chacón, Pablo

    2016-08-01

    The prediction of protein-protein complexes from the structures of unbound components is a challenging and powerful strategy to decipher the mechanism of many essential biological processes. We present a user-friendly protein-protein docking server based on an improved version of FRODOCK that includes a complementary knowledge-based potential. The web interface provides a very effective tool to explore and select protein-protein models and interactively screen them against experimental distance constraints. The competitive success rates and efficiency achieved allow the retrieval of reliable potential protein-protein binding conformations that can be further refined with more computationally demanding strategies. The server is free and open to all users with no login requirement at http://frodock.chaconlab.org. Contact: pablo@chaconlab.org. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Implementation of a real-time multi-channel gateway server in ubiquitous integrated biotelemetry system for emergency care (UIBSEC).

    PubMed

    Cheon, Gyeongwoo; Shin, Il Hyung; Jung, Min Yang; Kim, Hee Chan

    2009-01-01

    We developed a gateway server to support various types of bio-signal monitoring devices for ubiquitous emergency healthcare in a reliable, effective and scalable way. The server provides multiple channels supporting real-time N-to-N client connections. We applied our system to four types of health monitoring devices, including a 12-channel electrocardiograph (ECG), oxygen saturation (SpO2) and medical imaging devices (an ultrasonograph and a digital skin microscope). Different types of telecommunication networks were tested: WIBRO, CDMA, wireless LAN and wired Internet. We measured the performance of our system in terms of transmission rate and the number of simultaneous connections. The results show that the proposed network communication strategy can be successfully applied to ubiquitous emergency healthcare services, providing a rate fast enough for real-time video transmission and multiple connections among patients and medical personnel.

  3. Registered File Support for Critical Operations Files at the Space Infrared Telescope Facility (SIRTF)

    NASA Technical Reports Server (NTRS)

    Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

    2001-01-01

    The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS) which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture, current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.

  4. Implementing eco friendly highly reliable upload feature using multi 3G service

    NASA Astrophysics Data System (ADS)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

    The current trend favours eco-friendly Internet access; in this research, eco-friendly is understood as minimum power consumption. The selected devices have low operational power consumption and normally consume no power while hibernating during idle state. For reliability, a router with an internal load-balancing feature improves on our previous research on multi-3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web servers residing in the public cloud. The demand is not only for speed but for high reliability of access as well: high reliability mitigates both the direct and indirect high costs of repeated attempts to upload and download large files. Nomadic and mobile computer users need a viable solution. A solution for downloading information was previously proposed and tested, with promising results. That result is now extended to provide a reliable access line, by means of redundancy and automatic reconfiguration, for uploading and downloading large information files to a Web server in the cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup. A router able to load-balance across several WAN lines is chosen, with the WAN lines constructed from multiple 3G lines. The router supports accessing the Internet over more than one 3G line, which increases the reliability and availability of Internet access, as the second line immediately takes over if the first is disturbed.

  5. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  6. The Standard Autonomous File Server, a Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper will describe the unique problems and lessons learned during the COTS selection and integration into SAFS, and the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  7. Reliability of abstracting performance measures: results of the cardiac rehabilitation referral and reliability (CR3) project.

    PubMed

    Thomas, Randal J; Chiu, Jensen S; Goff, David C; King, Marjorie; Lahr, Brian; Lichtman, Steven W; Lui, Karen; Pack, Quinn R; Shahriary, Melanie

    2014-01-01

    Assessment of the reliability of performance measure (PM) abstraction is an important step in PM validation. Reliability has not been previously assessed for abstracting PMs for the referral of patients to cardiac rehabilitation (CR) and secondary prevention (SP) programs. To help validate these PMs, we carried out a multicenter assessment of their reliability. Hospitals and clinical practices from around the United States were invited to participate in the Cardiac Rehabilitation Referral Reliability (CR3) Project. Twenty-nine hospitals and 23 outpatient centers expressed interest in participating. Seven hospitals and 6 outpatient centers met participation criteria and submitted completed data. Site coordinators identified 35 patients whose charts were reviewed by 2 site abstractors twice, 1 week apart. Percent agreement and the Cohen κ statistic were used to describe intra- and interabstractor reliability for patient eligibility for CR/SP, patient exceptions for CR/SP referral, and documented referral to CR/SP. Results were obtained from within-site data, as well as from pooled data of all inpatient and all outpatient sites. We found that intra-abstractor reliability reflected excellent repeatability (≥ 90% agreement; κ ≥ 0.75) for ratings of CR/SP eligibility, exceptions, and referral, both from pooled and site-specific analyses of inpatient and outpatient data. Similarly, the interabstractor agreement from pooled analysis ranged from good to excellent for the 3 items, although with slightly lower measures of reliability. Abstraction of PMs for CR/SP referral has high reliability, supporting the use of these PMs in quality improvement initiatives aimed at increasing CR/SP delivery to patients with cardiovascular disease.
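
    The two statistics reported above are straightforward to compute for a pair of abstractors. Below is a minimal sketch of percent agreement and Cohen's kappa for one binary PM item, with made-up labels; kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance.

      from collections import Counter

      def agreement_and_kappa(rater1, rater2):
          """Percent agreement and Cohen's kappa for two raters' labels."""
          n = len(rater1)
          p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
          c1, c2 = Counter(rater1), Counter(rater2)
          p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
          return p_o, (p_o - p_e) / (1 - p_e)

      # 35 charts: the abstractors disagree on 2 referral ratings.
      a1 = ["referred"] * 30 + ["not referred"] * 5
      a2 = ["referred"] * 28 + ["not referred"] * 7
      print(agreement_and_kappa(a1, a2))  # ~(0.94, 0.80): 'excellent' by the cut-offs above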

  8. DICOM-compliant PACS with CD-based image archival

    NASA Astrophysics Data System (ADS)

    Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.

    1998-07-01

    This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing locally stored image data to be viewed directly in a WWW browser without transferring it to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.

  9. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
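
    The client-driven design described above means each worker simply polls the server for work it is able to run, which is why nodes can sit behind firewalls and why load balancing falls out for free. Below is a sketch of such a pull loop, with hypothetical endpoint and field names rather than JobCenter's actual API.

      import subprocess
      import time

      import requests  # third-party HTTP client

      SERVER = "https://jobcenter.example.org/api"  # hypothetical endpoint

      def worker_loop(worker_id, job_types):
          """Poll for a job this worker can run, execute it, report back."""
          while True:
              job = requests.post(f"{SERVER}/request_job",
                                  json={"worker": worker_id,
                                        "types": job_types}).json()
              if not job:              # queue empty: back off, then retry
                  time.sleep(30)
                  continue
              run = subprocess.run(job["command"], shell=True,
                                   capture_output=True, text=True)
              requests.post(f"{SERVER}/report",
                            json={"job_id": job["id"],
                                  "exit_code": run.returncode,
                                  "output": run.stdout[-10_000:]})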

  10. CICS Region Virtualization for Cost Effective Application Development

    ERIC Educational Resources Information Center

    Khan, Kamal Waris

    2012-01-01

    Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…

  11. LabKey Server: an open source platform for scientific data integration, analysis and collaboration.

    PubMed

    Nelson, Elizabeth K; Piehler, Britt; Eckels, Josh; Rauch, Adam; Bellew, Matthew; Hussey, Peter; Ramsay, Sarah; Nathe, Cory; Lum, Karl; Krouse, Kevin; Stearns, David; Connolly, Brian; Skillman, Tom; Igra, Mark

    2011-03-09

    Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) submitting specimen requests across collaborating organizations, (ii) graphically defining new experimental data types, metadata and wizards for data collection, (iii) transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database, (iv) securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays, (v) interacting dynamically with external data sources, (vi) tracking study participants and cohorts over time, (vii) developing custom interfaces using client libraries, and (viii) authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks roughly 27,000 assay runs, 860,000 specimen vials and 1,300,000 vial transfers. Sharing data, analysis tools and infrastructure can speed the efforts of large research consortia by enhancing efficiency and enabling new insights. The Atlas installation of LabKey Server demonstrates the utility of the LabKey platform for collaborative research. Stable, supported builds of LabKey Server are freely available for download at http://www.labkey.org. Documentation and source code are available under the Apache License 2.0.

  12. LabKey Server: An open source platform for scientific data integration, analysis and collaboration

    PubMed Central

    2011-01-01

    Background Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. Results To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) submitting specimen requests across collaborating organizations, (ii) graphically defining new experimental data types, metadata and wizards for data collection, (iii) transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database, (iv) securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays, (v) interacting dynamically with external data sources, (vi) tracking study participants and cohorts over time, (vii) developing custom interfaces using client libraries, and (viii) authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks roughly 27,000 assay runs, 860,000 specimen vials and 1,300,000 vial transfers. Conclusions Sharing data, analysis tools and infrastructure can speed the efforts of large research consortia by enhancing efficiency and enabling new insights. The Atlas installation of LabKey Server demonstrates the utility of the LabKey platform for collaborative research. Stable, supported builds of LabKey Server are freely available for download at http://www.labkey.org. Documentation and source code are available under the Apache License 2.0. PMID:21385461

  13. A Framework For Fault Tolerance In Virtualized Servers

    DTIC Science & Technology

    2016-06-01

    ...effects into the system. Decrease in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability... How Data Guard Synchronizes Standby Databases: primary and standby databases in Oracle Data...

  14. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
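
    The binomial calculation referred to above gives, for k detections in n inspections of like flaws, a lower confidence bound on the probability of detection (POD). Below is a sketch of the standard one-sided Clopper-Pearson bound, which reproduces the familiar "29 of 29 demonstrates 90% POD at 95% confidence" rule; it illustrates the type of calculation, not the program's exact code.

      from scipy.stats import beta

      def pod_lower_bound(k, n, confidence=0.95):
          """One-sided lower confidence bound on POD from k detections
          in n trials (Clopper-Pearson, via the beta quantile)."""
          if k == 0:
              return 0.0
          return float(beta.ppf(1.0 - confidence, k, n - k + 1))

      # 29 detections out of 29 trials -> POD >= 0.902 at 95% confidence.
      print(round(pod_lower_bound(29, 29), 3))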

  15. CASTp 3.0: computed atlas of surface topography of proteins.

    PubMed

    Tian, Wei; Chen, Chang; Lei, Xue; Zhao, Jieling; Liang, Jie

    2018-06-01

    Geometric and topological properties of protein structures, including surface pockets, interior cavities and cross channels, are of fundamental importance for proteins to carry out their functions. Computed Atlas of Surface Topography of proteins (CASTp) is a web server that provides online services for locating, delineating and measuring these geometric and topological properties of protein structures. It has been widely used since its inception in 2003. In this article, we present the latest version of the web server, CASTp 3.0. CASTp 3.0 continues to provide reliable and comprehensive identifications and quantifications of protein topography. In addition, it now provides: (i) imprints of the negative volumes of pockets, cavities and channels, (ii) topographic features of biological assemblies in the Protein Data Bank, (iii) improved visualization of protein structures and pockets, and (iv) more intuitive structural and annotated information, including information of secondary structure, functional sites, variant sites and other annotations of protein residues. The CASTp 3.0 web server is freely accessible at http://sts.bioe.uic.edu/castp/.

  16. Distributed Operations Planning

    NASA Technical Reports Server (NTRS)

    Fox, Jason; Norris, Jeffrey; Powell, Mark; Rabe, Kenneth; Shams, Khawaja

    2007-01-01

    Maestro software provides a secure and distributed mission planning system for long-term missions in general, and the Mars Exploration Rover Mission (MER) specifically. Maestro, the successor to the Science Activity Planner, has a heavy emphasis on portability and distributed operations, and requires no data replication or expensive hardware, instead relying on a set of services running on JPL institutional servers. Maestro works on most current computers with network connections, including laptops. When browsing downlink data from a spacecraft, Maestro functions much like a Web browser. After authenticating the user, it connects to a database server to query an index of data products. It then contacts a Web server to download and display the actual data products. The software also includes collaboration support based upon a highly reliable messaging system. Modifications made to targets in one instance are quickly and securely transmitted to other instances of Maestro. The back end developed for Maestro could benefit many future missions by reducing the cost of a centralized operations system architecture.

  17. The development of a tele-monitoring system for physiological parameters based on the B/S model.

    PubMed

    Shuicai, Wu; Peijie, Jiang; Chunlan, Yang; Haomin, Li; Yanping, Bai

    2010-01-01

    A new physiological multi-parameter remote monitoring system was developed based on the browser/server (B/S) model. The system consists of a server monitoring center, the Internet and PC-based multi-parameter monitors. Using the B/S model, clients can browse web pages via the server monitoring center and download and install ActiveX controls. The physiological parameters are collected, displayed and transmitted remotely. The experimental results show that the system is stable, reliable and operates in real time. The system is suitable for physiological multi-parameter remote monitoring in family and community healthcare. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. 78 FR 57375 - Sunshine Act Meeting Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-18

    ... Customer Matters, Reliability, Security and Market Operations. A-3 AD12-12-000 Coordination Between Natural Gas and Electricity Markets. Electric E-1 ER12-1179-003 Southwest Power Pool, Inc. ER12-1179-004 ER12... E-11 ER13-2031-000 Southwest Power Pool, Inc. ER13-2033-000 E-12 ER12-2292-001 Southwest Power Pool...

  19. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles to data sharing, including unwillingness to share, the lack of a flexible data model for providing context information, the difficulty of sharing syntactically and semantically consistent data across distributed institutions, and the high cost of providing tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform to support data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central-server-based data sharing architecture with one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105

  20. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks

    PubMed Central

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-01-01

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments, the small size, limited capabilities and constrained data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of WBSN devices, data security is one of the prevailing issues that affect the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users’ medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing the security of data managed in remote servers has necessarily become an integral requirement of WBSN applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSN deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models. PMID:28475110

  1. Operational Experience with the Frontier System in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter

    2012-06-20

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
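
    The caching tier described above is plain HTTP, so the client side is easy to picture. The following minimal Python sketch (with a hypothetical Squid host and Frontier URL; the real Frontier client encodes SQL queries into the URL) shows the pattern of routing a conditions-data request through a nearby Squid proxy, so that repeated queries are served from cache instead of reaching the central Oracle database.

      import requests

      # Hypothetical Squid proxy near the client and a hypothetical
      # Launchpad URL; real Frontier encodes its queries in the URL.
      proxies = {"http": "http://squid.example.org:3128"}
      url = "http://frontier.example.org:8000/Frontier/type=frontier_request"

      # The proxy caches the response, so identical queries from nearby
      # clients are answered locally until the cached entry expires.
      response = requests.get(url, proxies=proxies, timeout=10)
      response.raise_for_status()
      print(len(response.content), "bytes of conditions data")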

  2. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks.

    PubMed

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-05-05

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments, the small size, limited capabilities and constrained data processing capacities of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of WBSN devices, data security is one of the prevailing issues that affect the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users' medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing the security of data managed in remote servers has necessarily become an integral requirement of WBSN applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users to store and share data, thus exhibiting the potential for WBSN deployments within community environments. Furthermore, our scheme enriches user experiences by offering public verifiability, forward security mechanisms and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM) by outperforming existing cloud-assisted WBSN models.

  3. Operational Experience with the Frontier System in CMS

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen

    2012-12-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  4. The validation of a swimming turn wall-contact-time measurement system: a touchpad application reliability study.

    PubMed

    Brackley, Victoria; Ball, Kevin; Tor, Elaine

    2018-05-12

    The effectiveness of the swimming turn is highly influential to overall performance in competitive swimming. The push-off or wall contact, within the turn phase, is directly involved in determining the speed at which the swimmer leaves the wall. Therefore, it is paramount to develop reliable methods to measure wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App to measure wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis and effect sizes, while the intraclass correlation coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R² value of 0.80, and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results from these concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.

  5. Reliability and validity of depression assessment among persons with HIV in sub-Saharan Africa: systematic review and meta-analysis

    PubMed Central

    Tsai, Alexander C.

    2014-01-01

    OBJECTIVES To systematically review the reliability and validity of instruments used to screen for major depressive disorder or assess depression symptom severity among persons with HIV in sub-Saharan Africa. DESIGN Systematic review and meta-analysis. METHODS A systematic evidence search protocol was applied to seven bibliographic databases. Studies examining the reliability and/or validity of depression assessment tools were selected for inclusion if they were based on data collected from HIV-positive adults in any African member state of the United Nations. Random-effects meta-analysis was employed to calculate pooled estimates of depression prevalence. In a subgroup of studies of criterion-related validity, the bivariate random-effects model was used to calculate pooled estimates of sensitivity and specificity. RESULTS Of 1,117 records initially identified, I included 13 studies of 5,373 persons with HIV in 7 sub-Saharan African countries. Reported estimates of Cronbach’s alpha ranged from 0.63–0.95, and analyses of internal structure generally confirmed the existence of a depression-like construct accounting for a substantial portion of variance. The pooled prevalence of probable depression was 29.5% (95% CI, 20.5–39.4), while the pooled prevalence of major depressive disorder was 13.9% (95% CI, 9.7–18.6). The Center for Epidemiologic Studies-Depression scale was the most frequently studied instrument, with a pooled sensitivity of 0.82 (95% CI, 0.73–0.87) for detecting major depressive disorder. CONCLUSIONS Depression screening instruments yielded relatively high false positive rates. Overall, few studies described the reliability and/or validity of depression instruments in sub-Saharan Africa. PMID:24853307

  6. ECFS: A decentralized, distributed and fault-tolerant FUSE filesystem for the LHCb online farm

    NASA Astrophysics Data System (ADS)

    Rybczynski, Tomasz; Bonaccorsi, Enrico; Neufeld, Niko

    2014-06-01

    The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events" a large farm of x86-servers (~2000 nodes) has been put in place. These servers boot from and run from NFS, however they use their local disk to temporarily store data, which cannot be processed in real-time ("data-deferring"). These events are subsequently processed, when there are no live-data coming in. The effective CPU power is thus greatly increased. This gain in CPU power depends critically on the availability of the local disks. For cost and power-reasons, mirroring (RAID-1) is not used, leading to a lot of operational headache with failing disks and disk-errors or server failures induced by faulty disks. To mitigate these problems and increase the reliability of the LHCb farm, while at the same time keeping cost and power-consumption low, an extensive research and study of existing highly available and distributed file systems has been done. While many distributed file systems provide reliability by "file replication", none of the evaluated ones supports erasure algorithms. A decentralised, distributed and fault-tolerant "write once read many" file system has been designed and implemented as a proof of concept, with the main goals of providing fault tolerance without using file replication techniques - expensive in terms of disk space - and of providing a unique namespace. This paper describes the design and the implementation of the Erasure Codes File System (ECFS) and presents the specialised FUSE interface for Linux. Depending on the encoding algorithm, ECFS will use a certain number of target directories as a backend to store the segments that compose the encoded data. When target directories are mounted via nfs/autofs, ECFS will act as a file system over a network/block-level RAID spanning multiple servers.
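
    As a rough illustration of why erasure coding saves space relative to replication, the sketch below implements the simplest possible (k+1, k) scheme: k data segments plus one XOR parity segment, from which any single lost segment can be rebuilt. This is only a toy analogue of the idea (real erasure codes such as Reed-Solomon tolerate multiple losses), not the ECFS implementation itself.

      from functools import reduce

      def xor_all(segments):
          """XOR a list of equal-length byte strings together."""
          return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), segments)

      def encode(segments):
          """Append one XOR parity segment to k data segments."""
          return segments + [xor_all(segments)]

      def recover(segments, lost_index):
          """Rebuild the single missing segment by XOR-ing all the others."""
          return xor_all([s for i, s in enumerate(segments) if i != lost_index])

      data = [b"AAAA", b"BBBB", b"CCCC"]    # k = 3 data segments
      stored = encode(data)                 # 4 segments, vs 6 for RAID-1 mirroring
      assert recover(stored, 1) == b"BBBB"  # segment 1 lost and rebuilt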

  7. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud

    PubMed Central

    Karimi, Kamran; Vize, Peter D.

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782

  8. Feasibility of interactive biking exercise system for telemanagement in elderly.

    PubMed

    Finkelstein, Joseph; Jeong, In Cheol

    2013-01-01

    Inexpensive cycling equipment is widely available for home exercise however its use is hampered by lack of tools supporting real-time monitoring of cycling exercise in elderly and coordination with a clinical care team. To address these barriers, we developed a low-cost mobile system aimed at facilitating safe and effective home-based cycling exercise. The system used a miniature wireless 3-axis accelerometer that transmitted the cycling acceleration data to a tablet PC that was integrated with a multi-component disease management system. An exercise dashboard was presented to a patient allowing real-time graphical visualization of exercise progress. The system was programmed to alert patients when exercise intensity exceeded the levels recommended by the patient care providers and to exchange information with a central server. The feasibility of the system was assessed by testing the accuracy of cycling speed monitoring and reliability of alerts generated by the system. Our results demonstrated high validity of the system both for upper and lower extremity exercise monitoring as well as reliable data transmission between home unit and central server.

  9. Privacy-preserving public auditing for data integrity in cloud

    NASA Astrophysics Data System (ADS)

    Shaik Saleem, M.; Murali, M.

    2018-04-01

    Cloud computing, which has attracted extensive attention from the research community and industry alike, offers a large pool of computing resources - storage, processing power, applications and services - through virtualized sharing. Cloud users are provisioned with these resources on demand. An outsourced file can easily be tampered with, since it is stored in the databases of a third-party service provider; because users retain no control over their data, its integrity is not guaranteed, and providing security assurance for user data has therefore become one of the primary concerns for cloud service providers. Cloud servers themselves are not accountable for data loss and do not provide such assurance. Remote data integrity checking (RDIC) allows a data owner to verify that a storage server is truly storing the owner's data faithfully. RDIC comprises a security model and ID-based RDIC, which is responsible for the security of each server and ensures the privacy of the cloud user's data against the third-party verifier. In general, clients could check the integrity of their cloud data themselves by running a two-party RDIC protocol; in that two-party scenario, however, a verification result produced by either the data holder or the cloud server may be considered biased. The public verifiability feature of RDIC instead gives every user the ability to verify whether the original data have been modified. To ensure the transparency of publicly verifiable RDIC protocols, a third-party auditor (TPA) is assumed to exist that has the knowledge and efficiency to carry out the verification.
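
    The core idea - letting a verifier check remotely stored data without fetching the data itself - can be pictured with a toy challenge-response exchange. The sketch below is a deliberately simplified stand-in (a plain hash over a fresh random challenge), not the ID-based or publicly verifiable constructions the paper discusses, which use public-key cryptography so that third parties can audit without shared secrets.

      import hashlib, hmac, os

      def server_prove(data: bytes, challenge: bytes) -> bytes:
          # The storage server must hold the actual bytes to answer.
          return hashlib.sha256(challenge + data).digest()

      # Real RDIC schemes let the owner keep only short verification
      # metadata; here the owner recomputes from a local copy for brevity.
      data = b"outsourced file contents"
      challenge = os.urandom(16)        # fresh per audit, prevents replay
      proof = server_prove(data, challenge)
      expected = hashlib.sha256(challenge + data).digest()
      print("integrity verified:", hmac.compare_digest(expected, proof))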

  10. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
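
    Because all communication is client-driven, a worker behind a firewall only ever makes outbound requests. A worker loop of this style might look like the following Python sketch; the endpoints and JSON fields are hypothetical illustrations (JobCenter itself is a Java application with its own protocol).

      import subprocess, time, requests

      SERVER = "http://jobcenter.example.org/api"   # hypothetical endpoint

      while True:
          # Outbound-only request: works behind firewalls, NAT, or a cloud VM.
          job = requests.get(f"{SERVER}/next_job",
                             params={"worker": "node-01"}).json()
          if not job:
              time.sleep(30)            # nothing to do; poll again later
              continue
          # Run the job's command line and capture its output.
          result = subprocess.run(job["command"], shell=True,
                                  capture_output=True, text=True)
          requests.post(f"{SERVER}/complete", json={
              "job_id": job["id"],
              "status": result.returncode,
              "stdout": result.stdout,
          })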

  11. The reliability of the Australasian Triage Scale: a meta-analysis

    PubMed Central

    Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir

    2015-01-01

    BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method of moments estimator. RESULTS: Six studies were finally included. The pooled coefficient for the ATS was a substantial 0.428 (95%CI 0.340–0.509). The rate of mis-triage was less than fifty percent. Agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
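
    The pooling procedure the authors describe - z-transforming reliability coefficients and combining them under a random-effects model - can be sketched in a few lines. The example below uses the standard Fisher transformation and the DerSimonian-Laird estimate of between-study variance; the coefficients and sample sizes are made up for illustration, not taken from the six included studies.

      import numpy as np

      r = np.array([0.35, 0.48, 0.42, 0.51, 0.39, 0.44])  # illustrative coefficients
      n = np.array([120, 200, 150, 90, 300, 180])         # illustrative sample sizes

      z = np.arctanh(r)          # Fisher z-transformation
      v = 1.0 / (n - 3)          # approximate within-study variance of z
      w = 1.0 / v                # fixed-effect weights

      z_fixed = np.sum(w * z) / np.sum(w)
      Q = np.sum(w * (z - z_fixed) ** 2)                  # heterogeneity statistic
      tau2 = max(0.0, (Q - (len(r) - 1)) /
                 (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DerSimonian-Laird

      w_star = 1.0 / (v + tau2)                           # random-effects weights
      z_pooled = np.sum(w_star * z) / np.sum(w_star)
      print("pooled coefficient:", np.tanh(z_pooled))     # back to the r scale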

  12. Modeling, Simulation and Analysis of Public Key Infrastructure

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

    Security is an essential part of network communication. The advances in cryptography have provided solutions to many of the network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure by using SimProcess and MATLAB software. The simulation covers everything from the top level all the way down to the computation needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.
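
    Since the simulation results were checked against queueing theory, it may help to recall what such a check looks like. The toy discrete-event sketch below treats a single secure web server as an M/M/1 queue and compares the observed mean time in system with the analytic value 1/(mu - lambda); it is a generic illustration, not the NASA SimProcess/MATLAB model.

      import random

      lam, mu, N = 4.0, 5.0, 200_000   # arrival rate, service rate, customers
      random.seed(1)

      t_arrival, server_free, total_time = 0.0, 0.0, 0.0
      for _ in range(N):
          t_arrival += random.expovariate(lam)          # Poisson arrivals
          start = max(t_arrival, server_free)           # wait if server is busy
          server_free = start + random.expovariate(mu)  # exponential service
          total_time += server_free - t_arrival         # time in system

      print("simulated mean time in system:", total_time / N)
      print("M/M/1 analytic value 1/(mu-lam):", 1.0 / (mu - lam))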

  13. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Radenski, Atanas; Follen, Gregory J. (Technical Monitor)

    2001-01-01

    The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to a better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever-changing pool of lower-end internet nodes.
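
    The paradigm itself is easy to make concrete. The sketch below shows the shape of a generic divide-and-conquer computation driven from a master that hands independent subproblems to a pool of workers; here the pool is local processes standing in for the satellite internet nodes the architecture envisions.

      from concurrent.futures import ProcessPoolExecutor

      def solve(chunk):
          """A worker solves one subproblem; here, a trivial sum."""
          return sum(chunk)

      def divide(data, parts):
          """The master splits the problem into independent subproblems."""
          step = max(1, len(data) // parts)
          return [data[i:i + step] for i in range(0, len(data), step)]

      if __name__ == "__main__":
          data = list(range(1_000_000))
          with ProcessPoolExecutor() as pool:       # stands in for remote nodes
              partial = pool.map(solve, divide(data, 8))
          print("combined result:", sum(partial))   # the master conquers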

  14. Current status, uncertainty and future needs in soil organic carbon monitoring.

    PubMed

    Jandl, Robert; Rodeghiero, Mirco; Martinez, Cristina; Cotrufo, M Francesca; Bampa, Francesca; van Wesemael, Bas; Harrison, Robert B; Guerrini, Iraê Amaral; Richter, Daniel Deb; Rustad, Lindsey; Lorenz, Klaus; Chabbi, Abad; Miglietta, Franco

    2014-01-15

    Increasing human demands on soil-derived ecosystem services require reliable data on global soil resources for sustainable development. The soil organic carbon (SOC) pool is a key indicator of soil quality as it affects essential biological, chemical and physical soil functions such as nutrient cycling, pesticide and water retention, and soil structure maintenance. However, information on the SOC pool and its temporal and spatial dynamics is unbalanced. Even in well-studied regions with a pronounced interest in environmental issues, information on soil carbon (C) is inconsistent. Several activities for the compilation of global soil C data are under way. However, different approaches to soil sampling and chemical analyses make even regional comparisons highly uncertain. Often, the procedures used so far have not allowed reliable estimation of the total SOC pool, partly because the available knowledge focuses on upper soil horizons that are not clearly defined, while the contribution of subsoil to SOC stocks has received less consideration. Even more difficult is quantifying SOC pool changes over time. SOC consists of variable amounts of labile and recalcitrant molecules of plant, microbial and animal origin that are often operationally defined. A comprehensively active soil expert community needs to agree on protocols for soil surveying and laboratory procedures to achieve reliable SOC pool estimates. Already established long-term ecological research sites, where SOC changes are quantified and the underlying mechanisms are investigated, are potentially the backbones of regional, national, and international SOC monitoring programs. © 2013.

  15. A Public-Key Based Authentication and Key Establishment Protocol Coupled with a Client Puzzle.

    ERIC Educational Resources Information Center

    Lee, M. C.; Fung, Chun-Kan

    2003-01-01

    Discusses network denial-of-service attacks which have become a security threat to the Internet community and suggests the need for reliable authentication protocols in client-server applications. Presents a public-key based authentication and key establishment protocol coupled with a client puzzle protocol and validates it through formal logic…
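
    Although the record above is only an abstract, the client-puzzle idea it refers to is standard and can be sketched directly: before expending resources on an expensive public-key handshake, the server makes the client solve a small proof-of-work, so that denial-of-service floods become costly for the attacker. The difficulty parameter and hash choice below are illustrative, not taken from the paper's protocol.

      import hashlib, os

      DIFFICULTY = 16        # leading zero bits required (illustrative)

      def solve(nonce: bytes) -> int:
          """Client: brute-force x so SHA-256(nonce || x) has zero top bits."""
          x = 0
          while True:
              digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
              if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                  return x
              x += 1

      def check(nonce: bytes, x: int) -> bool:
          """Server: verification costs a single cheap hash."""
          digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
          return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

      nonce = os.urandom(16)     # fresh per connection attempt
      answer = solve(nonce)      # costly for the client...
      assert check(nonce, answer)  # ...but trivial for the server to verify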

  16. Rocket Science for the Internet

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Rainfinity, a company resulting from the commercialization of Reliable Array of Independent Nodes (RAIN), produces the product, Rainwall. Rainwall runs a cluster of computer workstations, creating a distributed Internet gateway. When Rainwall detects a failure in software or hardware, traffic is shifted to a healthy gateway without interruptions to Internet service. It more evenly distributes workload across servers, providing less down time.

  17. BIPS: BIANA Interolog Prediction Server. A tool for protein-protein interaction inference.

    PubMed

    Garcia-Garcia, Javier; Schleker, Sylvia; Klein-Seetharaman, Judith; Oliva, Baldo

    2012-07-01

    Protein-protein interactions (PPIs) play a crucial role in biology, and high-throughput experiments have greatly increased the coverage of known interactions. Still, the identification of complete inter- and intraspecies interactomes is far from finished. Experimental data can be complemented by the prediction of PPIs within an organism or between two organisms based on the known interactions of the orthologous genes of other organisms (interologs). Here, we present the BIANA (Biologic Interactions and Network Analysis) Interolog Prediction Server (BIPS), which offers a web-based interface to facilitate PPI predictions based on interolog information. BIPS benefits from the capabilities of the BIANA framework to integrate several PPI-related databases. Additional metadata can be used to improve the reliability of the predicted interactions. Sensitivity and specificity of the server have been calculated using known PPIs from different interactomes using a leave-one-out approach. The specificity is between 72 and 98%, whereas sensitivity varies between 1 and 59%, depending on the sequence identity cut-off used to calculate similarities between sequences. BIPS is freely accessible at http://sbi.imim.es/BIPS.php.
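
    The interolog rule itself is simple to state in code: if proteins A and B interact in a source organism, and A' and B' are their respective orthologs in a target organism, then A'-B' is a predicted interaction. The sketch below implements only that core inference over toy identifiers; BIPS layers database integration, sequence-identity cut-offs and metadata scoring on top of it.

      # Known interactions in a source organism (toy identifiers).
      known_ppis = {("yA", "yB"), ("yB", "yC")}

      # Orthology map: source protein -> orthologs in the target organism.
      orthologs = {"yA": ["hA"], "yB": ["hB1", "hB2"], "yC": []}

      def predict_interologs(ppis, orth):
          """Transfer each interaction to every ortholog pair in the target."""
          predicted = set()
          for a, b in ppis:
              for a2 in orth.get(a, []):
                  for b2 in orth.get(b, []):
                      predicted.add(tuple(sorted((a2, b2))))
          return predicted

      print(predict_interologs(known_ppis, orthologs))
      # {('hA', 'hB1'), ('hA', 'hB2')} -- yB-yC is not transferred,
      # because yC has no ortholog in the target organism.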

  18. The CUAHSI Water Data Center: Enabling Data Publication, Discovery and Re-use

    NASA Astrophysics Data System (ADS)

    Seul, M.; Pollak, J.

    2014-12-01

    The CUAHSI Water Data Center (WDC) supports a standards-based, services-oriented architecture for time-series data and provides a separate service to publish spatial data layers as shapefiles. Two new services that the WDC offers are a cloud-based server (Cloud HydroServer) for publishing data and a web-based client for data discovery. The Cloud HydroServer greatly simplifies data publication by eliminating the need for scientists to set up an SQL Server database, a requirement that has proven to be a significant barrier, and ensures greater reliability and continuity of service. Uploaders have been developed to simplify the metadata documentation process. The web-based data client eliminates the need to install a client program and works across all computer operating systems. The services provided by the WDC are a foundation for big data use, re-use, and meta-analyses. Using data transmission standards enables far more effective data sharing and discovery; the standards used by the WDC are part of a global set of standards that should enable scientists to access an unprecedented amount of data to address larger-scale research questions than was previously possible. A central mission of the WDC is to ensure these services meet the needs of the water science community and are effective at advancing water science.

  19. SYNCSA--R tool for analysis of metacommunities based on functional traits and phylogeny of the community components.

    PubMed

    Debastiani, Vanderlei J; Pillar, Valério D

    2012-08-01

    SYNCSA is an R package for the analysis of metacommunities based on functional traits and phylogeny of the community components. It offers tools to calculate several matrix correlations that express trait-convergence assembly patterns, trait-divergence assembly patterns and phylogenetic signal in functional traits at the species pool level and at the metacommunity level. SYNCSA is a package for the R environment, under a GPL-2 open-source license and freely available on CRAN official web server for R (http://cran.r-project.org). vanderleidebastiani@yahoo.com.br.

  20. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  1. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of geosciences cyberinfrastructure that help researchers to advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to align its software, services, and data delivery mechanisms with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward:
    * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers);
    * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment, for consumption by anyone, on any device, from anywhere, at any time;
    * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools;
    * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect interactively data servers, Python scientific libraries, scripts, and workflows;
    * Exploring end-to-end modeling and prediction capabilities in the cloud;
    * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.

  2. [Reliability and validity of depression scales of Chinese version: a systematic review].

    PubMed

    Sun, X Y; Li, Y X; Yu, C Q; Li, L M

    2017-01-10

    Objective: To systematically review the reliability and validity of Chinese-version depression scales in adults in China, and to evaluate the psychometric properties of depression scales for different groups. Methods: Eligible studies published before 6 May 2016 were retrieved from the following databases: CNKI, Wanfang, PubMed and Embase. The HSROC model of diagnostic test accuracy (DTA) meta-analysis was used to calculate the pooled sensitivity and specificity of the PHQ-9. Results: A total of 44 papers evaluating the performance of depression scales were included. Results showed that the reliability and validity of the common depression scales were acceptable, including the Beck Depression Inventory (BDI), the Hamilton Depression Scale (HAMD), the Center for Epidemiologic Studies Depression scale (CES-D), the Patient Health Questionnaire (PHQ) and the Geriatric Depression Scale (GDS). The Cronbach's alpha coefficients of most tools were larger than 0.8, while the test-retest reliability and split-half reliability were larger than 0.7, indicating good internal consistency and stability. The criterion validity, convergent validity, discrimination validity and screening validity were acceptable, though different cut-off points were recommended by different studies. The pooled sensitivity of the 11 studies evaluating the PHQ-9 was 0.88 (95% CI: 0.85-0.91) while the pooled specificity was 0.89 (95% CI: 0.82-0.94), which demonstrated the applicability of the PHQ-9 in screening for depression. Conclusion: The reliability and validity of different Chinese-version depression scales are acceptable. The characteristics of different tools and the study population should be taken into consideration when choosing a specific scale.

  3. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    PubMed

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  4. Achieving Reliable Communication in Dynamic Emergency Responses

    PubMed Central

    Chipara, Octav; Plymoth, Anders N.; Liu, Fang; Huang, Ricky; Evans, Brian; Johansson, Per; Rao, Ramesh; Griswold, William G.

    2011-01-01

    Emergency responses require the coordination of first responders to assess the condition of victims, stabilize their condition, and transport them to hospitals based on the severity of their injuries. WIISARD is a system designed to facilitate the collection of medical information and its reliable dissemination during emergency responses. A key challenge in WIISARD is to deliver data with high reliability as first responders move and operate in a dynamic radio environment fraught with frequent network disconnections. The initial WIISARD system employed a client-server architecture and an ad-hoc routing protocol was used to exchange data. The system had low reliability when deployed during emergency drills. In this paper, we identify the underlying causes of unreliability and propose a novel peer-to-peer architecture that in combination with a gossip-based communication protocol achieves high reliability. Empirical studies show that compared to the initial WIISARD system, the redesigned system improves reliability by as much as 37% while reducing the number of transmitted packets by 23%. PMID:22195075
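
    The gossip mechanism credited with the reliability gain can be pictured with a small round-based simulation: each node periodically pushes any updates it holds to a few random peers, so information keeps spreading even when many point-to-point links fail. The fan-out, loss rate and node count below are arbitrary illustration values, not WIISARD parameters.

      import random

      random.seed(7)
      NODES, FANOUT, LOSS = 50, 3, 0.4     # illustrative parameters
      has_update = {n: False for n in range(NODES)}
      has_update[0] = True                 # one responder records new data

      rounds = 0
      while not all(has_update.values()):
          rounds += 1
          # Every informed node pushes the update to FANOUT random peers.
          for node, informed in list(has_update.items()):
              if not informed:
                  continue
              for peer in random.sample(range(NODES), FANOUT):
                  if random.random() > LOSS:   # message survives the lossy link
                      has_update[peer] = True

      print(f"all {NODES} nodes consistent after {rounds} gossip rounds")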

  5. Daily Planet Imagery: GIBS MODIS Products on ArcGIS Online

    NASA Astrophysics Data System (ADS)

    Plesea, L.

    2015-12-01

    The NASA EOSDIS Global Imagery Browse Services (GIBS) is rapidly becoming an invaluable GIS resource for the science community and for the public at large. Reliable, fast access to historical as well as near-real-time, georeferenced images forms a solid basis on which many innovative applications and projects can be built. Esri has recognized the value of this effort and is a GIBS user and collaborator. To enable the use of GIBS services within the ArcGIS ecosystem, Esri has built a GIBS reflector server at http://modis.arcgis.com, a server which offers the facilities of a time-enabled Mosaic Service on top of the GIBS-provided images. Currently the MODIS reflectance products are supported by this mosaic service; possibilities for handling other GIBS products are being explored. This reflector service is deployed on the Amazon Elastic Compute Cloud platform and is freely available to end users. Due to the excellent response time from GIBS, image tiles do not have to be stored by the Esri mosaic server: all needed data are retrieved directly from GIBS when needed, continuously reflecting the state of GIBS and greatly simplifying the maintenance of this service. The remote data access is achieved using the Geospatial Data Abstraction Library (GDAL) Tiled Web Map Server (TWMS) driver, and response latency is usually under one second, making it easy to interact with the data. The MODIS imagery has proven to be among the most popular on the ArcGIS Online platform, where it is frequently used to provide temporal context to maps or, by itself, to tell a compelling story.

  6. TIPMaP: a web server to establish transcript isoform profiles from reliable microarray probes.

    PubMed

    Chitturi, Neelima; Balagannavar, Govindkumar; Chandrashekar, Darshan S; Abinaya, Sadashivam; Srini, Vasan S; Acharya, Kshitish K

    2013-12-27

    Standard 3' Affymetrix gene expression arrays have contributed a significantly higher volume of existing gene expression data than other microarray platforms. These arrays were designed to identify differentially expressed genes, but not their alternatively spliced transcript forms. No resource can currently identify the expression patterns of specific mRNA forms using these microarray data, even though it is possible to do so. We report a web server for expression profiling of alternatively spliced transcripts using microarray data sets from 31 standard 3' Affymetrix arrays for human, mouse and rat species. The tool has been experimentally validated for mRNAs transcribed or not detected in a human disease condition (non-obstructive azoospermia, a male infertility condition). About 4000 gene expression datasets were downloaded from a public repository. 'Good probes' with complete coverage and identity to the latest reference transcript sequences were first identified. Using them, 'transcript-specific probe clusters' were derived for each platform and used to identify the expression status of possible transcripts. The web server can lead the user to datasets corresponding to specific tissues and conditions via identifiers of the microarray studies or hybridizations, keywords, official gene symbols or reference transcript identifiers. It can identify, in the tissues and conditions of interest, about 40% of known transcripts as 'transcribed', 'not-detected' or 'differentially regulated'. Corresponding additional information for probes, genes, transcripts and proteins can be viewed too. We identified the expression of transcripts in a specific clinical condition and validated a few of these transcripts by experiments (using reverse transcription followed by polymerase chain reaction). The experimental observations agreed with the web server results more often than they contradicted them. The tool is accessible at http://resource.ibab.ac.in/TIPMaP. The newly developed online tool forms a reliable means for identification of alternatively spliced transcript isoforms that may be differentially expressed in various tissues, cell types or physiological conditions. Thus, by making better use of existing data, TIPMaP avoids dependence on precious tissue samples in experiments whose goal is to establish expression profiles of alternative splice forms--at least in some cases.

  7. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with data throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
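
    Since the framework is built on the Advanced Message Queuing Protocol, the worker side reduces to consuming tasks from a queue. The minimal Python/pika sketch below shows that shape, with a hypothetical broker host, queue name and task payload; the actual Server-Manager-Worker protocol, reassignment logic and reduction pipeline are of course richer.

      import pika

      connection = pika.BlockingConnection(
          pika.ConnectionParameters(host="amqp.example.org"))  # hypothetical broker
      channel = connection.channel()
      channel.queue_declare(queue="reduction_tasks", durable=True)

      def on_task(ch, method, properties, body):
          print("calibrating exposure:", body.decode())   # stand-in for real work
          ch.basic_ack(delivery_tag=method.delivery_tag)  # ack so it is not re-queued

      channel.basic_qos(prefetch_count=1)   # one task at a time per worker
      channel.basic_consume(queue="reduction_tasks", on_message_callback=on_task)
      channel.start_consuming()             # block, processing tasks as they arrive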

  8. Responsiveness to change and reliability of measurement of radiographic joint space width in osteoarthritis of the knee: a systematic review.

    PubMed

    Reichmann, W M; Maillefert, J F; Hunter, D J; Katz, J N; Conaghan, P G; Losina, E

    2011-05-01

    The goal of this systematic review was to report the responsiveness to change and reliability of conventional radiographic joint space width (JSW) measurement. We searched the PubMed and Embase databases using the following search criteria: [osteoarthritis (OA) (MeSH)] AND (knee) AND (X-ray OR radiography OR diagnostic imaging OR radiology OR disease progression) AND (joint space OR JSW or disease progression). We assessed responsiveness by calculating the standardized response mean (SRM). We assessed reliability using intra- and inter-reader intra-class correlation (ICC) and coefficient of variation (CV). Random-effects models were used to pool results from multiple studies. Results were stratified by study duration, design, techniques of obtaining radiographs, and measurement method. We identified 998 articles using the search terms. Of these, 32 articles (43 estimates) reported data on responsiveness of JSW measurement and 24 articles (50 estimates) reported data on measures of reliability. The overall pooled SRM was 0.33 [95% confidence interval (CI): 0.26, 0.41]. Responsiveness of change in JSW measurement improved substantially in studies of greater than 2 years duration (0.57). Further stratifying studies of greater than 2 years duration, radiographs obtained with the knee in a flexed position yielded an SRM of 0.71. Pooled intra-reader ICC was estimated at 0.97 (95% CI: 0.92, 1.00) and the intra-reader CV at 3.0% (95% CI: 2.0%, 4.0%). Pooled inter-reader ICC was estimated at 0.93 (95% CI: 0.86, 0.99) and the inter-reader CV at 3.4% (95% CI: 1.3%, 5.5%). Measurement of JSW obtained from radiographs in persons with knee OA is reliable. These data will be useful to clinicians who are planning RCTs where the change in minimum JSW is the outcome of interest. Copyright © 2011 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
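
    For readers unfamiliar with the responsiveness metric used throughout, the standardized response mean is simply the mean within-person change divided by the standard deviation of that change. A tiny illustration with made-up JSW measurements:

      import numpy as np

      baseline = np.array([4.1, 3.8, 4.5, 3.9, 4.2])   # made-up JSW in mm
      followup = np.array([3.7, 3.6, 4.1, 3.4, 4.0])

      change = followup - baseline
      srm = change.mean() / change.std(ddof=1)   # standardized response mean
      print(f"SRM = {srm:.2f}")                  # sign reflects joint space narrowing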

  9. ViPAR: a software platform for the Virtual Pooling and Analysis of Research Data.

    PubMed

    Carter, Kim W; Francis, Richard W; Carter, K W; Francis, R W; Bresnahan, M; Gissler, M; Grønborg, T K; Gross, R; Gunnes, N; Hammond, G; Hornig, M; Hultman, C M; Huttunen, J; Langridge, A; Leonard, H; Newman, S; Parner, E T; Petersson, G; Reichenberg, A; Sandin, S; Schendel, D E; Schalkwyk, L; Sourander, A; Steadman, C; Stoltenberg, C; Suominen, A; Surén, P; Susser, E; Sylvester Vethanayagam, A; Yusof, Z

    2016-04-01

    Research studies exploring the determinants of disease require sufficient statistical power to detect meaningful effects. Sample size is often increased through centralized pooling of disparately located datasets, though ethical, privacy and data ownership issues can often hamper this process. Methods that facilitate the sharing of research data that are sympathetic with these issues and which allow flexible and detailed statistical analyses are therefore in critical need. We have created a software platform for the Virtual Pooling and Analysis of Research data (ViPAR), which employs free and open source methods to provide researchers with a web-based platform to analyse datasets housed in disparate locations. Database federation permits controlled access to remotely located datasets from a central location. The Secure Shell protocol allows data to be securely exchanged between devices over an insecure network. ViPAR combines these free technologies into a solution that facilitates 'virtual pooling' where data can be temporarily pooled into computer memory and made available for analysis without the need for permanent central storage. Within the ViPAR infrastructure, remote sites manage their own harmonized research dataset in a database hosted at their site, while a central server hosts the data federation component and a secure analysis portal. When an analysis is initiated, requested data are retrieved from each remote site and virtually pooled at the central site. The data are then analysed by statistical software and, on completion, results of the analysis are returned to the user and the virtually pooled data are removed from memory. ViPAR is a secure, flexible and powerful analysis platform built on open source technology that is currently in use by large international consortia, and is made publicly available at [http://bioinformatics.childhealthresearch.org.au/software/vipar/]. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
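
    The "virtual pooling" step - fetch, analyse in memory, discard - can be pictured with a small pandas sketch. The URLs, column names and analysis below are placeholders; ViPAR itself adds database federation, Secure Shell transport and access control around this pattern.

      import pandas as pd

      # Hypothetical harmonized datasets exposed by two remote sites.
      site_urls = [
          "https://site-a.example.org/cohort.csv",
          "https://site-b.example.org/cohort.csv",
      ]

      # Retrieve and pool the records in memory only for this analysis.
      pooled = pd.concat((pd.read_csv(u) for u in site_urls), ignore_index=True)

      print(pooled.groupby("exposure")["outcome"].mean())  # placeholder analysis

      del pooled   # the virtually pooled data vanish once results are returned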

  10. Probing Reliability of Transport Phenomena Based Heat Transfer and Fluid Flow Analysis in Autogeneous Fusion Welding Process

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2010-09-01

    The transport phenomena based heat transfer and fluid flow calculations in weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in weld pool are some of these parameters, values of which are rarely known and difficult to assign a priori based on the scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real number based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.

  11. Building a Gateway for the CD-ROM Network: A Step toward the Virtual Library with the Virtual Microsystems V-Server.

    ERIC Educational Resources Information Center

    Sylvia, Margaret

    1993-01-01

    Describes one college library's experience with a gateway for dial-in access to its CD-ROM network to increase access to automated index searching for students off-campus. Hardware and software choices are discussed in terms of access, reliability, affordability, and ease of use. Installation problems are discussed, and an appendix lists product…

  12. Measurement and Data Transmission Validity of a Multi-Biosensor System for Real-Time Remote Exercise Monitoring Among Cardiac Patients.

    PubMed

    Rawstorn, Jonathan C; Gant, Nicholas; Warren, Ian; Doughty, Robert Neil; Lever, Nigel; Poppe, Katrina K; Maddison, Ralph

    2015-03-20

    Remote telemonitoring holds great potential to augment management of patients with coronary heart disease (CHD) and atrial fibrillation (AF) by enabling regular physiological monitoring during physical activity. Remote physiological monitoring may improve home and community exercise-based cardiac rehabilitation (exCR) programs and could improve assessment of the impact and management of pharmacological interventions for heart rate control in individuals with AF. Our aim was to evaluate the measurement validity and data transmission reliability of a remote telemonitoring system comprising a wireless multi-parameter physiological sensor, custom mobile app, and middleware platform, among individuals in sinus rhythm and AF. Participants in sinus rhythm and with AF undertook simulated daily activities, low, moderate, and/or high intensity exercise. Remote monitoring system heart rate and respiratory rate were compared to reference measures (12-lead ECG and indirect calorimeter). Wireless data transmission loss was calculated between the sensor, mobile app, and remote Internet server. Median heart rate (-0.30 to 1.10 b∙min^-1) and respiratory rate (-1.25 to 0.39 br∙min^-1) measurement biases were small, yet statistically significant (all P≤.003) due to the large number of observations. Measurement reliability was generally excellent (rho=.87-.97, all P<.001; intraclass correlation coefficient [ICC]=.94-.98, all P<.001; coefficient of variation [CV]=2.24-7.94%), although respiratory rate measurement reliability was poor among AF participants (rho=.43, P<.001; ICC=.55, P<.001; CV=16.61%). Data loss was minimal (<5%) when all system components were active; however, instability of the network hosting the remote data capture server resulted in data loss at the remote Internet server during some trials. System validity was sufficient for remote monitoring of heart and respiratory rates across a range of exercise intensities. Remote exercise monitoring has potential to augment current exCR and heart rate control management approaches by enabling the provision of individually tailored care to individuals outside traditional clinical environments. ©Jonathan C Rawstorn, Nicholas Gant, Ian Warren, Robert Neil Doughty, Nigel Lever, Katrina K Poppe, Ralph Maddison. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 20.03.2015.

  13. PIQMIe: a web server for semi-quantitative proteomics data management and analysis

    PubMed Central

    Kuzniar, Arnold; Kanaar, Roland

    2014-01-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. PMID:24861615
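
    Because PIQMIe exposes the linked identifications and quantitations as a light-weight relational database, "user-driven queries" amount to ordinary SQL. A hedged sketch of the kind of query a user might run, against hypothetical table and column names (the server's real schema is documented on its website):

      import sqlite3

      con = sqlite3.connect("piqmie_results.sqlite")  # downloaded results database

      # Hypothetical schema: protein quantitations with log2 ratios per experiment.
      rows = con.execute("""
          SELECT protein_id, log2_ratio
          FROM   protein_quant
          WHERE  experiment = ?
            AND  ABS(log2_ratio) > 1.0              -- at least two-fold change
          ORDER  BY ABS(log2_ratio) DESC
      """, ("SILAC_rep1",)).fetchall()

      for protein_id, ratio in rows[:10]:
          print(protein_id, round(ratio, 2))
      con.close()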

  14. J2ME implementation of system for storing and accessing of sensitive data on patient's mobile device

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.; Wielgórski, Radosław; Nowik, Marcin

    2011-10-01

    This paper presents a system that allows a patient's mobile phone or PDA to be used for storing biomedical data, which may then be used by medical staff during consultation or intervention. The presented solution aims to provide both reliable protection of sensitive patient data and easy access to information for authorized medical staff. In the presented system, data are stored in encrypted form, and the encryption key is available only to authorized persons. The central authentication server verifies the current access rights of the person trying to obtain the information before providing him or her with the key needed to access the patient's data. The key provided by the server is valid only for the particular device, which minimizes the risk of its misuse. For rare situations when no connection to the authentication server is available (e.g., intervention in the mountains or a rural area), the system provides an additional "emergency" method to access the encryption key in a controlled, registered way. The system has been implemented in the Java language and tested in the simulated environment provided by the Sun Java Wireless Toolkit for CLDC.
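
    A rough Python analogue of the access flow (the original system is J2ME): fetch a device-bound key from the central authentication server, then decrypt the locally stored record. The server URL, endpoint, parameters, and the use of Fernet symmetric encryption are illustrative assumptions, not the paper's implementation:

```python
import requests
from cryptography.fernet import Fernet

def read_patient_record(device_id: str, staff_token: str, path: str) -> bytes:
    # Ask the central authentication server for a key valid only on this
    # device (hypothetical endpoint and parameters)
    resp = requests.get(
        "https://auth.example.org/key",
        params={"device": device_id},
        headers={"Authorization": f"Bearer {staff_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    key = resp.content  # 32-byte urlsafe-base64 key expected by Fernet

    with open(path, "rb") as f:
        ciphertext = f.read()
    return Fernet(key).decrypt(ciphertext)  # decrypted biomedical record
```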

  15. PIQMIe: a web server for semi-quantitative proteomics data management and analysis.

    PubMed

    Kuzniar, Arnold; Kanaar, Roland

    2014-07-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Network time synchronization servers at the US Naval Observatory

    NASA Technical Reports Server (NTRS)

    Schmidt, Richard E.

    1995-01-01

    Responding to an increased demand for reliable, accurate time on the Internet and Milnet, the U.S. Naval Observatory Time Service has established the network time servers tick.usno.navy.mil and tock.usno.navy.mil. The system clocks of these HP9000/747i industrial workstations are synchronized to within a few tens of microseconds of USNO Master Clock 2 using VMEbus IRIG-B interfaces. Redundant time code is available from a VMEbus GPS receiver. UTC(USNO) is provided over the network via a number of protocols, including the Network Time Protocol (NTP) (DARPA Network Working Group Report RFC-1305), the Daytime Protocol (RFC-867), and the Time Protocol (RFC-868). Access to USNO network time services is presently open and unrestricted. An overview of USNO time services and results of LAN and WAN time synchronization tests will be presented.
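
    For illustration, a minimal RFC-868 Time protocol client: a TCP connection to port 37 returns a 4-byte big-endian count of seconds since 1900-01-01 UTC. The host below is taken from the abstract, but current access policies may differ, so treat it as a placeholder:

```python
import socket
import struct
from datetime import datetime, timedelta, timezone

def rfc868_time(host: str, port: int = 37) -> datetime:
    with socket.create_connection((host, port), timeout=5) as sock:
        data = sock.recv(4)                 # 32-bit seconds since 1900
    (seconds,) = struct.unpack("!I", data)  # big-endian unsigned int
    epoch_1900 = datetime(1900, 1, 1, tzinfo=timezone.utc)
    return epoch_1900 + timedelta(seconds=seconds)

print(rfc868_time("tick.usno.navy.mil"))  # host named in the abstract
```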

  17. High-speed network for delivery of education-on-demand

    NASA Astrophysics Data System (ADS)

    Cordero, Carlos; Harris, Dale; Hsieh, Jeff

    1996-03-01

    A project to investigate the feasibility of delivering on-demand distance education to the desktop, known as the Asynchronous Distance Education ProjecT (ADEPT), is presently being carried out. A set of Stanford engineering classes is digitized on PC, Macintosh, and UNIX platforms, and is made available on servers. Students on campus and in industry may then access class material on these servers via local and metropolitan area networks. Students can download class video and audio, encoded in QuickTime™ and Show-Me TV™ formats, via file-transfer protocol or the World Wide Web. Alternatively, they may stream a vector-quantized version of the class directly from a server for real-time playback. Students may also download PostScript™ and Adobe Acrobat™ versions of class notes. Off-campus students may connect to ADEPT servers via the Internet, the Silicon Valley Test Track (SVTT), or the Bay-Area Gigabit Network (BAGNet). The SVTT and BAGNet are high-speed metropolitan-area networks, spanning the Bay Area, which provide IP access over asynchronous transfer mode (ATM). Student interaction is encouraged through news groups, electronic mailing lists, and an ADEPT home page. Issues related to having multiple platforms and interoperability are examined in this paper. The ramifications of providing a reliable service are discussed. System performance and the parameters that affect it are then described. Finally, future work on expanding ATM access, real-time delivery of classes, and enhanced student interaction is described.

  18. Video streaming technologies using ActiveX and LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2015-06-01

    The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data, being incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few possibilities for video streaming, and the methods it does offer are usually not high performance, but it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW, and to capture the data provided by it for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the results of the image processing.

  19. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
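
    A small sketch of the underlying binomial calculation: a one-sided lower confidence bound on probability of detection (POD), given s detections in n inspections. This uses the Clopper-Pearson form as one standard approach; it illustrates the statistic, not the report's original program:

```python
from scipy.stats import beta

def pod_lower_bound(detected: int, trials: int, confidence: float = 0.95) -> float:
    """One-sided Clopper-Pearson lower bound on probability of detection."""
    if detected == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detected, trials - detected + 1)

# e.g. 28 of 30 flaws found: POD demonstrated at 95% confidence
print(f"{pod_lower_bound(28, 30, 0.95):.3f}")
```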

  20. Quality and rigor of the concept mapping methodology: a pooled study analysis.

    PubMed

    Rosas, Scott R; Kane, Mary

    2012-05-01

    The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative characteristics and estimates of quality and rigor that may guide future studies are lacking. To address this gap, we conducted a pooled analysis of 69 concept mapping studies to describe characteristics across study phases, generate specific indicators of validity and reliability, and examine the relationship between select study characteristics and quality indicators. Individual study characteristics and estimates were pooled and quantitatively summarized, describing the distribution, variation and parameters for each. In addition, variation in concept mapping data collection in relation to characteristics and estimates was examined. Overall, results suggest concept mapping yields strong internal representational validity and very strong sorting and rating reliability estimates. Validity and reliability were consistently high despite variation in participation and task completion percentages across data collection modes. The implications of these findings as a practical reference for assessing the quality and rigor of future concept mapping studies are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. GEOMAGIA50: An archeointensity database with PHP and MySQL

    NASA Astrophysics Data System (ADS)

    Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.

    2008-04-01

    The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies. The MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This technical brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
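
    A hedged sketch of how such a PHP/MySQL design can also be queried programmatically; the table and column names below are invented for illustration and do not reflect the actual GEOMAGIA50 schema:

```python
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="reader", password="secret", database="geomagia"
)
cur = conn.cursor()
# Hypothetical schema: intensity determinations filtered by age and rank
cur.execute(
    "SELECT site, age_bp, intensity_ut, rank "
    "FROM determinations WHERE age_bp <= %s AND rank >= %s",
    (50000, 3),
)
for row in cur.fetchall():
    print(row)
conn.close()
```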

  2. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
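
    As a simplified illustration of the spare-reliability idea (deliberately ignoring the multi-state capacities and time thresholds handled by the actual algorithm): with k disjoint minimal paths that fail independently, the probability that at least one path delivers the data is one minus the product of the individual failure probabilities:

```python
from math import prod

def spare_reliability(path_success_probs: list[float]) -> float:
    """P(at least one of k disjoint, independent minimal paths succeeds)."""
    return 1.0 - prod(1.0 - p for p in path_success_probs)

# Main path 0.92, spare path 0.85 -> combined reliability
print(f"{spare_reliability([0.92, 0.85]):.4f}")  # 0.9880
```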

  3. Detrital carbon pools in temperate forests: magnitude and potential for landscape-scale assessment

    Treesearch

    John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan

    2009-01-01

    Reliably estimating carbon storage and cycling in detrital biomass is an obstacle to carbon accounting. We examined carbon pools and fluxes in three small temperate forest landscapes to assess the magnitude of carbon stored in detrital biomass and determine whether detrital carbon storage is related to stand structural properties (leaf area, aboveground biomass,...

  4. Measurement of patient safety: a systematic review of the reliability and validity of adverse event detection with record review

    PubMed Central

    Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Vincent, Charles; van Gurp, Petra J; de Vet, Henrica C W; Wollersheim, Hub

    2016-01-01

    Objectives Record review is the most used method to quantify patient safety. We systematically reviewed the reliability and validity of adverse event detection with record review. Design A systematic review of the literature. Methods We searched PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Library from their inception through February 2015. We included all studies that aimed to describe the reliability and/or validity of record review. Two reviewers conducted data extraction. We pooled kappa (κ) values and analysed the differences in subgroups according to number of reviewers, reviewer experience and training level, adjusted for the prevalence of adverse events. Results In 25 studies, the psychometric data of the Global Trigger Tool (GTT) and the Harvard Medical Practice Study (HMPS) were reported and 24 studies were included for statistical pooling. The inter-rater reliability of the GTT and HMPS showed a pooled κ of 0.65 and 0.55, respectively. The inter-rater agreement was statistically significantly higher when the group of reviewers within a study consisted of a maximum of five reviewers. We found no studies reporting on the validity of the GTT and HMPS. Conclusions The reliability of record review is moderate to substantial and improved when a small group of reviewers carried out record review. The validity of the record review method has never been evaluated, while clinical data registries, autopsy or direct observations of patient care are potential reference methods that can be used to test concurrent validity. PMID:27550650
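
    A toy sketch of inverse-variance pooling of per-study κ values. This is a fixed-effect simplification for illustration; the review itself used multilevel random effects models adjusted for adverse event prevalence, and the study values below are made up:

```python
import numpy as np

# Illustrative per-study kappas and their standard errors
kappas = np.array([0.61, 0.70, 0.55, 0.68])
ses = np.array([0.05, 0.08, 0.06, 0.07])

weights = 1.0 / ses**2                      # inverse-variance weights
pooled = np.sum(weights * kappas) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled kappa = {pooled:.2f} (SE {pooled_se:.3f})")
```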

  5. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services and represent a critical national resource for operational and research communities affected by climate, weather and water. NOMADS is now delivering high availability services as part of NOAA’s official real time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, so that collections of data in disparate places can be accessed simultaneously, with results processed on servers and clients to produce the needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. We demonstrate how users can use NOMADS services to select the values of Ensemble model runs over the ith Ensemble component, (forecast) time, vertical levels, global horizontal location, and variable: virtually a 6-dimensional data cube of access across the Internet. The example application, called the “Ensemble Probability Tool”, makes probability predictions of user-defined weather events that can be used in remote areas for weather-vulnerable circumstances. An application to access data for a verification pilot study, a collaboration with the World Bank, is shown in detail in a companion paper (U06) and is an example of the high value, usability and relevance of NCEP products and service capability over a wide spectrum of user and partner needs.
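
    A sketch of the client-side access pattern described above: subsetting a remote model field through OPeNDAP so that only the needed slice crosses the network. The dataset URL and variable/dimension names are placeholders (real NOMADS catalogs list the available endpoints), and an OPeNDAP-capable backend such as netCDF4 or pydap is assumed:

```python
import xarray as xr

# Placeholder OPeNDAP endpoint; consult the NOMADS catalog for real URLs
url = "https://nomads.example.gov/dods/gfs/gfs_latest"

ds = xr.open_dataset(url)  # lazy: only metadata is fetched here
# Subset by level and area before loading, so only this slice is transmitted
subset = ds["tmpprs"].sel(lev=500.0, lat=slice(20, 60), lon=slice(230, 300))
print(float(subset.mean()))
```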

  6. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
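
    The parallel-redundancy calculation at the heart of this analysis can be sketched in a few lines, assuming identical, independently failing components (both simplifications; the paper's MATLAB program and dynamic programming step go further):

```python
def parallel_reliability(r: float, n: int) -> float:
    """Reliability of n identical components in parallel."""
    return 1.0 - (1.0 - r) ** n

def components_needed(r: float, target: float) -> int:
    """Smallest n whose parallel reliability meets the target."""
    n = 1
    while parallel_reliability(r, n) < target:
        n += 1
    return n

# e.g. 0.99-reliable components reaching a 0.99999 availability target
n = components_needed(0.99, 0.99999)
print(n, parallel_reliability(0.99, n))  # 3 components suffice
```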

  7. Assuring Software Reliability

    DTIC Science & Technology

    2014-08-01

    technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. 1.3 Security Example...that took three high-voltage lines out of service and a software failure (a race condition) that disabled the computing service that notified the...service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had

  8. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.

  9. Assembly reliability of CSPs with various chip sizes by accelerated thermal and mechanical cycling test

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2000-01-01

    A JPL-led chip scale package (CSP) Consortium, composed of team members representing government agencies and private companies, recently joined together to pool in-kind resources for advancing the quality and reliability of chip scale packages (CSPs) for a variety of projects.

  10. Radioisotope Power System Pool Concept

    NASA Technical Reports Server (NTRS)

    Rusick, Jeffrey J.; Bolotin, Gary S.

    2015-01-01

    Advanced Radioisotope Power Systems (RPS) for NASA deep space science missions have historically used static thermoelectric-based designs because they are highly reliable, and their radioisotope heat sources can be passively cooled throughout the mission life cycle. Recently, a significant effort to develop a dynamic RPS, the Advanced Stirling Radioisotope Generator (ASRG), was conducted by NASA and the Department of Energy, because Stirling-based designs offer energy conversion efficiencies four times higher than heritage thermoelectric designs, and the higher efficiency would proportionately reduce the amount of radioisotope fuel needed for the same power output. However, the long-term reliability of a Stirling-based design is a concern compared to thermoelectric designs, because for certain Stirling system architectures the radioisotope heat sources must be actively cooled via the dynamic operation of Stirling converters throughout the mission life cycle. To address this reliability concern, a new dynamic Stirling cycle RPS architecture is proposed, called the RPS Pool Concept.

  11. The problem of deriving the field-induced thermal emission in Poole-Frenkel theories

    NASA Astrophysics Data System (ADS)

    Ongaro, R.; Pillonnet, A.

    1992-10-01

    A discussion is made of the legitimacy of implementing the usual model of field-assisted release of electrons over the lowered potential barrier of donors. It is stressed that no reliable interpretation is available for the usual modelling of wells on which Poole-Frenkel (PF) derivations are established. This is so because there does not seem to be any reliable way of implanting a Coulomb potential well in the gap of a material. In an attempt to bridge the gap between the classical potential-energy approaches and the total-energy approach of Mahapatra and Roy, a Bohr-type model of wells is proposed. In addition, a brief review of quantum treatments of electronic transport in materials is presented, in order to see if more reliable ways of approaching the PF effect can be derived on indisputable bases. Finally, it is concluded that, at present, the PF effect can be established safely neither theoretically nor experimentally.
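
    For reference, the classical field-assisted barrier lowering that these derivations rest on (and whose well model the paper disputes) is usually written in the standard textbook form:

```latex
% Classical Poole-Frenkel barrier lowering and emission current
\Delta\phi_{\mathrm{PF}} = \sqrt{\frac{qE}{\pi\varepsilon}}, \qquad
J \propto E \,\exp\!\left(-\frac{q\bigl(\phi_B - \sqrt{qE/\pi\varepsilon}\bigr)}{k_B T}\right)
```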

  12. Closeout Report ARRA supplement to DE-FG02-08ER41546, 03/15/2010 to 03/14/2011 - Advanced Transfer Map Methods for the Description of Particle Beam Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berz, Martin; Makino, Kyoko

    The ARRA funds were utilized to acquire a cluster of high performance computers, consisting of one Altus 2804 Server based on a Quad AMD Opteron 6174 12C with 4 2.2 GHz nodes of 12 cores each, resulting in 48 directly usable cores, as well as a Relion 1751 Server using an Intel Xeon X5677 consisting of 4 3.46 GHz cores supporting 8 threads. Both systems run CentOS, a Linux distribution designed for use without need of frequent updates, which greatly enhances their reliability. The systems are used to operate our COSY INFINITY environment, which supports MPI parallelization. The units arrived at MSU in September 2010 and were taken into operation shortly thereafter.

  13. Using RSAT to scan genome sequences for transcription factor binding sites and cis-regulatory modules.

    PubMed

    Turatsinze, Jean-Valery; Thomas-Chollier, Morgane; Defrance, Matthieu; van Helden, Jacques

    2008-01-01

    This protocol shows how to detect putative cis-regulatory elements and regions enriched in such elements with the regulatory sequence analysis tools (RSAT) web server (http://rsat.ulb.ac.be/rsat/). The approach applies to known transcription factors whose binding specificity is represented by position-specific scoring matrices, using the program matrix-scan. The detection of individual binding sites is known to return many false predictions. However, results can be strongly improved by estimating P-values and by searching for combinations of sites (homotypic and heterotypic models). We illustrate the detection of sites and enriched regions with a study case, the upstream sequence of the Drosophila melanogaster gene even-skipped. This protocol is also tested on random control sequences to evaluate the reliability of the predictions. Each task requires a few minutes of computation time on the server. The complete protocol can be executed in about one hour.
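
    The core of a matrix scan, sliding a position-specific scoring matrix along a sequence and scoring each window, can be sketched as a toy log-odds scan against a uniform background. This illustrates the technique only and is not RSAT's actual scoring code:

```python
import math

# Toy 3-column PSSM: nucleotide frequencies per motif position
pssm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
BACKGROUND = 0.25  # uniform background model

def window_score(window: str) -> float:
    """Log-odds score of one window under the motif vs. background."""
    return sum(math.log(pssm[i][base] / BACKGROUND) for i, base in enumerate(window))

sequence = "TTAGCAGCAT"
w = len(pssm)
for pos in range(len(sequence) - w + 1):
    site = sequence[pos:pos + w]
    print(pos, site, round(window_score(site), 2))
```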

  14. Assessing the validity and reliability of the Pool Activity Level (PAL) Checklist for use with older people with dementia.

    PubMed

    Wenborn, Jennifer; Challis, David; Pool, Jackie; Burgess, Jane; Elliott, Nicola; Orrell, Martin

    2008-03-01

    Activity is key to maintaining physical and mental health and well-being. However, as dementia affects the ability to engage in activity, care-givers can find it difficult to provide appropriate activities. The Pool Activity Level (PAL) Checklist guides the selection of appropriate, personally meaningful activities. The aim of this study was to assess the reliability and validity of the PAL Checklist when used with older people with dementia. A postal questionnaire sent to activity providers assessed content validity. Validity and reliability were measured in a sample of 60 older people with dementia. The questionnaire response rate was 83% (102/122). Most respondents felt no important items were missing. Seven of the nine activities were ranked as 'very important' or 'essential' by at least 77% of the sample, indicating very good content validity. Correlation with measures of cognition, severity of dementia and activity performance demonstrated strong concurrent validity. Inter-item correlation indicated strong construct validity. Cronbach's alpha coefficient measured internal consistency as excellent (0.95). All items achieved acceptable test-retest reliability, and the majority demonstrated acceptable inter-rater reliability. We conclude that the PAL Checklist demonstrates adequate validity and reliability when used with older people with dementia and appears a useful tool for a variety of care settings.

  15. Multisite Reliability of Cognitive BOLD Data

    PubMed Central

    Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.

    2010-01-01

    Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could offset these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915

  16. LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays

    PubMed Central

    2011-01-01

    Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655
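
    A sketch of the kind of curve fitting the tool automates: fitting a four-parameter logistic to percent neutralization versus serum dilution and reading off the 50% titer. The data values and the use of scipy are illustrative assumptions, not LabKey's implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic neutralization curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

dilutions = np.array([20, 60, 180, 540, 1620, 4860], dtype=float)
neutralization = np.array([95.0, 88.0, 70.0, 40.0, 15.0, 5.0])  # percent

params, _ = curve_fit(four_pl, dilutions, neutralization,
                      p0=[0.0, 100.0, 500.0, 1.0], maxfev=10000)
print(f"estimated 50% neutralization titer ~ 1:{params[2]:.0f}")
```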

  17. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward-error correction, interleaving and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to automatic repeat request (ARQ) methods to implement this transport service are proposed, which we term 'leaky ARQ'. For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited to improve traffic capacity by scheduling retransmissions.

  18. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  19. The RING 2.0 web server for high quality residue interaction networks.

    PubMed

    Piovesan, Damiano; Minervini, Giovanni; Tosatto, Silvio C E

    2016-07-08

    Residue interaction networks (RINs) are an alternative way of representing protein structures where nodes are residues and arcs are physico-chemical interactions. RINs have been extensively and successfully used for analysing mutation effects, protein folding, domain-domain communication and catalytic activity. Here we present RING 2.0, a new version of the RING software for the identification of covalent and non-covalent bonds in protein structures, including π-π stacking and π-cation interactions. RING 2.0 is extremely fast and generates both intra- and inter-chain interactions, including solvent and ligand atoms. The generated networks are very accurate and reliable thanks to a complex empirical re-parameterization of distance thresholds performed on the entire Protein Data Bank. By default, RING output is generated with optimal parameters, but the web server provides an exhaustive interface to customize the calculation. The network can be visualized directly in the browser or in Cytoscape. Alternatively, the RING-Viz script for Pymol allows visualizing the interactions at atomic level in the structure. The web server and RING-Viz, together with an extensive help and tutorial, are available from URL: http://protein.bio.unipd.it/ring. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. CrossVit: enhancing canopy monitoring management practices in viticulture.

    PubMed

    Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia

    2013-06-13

    A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity and solar radiation data to support vineyard management practices. The WSN consists of various levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing and graphic rendering; the Nodes level is based on a network of peripheral nodes consisting of a MDA300 sensor board and Iris module and equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes by ZigBee, and long-range GSM/GPRS links between gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard: solar radiation, air temperature and air humidity, detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and system reliability, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions and low power consumption.

  1. CrossVit: Enhancing Canopy Monitoring Management Practices in Viticulture

    PubMed Central

    Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia

    2013-01-01

    A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity and solar radiation data to support vineyard management practices. The WSN consists of various levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing and graphic rendering; the Nodes level is based on a network of peripheral nodes consisting of a MDA300 sensor board and Iris module and equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes by ZigBee, and long-range GSM/GPRS links between gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard: solar radiation, air temperature and air humidity, detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and system reliability, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions and low power consumption. PMID:23765273

  2. Reducing Time to Science: Unidata and JupyterHub Technology Using the Jetstream Cloud

    NASA Astrophysics Data System (ADS)

    Chastang, J.; Signell, R. P.; Fischer, J. L.

    2017-12-01

    Cloud computing can accelerate scientific workflows, discovery, and collaborations by reducing research and data friction. We describe the deployment of Unidata and JupyterHub technologies on the NSF-funded XSEDE Jetstream cloud. With the aid of virtual machines and Docker technology, we deploy a Unidata JupyterHub server co-located with a Local Data Manager (LDM), THREDDS data server (TDS), and RAMADDA geoscience content management system. We provide Jupyter Notebooks and the pre-built Python environments needed to run them. The notebooks can be used for instruction and as templates for scientific experimentation and discovery. We also supply a large quantity of NCEP forecast model results to allow data-proximate analysis and visualization. In addition, users can transfer data using Globus command line tools, and perform their own data-proximate analysis and visualization with Notebook technology. These data can be shared with others via a dedicated TDS server for scientific distribution and collaboration. There are many benefits of this approach. Not only is the cloud computing environment fast, reliable and scalable, but scientists can analyze, visualize, and share data using only their web browser. No local specialized desktop software or a fast internet connection is required. This environment will enable scientists to spend less time managing their software and more time doing science.

  3. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review.

    PubMed

    Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline

    2013-06-01

    What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intra-rater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. All rights reserved.

  4. Turkish Metalinguistic Awareness Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Varisoglu, Behice

    2018-01-01

    The aim of this study is to develop a useful, valid and reliable measurement tool that will help teacher candidates determine their Turkish metalinguistic awareness. During the development of the scale, a pool of items was created by scanning the relevant literature and examining other awareness scales. The materials prepared were re-examined…

  5. Mining a database of single amplified genomes from Red Sea brine pool extremophiles—improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA)

    PubMed Central

    Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg

    2014-01-01

    Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophiles' genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO) data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO)-terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 enzyme commission (E.C.) numbers was translated into 171 GO-terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO-terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO-terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website. PMID:24778629
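
    The reliability rule described above (accept an annotation only when at least two relevant descriptors support it) reduces to a small set-intersection check; the GO and PROSITE identifiers below are invented examples:

```python
# Relevant descriptors for one target enzyme function (illustrative IDs)
RELEVANT_GO = {"GO:0016787", "GO:0004553"}     # profile filter (GO terms)
RELEVANT_PROSITE = {"PS00592", "PS01124"}      # pattern filter (consensus patterns)

def is_reliable_hit(gene_go: set[str], gene_prosite: set[str]) -> bool:
    """PPM-style call: at least 2 relevant descriptors across both filters."""
    hits = len(gene_go & RELEVANT_GO) + len(gene_prosite & RELEVANT_PROSITE)
    return hits >= 2

print(is_reliable_hit({"GO:0016787"}, {"PS00592"}))  # True: one hit per filter
print(is_reliable_hit({"GO:0016787"}, set()))        # False: single descriptor
```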

  6. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.

    2004-12-01

    With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.

  7. Information content of incubation experiments for inverse estimation of pools in the Rothamsted carbon model: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Vrugt, Jasper A.; Vereecken, Harry; Herbst, Michael

    2010-05-01

    Turnover of soil organic matter is usually described with multi-compartment models. However, a major drawback of these models is that the conceptually defined compartments (or pools) do not necessarily correspond to measurable soil organic carbon (SOC) fractions in real practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states. In this study, we tested the usefulness and applicability of inverse modeling to derive the various carbon pool sizes in the Rothamsted carbon model (ROTHC) using a synthetic time series of mineralization rates from laboratory incubation. To appropriately account for data and model uncertainty we considered a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. This Markov chain Monte Carlo scheme derives the posterior probability density distribution of the initial pool sizes at the start of incubation from observed mineralization rates. We used the Kullback-Leibler divergence to quantify the information contained in the data and to illustrate the effect of increasing incubation times on the reliability of the pool size estimates. Our results show that measured mineralization rates generally provide sufficient information to reliably estimate the sizes of all active pools in the ROTHC model. However, with about 900 days of incubation, these experiments are excessively long. The use of prior information on microbial biomass provided a way forward to significantly reduce uncertainty and required duration of incubation to about 600 days. Explicit consideration of model parameter uncertainty in the estimation process further impaired the identifiability of initial pools, especially for the more slowly decomposing pools. Our illustrative case studies show how Bayesian inverse modeling can be used to provide important insights into the information content of incubation experiments. Moreover, the outcome of this virtual experiment helps to explain the results of related real-world studies on SOC dynamics.
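
    The information measure used here can be illustrated directly: the Kullback-Leibler divergence between a discretized posterior and prior quantifies how much the incubation data constrained a pool size. A minimal NumPy sketch with made-up histograms:

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) for discrete distributions on a common grid (in nats)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Made-up discretized prior and posterior over an initial pool size
prior = np.full(10, 0.1)                       # flat prior over 10 bins
posterior = np.array([0.0, 0.02, 0.08, 0.25, 0.40,
                      0.15, 0.07, 0.02, 0.01, 0.0])
print(f"information gain = {kl_divergence(posterior, prior):.3f} nats")
```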

  8. The SAMS: Smartphone Addiction Management System and verification.

    PubMed

    Lee, Heyoung; Ahn, Heejune; Choi, Samwook; Choi, Wanbok

    2014-01-01

    While the popularity of smartphones has given enormous convenience to our lives, their pathological use has created a new mental health concern in the community. Hence, intensive research is being conducted on the etiology and treatment of the condition. However, the traditional clinical approach based on surveys and interviews has serious limitations: health professionals cannot perform continual assessment and intervention for the affected group, and the subjectivity of assessment is questionable. To cope with these limitations, a comprehensive ICT (Information and Communications Technology) system called SAMS (Smartphone Addiction Management System) was developed for objective assessment and intervention. The SAMS system consists of an Android smartphone application and a web application server. The SAMS client monitors the user's application usage together with GPS location and Internet access location, and transmits the data to the SAMS server. The SAMS server stores the usage data and performs key statistical data analysis and usage intervention according to the clinicians' decision. To verify the reliability and efficacy of the developed system, a comparison study with survey-based screening using the K-SAS (Korean Smartphone Addiction Scale), as well as self-field trials, was performed. The comparison study used data from 14 adult users aged 19 to 50 years who left at least one week of usage logs and completed the survey questionnaires. The field trial fully verified the accuracy of the time, location, and Internet access information in the usage measurement and the reliability of the system operation over more than 2 weeks. The comparison study showed that daily use count has a strong correlation with K-SAS scores, whereas daily use times do not strongly correlate for potentially addicted users. The correlation coefficients of counts and times with the total K-SAS score are CC = 0.62 and CC = 0.07, respectively, and t-tests contrasting potentially addicted and non-addicted users gave p = 0.047 and p = 0.507, respectively.

  9. A reliable multicast for XTP

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicast services needed for current distributed applications on LANs fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer (error, flow, and rate control) are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time before, during, or after the data transfer.

  10. Risk Assessment of the Naval Postgraduate School Gigabit Network

    DTIC Science & Technology

    2004-09-01

    • Management Server (1) • RAS Server (1) • Remedy Server (1) • Samba Server (2) • SQL Servers (3) • Web Servers (3) • WINS Server (1) • Library... [fragment of a flattened server inventory table listing host names, operating systems, projects, and administrators]

  11. Research Review: Test-retest reliability of standardized diagnostic interviews to assess child and adolescent psychiatric disorders: a systematic review and meta-analysis.

    PubMed

    Duncan, Laura; Comeau, Jinette; Wang, Li; Vitoroulis, Irene; Boyle, Michael H; Bennett, Kathryn

    2018-02-19

    A better understanding of factors contributing to the observed variability in estimates of test-retest reliability in published studies on standardized diagnostic interviews (SDIs) is needed. The objectives of this systematic review and meta-analysis were to estimate the pooled test-retest reliability for parent and youth assessments of seven common disorders, and to examine sources of between-study heterogeneity in reliability. Following a systematic review of the literature, multilevel random-effects meta-analyses were used to analyse 202 reliability estimates (Cohen's kappa, κ) from 31 eligible studies and 5,369 assessments of 3,344 children and youth. Pooled reliability was moderate at κ = .58 (95% CI 0.53-0.63) and between-study heterogeneity was substantial (Q = 2,063, df = 201, p < .001; I² = 79%). In subgroup analysis, reliability varied across informants for specific types of psychiatric disorder (κ = .53-.69 for parent vs. κ = .39-.68 for youth), with estimates significantly higher for parents on attention deficit hyperactivity disorder, oppositional defiant disorder, and the broad groupings of externalizing and any disorder. Reliability was also significantly higher in studies with indicators of poor or fair methodological quality (sample size <50, retest interval <7 days). Our findings raise important questions about the meaningfulness of published evidence on the test-retest reliability of SDIs and the usefulness of these tools in both clinical and research contexts. Potential remedies include the introduction of standardized study and reporting requirements for reliability studies, and exploration of other approaches to assessing and classifying child and adolescent psychiatric disorder. © 2018 Association for Child and Adolescent Mental Health.
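
    For a feel of how such pooling works, here is a simplified DerSimonian-Laird random-effects pooling of kappa estimates. The review itself used multilevel models; the study-level kappas and standard errors below are invented for illustration:

      # Simplified random-effects pooling of kappa estimates (DerSimonian-Laird).
      # Data are illustrative, not from the review.
      import numpy as np

      k  = np.array([0.45, 0.60, 0.72, 0.51, 0.66])   # study-level kappas (hypothetical)
      se = np.array([0.08, 0.05, 0.06, 0.09, 0.07])   # their standard errors (hypothetical)

      w_fixed = 1 / se**2
      k_fixed = np.sum(w_fixed * k) / np.sum(w_fixed)

      # Heterogeneity: Q statistic and I^2, then between-study variance tau^2
      Q  = np.sum(w_fixed * (k - k_fixed) ** 2)
      df = len(k) - 1
      I2 = max(0.0, (Q - df) / Q) * 100
      tau2 = max(0.0, (Q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

      w_rand = 1 / (se**2 + tau2)
      k_pooled = np.sum(w_rand * k) / np.sum(w_rand)
      ci = 1.96 / np.sqrt(np.sum(w_rand))
      print(f"pooled kappa = {k_pooled:.2f} "
            f"(95% CI {k_pooled - ci:.2f}-{k_pooled + ci:.2f}), I^2 = {I2:.0f}%")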

  12. Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan (Technical Monitor); Ray, Asok

    2004-01-01

    This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.

  13. WAN Optimization: A Business Process Reengineering and Knowledge Value Added Approach

    DTIC Science & Technology

    2011-03-01

    … processing is not affected. Reliability: if either the Customer or Order system is unavailable, order processing halts and alerts are … online immediately, and sends a fax to the customer who orders the container; the whole order process can be completed in one day. IT plays … Messages build up in the OrderQ until the email server restarts; the SendEmail component then sends them to remove the backlog.

  14. Time to competency, reliability of flexible transnasal laryngoscopy by training level: a pilot study.

    PubMed

    Brook, Christopher D; Platt, Michael P; Russell, Kimberly; Grillone, Gregory A; Aliphas, Avner; Noordzij, J Pieter

    2015-05-01

    To determine the progression of flexible transnasal laryngoscopy reliability and competency in otolaryngology residency training. Prospective case-control study. Academic otolaryngology department. Medical students, otolaryngology residents, and otolaryngology attending physicians. Fourteen otolaryngology residents from PGY-1 to PGY-5 and 3 attending otolaryngologists viewed 25 selected and digitally recorded flexible transnasal laryngoscopies. The evaluators were asked to rate 13 items relating to abnormalities in the oropharynx, hypopharynx, larynx, and subglottis. The level of concern and level of comfort with the diagnosis were also assessed. Intraclass correlations were calculated for each topic and by level of training to determine reliability within each class and to compare competency against attending interpretations. Intraclass correlation of residents compared to attending physicians demonstrated significant improvements by year for left and right vocal fold immobility, subglottic stenosis, laryngeal mass, left and right vocal cord abnormalities, and level of concern. Additionally, pooled vocal cord mobility and pooled results in categories with good attending reliability demonstrated stepwise improvement as well. For these categories, resident reliability was found to be statistically similar to that of attending physicians in all categories by PGY-3. There were no trends for base of tongue abnormalities, pharyngeal abnormalities, and pharyngeal and hypopharyngeal masses. Resident competency in flexible transnasal laryngoscopy progresses during residency, reaching reliability comparable to attending otolaryngologists by the PGY-3 year on key facets of the examination. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.

  15. Stroke Treatment Academic Industry Roundtable Recommendations for Individual Data Pooling Analyses in Stroke.

    PubMed

    Lees, Kennedy R; Khatri, Pooja

    2016-08-01

    Pooled analysis of individual patient data from stroke trials can deliver more precise estimates of treatment effect, enhance power to examine prespecified subgroups, and facilitate exploration of treatment-modifying influences. Analysis plans should be declared, and preferably published, before trial results are known. For pooling trials that used diverse analytic approaches, an ordinal analysis is favored, with justification for considering deaths and severe disability jointly. Because trial pooling is an incremental process, analyses should follow a sequential approach, with statistical adjustment for iterations. Updated analyses should be published when revised conclusions have a clinical implication. However, caution is recommended in declaring pooled findings that may prejudice ongoing trials, unless clinical implications are compelling. All contributing trial teams should contribute to leadership, data verification, and authorship of pooled analyses. Development work is needed to enable reliable inferences to be drawn about individual drug or device effects that contribute to a pooled analysis, versus a class effect, if the treatment strategy combines ≥2 such drugs or devices. Despite the practical challenges, pooled analyses are powerful and essential tools in interpreting clinical trial findings and advancing clinical care. © 2016 American Heart Association, Inc.

  16. HomPPI: a class of sequence homology based protein-protein interface prediction methods

    PubMed Central

    2011-01-01

    Background Although homology-based methods are among the most widely used methods for predicting the structure and function of proteins, the question as to whether interface sequence conservation can be effectively exploited in predicting protein-protein interfaces has been a subject of debate. Results We studied more than 300,000 pair-wise alignments of protein sequences from structurally characterized protein complexes, including both obligate and transient complexes. We identified sequence similarity criteria required for accurate homology-based inference of interface residues in a query protein sequence. Based on these analyses, we developed HomPPI, a class of sequence homology-based methods for predicting protein-protein interface residues. We present two variants of HomPPI: (i) NPS-HomPPI (Non partner-specific HomPPI), which can be used to predict interface residues of a query protein in the absence of knowledge of the interaction partner; and (ii) PS-HomPPI (Partner-specific HomPPI), which can be used to predict the interface residues of a query protein with a specific target protein. Our experiments on a benchmark dataset of obligate homodimeric complexes show that NPS-HomPPI can reliably predict protein-protein interface residues in a given protein, with an average correlation coefficient (CC) of 0.76, sensitivity of 0.83, and specificity of 0.78, when sequence homologs of the query protein can be reliably identified. NPS-HomPPI also reliably predicts the interface residues of intrinsically disordered proteins. Our experiments suggest that NPS-HomPPI is competitive with several state-of-the-art interface prediction servers including those that exploit the structure of the query proteins. The partner-specific classifier, PS-HomPPI can, on a large dataset of transient complexes, predict the interface residues of a query protein with a specific target, with a CC of 0.65, sensitivity of 0.69, and specificity of 0.70, when homologs of both the query and the target can be reliably identified. The HomPPI web server is available at http://homppi.cs.iastate.edu/. Conclusions Sequence homology-based methods offer a class of computationally efficient and reliable approaches for predicting the protein-protein interface residues that participate in either obligate or transient interactions. For query proteins involved in transient interactions, the reliability of interface residue prediction can be improved by exploiting knowledge of putative interaction partners. PMID:21682895
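
    The headline numbers above (CC, sensitivity, specificity) derive from per-residue confusion counts. A small sketch, assuming CC denotes the Matthews correlation coefficient as is common in interface prediction; the counts are invented, not HomPPI results:

      # Interface-prediction metrics from confusion counts (illustrative values).
      import math

      TP, FP, TN, FN = 83, 22, 78, 17

      sensitivity = TP / (TP + FN)
      specificity = TN / (TN + FP)
      # Matthews correlation coefficient, assumed here to be the reported "CC"
      cc = (TP * TN - FP * FN) / math.sqrt(
          (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
      print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} CC={cc:.2f}")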

  17. Test-bed for the remote health monitoring system for bridge structures using FBG sensors

    NASA Astrophysics Data System (ADS)

    Lee, Chin-Hyung; Park, Ki-Tae; Joo, Bong-Chul; Hwang, Yoon-Koog

    2009-05-01

    This paper reports on a test-bed for a long-term health monitoring system for bridge structures employing fiber Bragg grating (FBG) sensors, which is remotely accessible via the web, to provide real-time quantitative information on a bridge's response to live loading and environmental changes, and fast prediction of the structure's integrity. The sensors are attached at several locations on the structure and connected to a data acquisition system permanently installed onsite. The system can be accessed through remote communication over an optical cable network, allowing the bridge's behavior under live loading to be evaluated far from the field. Live structural data are transmitted continuously to the server computer at the central office. The server computer is connected securely to the internet, where data can be retrieved, processed, and stored for remote web-based health monitoring. The test-bed revealed that remote health monitoring technology will enable practical, cost-effective, and reliable condition assessment and maintenance of bridge structures.

  18. Advanced Engineering Environment FY09/10 pilot project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamph, Jane Ann; Kiba, Grant W.; Pomplun, Alan R.

    2010-06-01

    The Advanced Engineering Environment (AEE) project identifies emerging engineering environment tools and assesses their value to Sandia National Laboratories and our partners in the Nuclear Security Enterprise (NSE) by testing them in our design environment. This project accomplished several pilot activities, including: the preliminary definition of an engineering bill of materials (BOM) based product structure in the Windchill PDMLink 9.0 application; an evaluation of the Mentor Graphics Data Management System (DMS) application for electrical computer-aided design (ECAD) library administration; and implementation and documentation of a Windchill 9.1 application upgrade. The project also supported the migration of legacy data from existing corporate product lifecycle management systems into new classified and unclassified Windchill PDMLink 9.0 systems. The project included two infrastructure modernization efforts: the replacement of two aging AEE development servers with reliable platforms for ongoing AEE project work; and the replacement of four critical application and license servers that support design and engineering work at the Sandia National Laboratories/California site.

  19. Smart Grid Privacy through Distributed Trust

    NASA Astrophysics Data System (ADS)

    Lipton, Benjamin

    Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
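
    The additive secret-sharing idea the abstract describes can be sketched in a few lines: each meter splits its reading into random shares, one per aggregation server, so no single server learns the reading, yet the shares sum to the true aggregate. The modulus, server count, and readings below are illustrative assumptions:

      # Additive secret sharing for meter aggregation (minimal sketch).
      import secrets

      P = 2**61 - 1  # public prime modulus, large enough for summed readings

      def share(reading, n_servers):
          """Split a meter reading into n additive shares mod P."""
          shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
          shares.append((reading - sum(shares)) % P)
          return shares  # any single share alone reveals nothing

      # Three meters, three aggregation servers (hypothetical readings in Wh)
      readings = [120, 95, 240]
      per_server = list(zip(*(share(r, 3) for r in readings)))

      # Each server sums the shares it received; the utility adds the partials.
      partials = [sum(col) % P for col in per_server]
      total = sum(partials) % P
      assert total == sum(readings)
      print("aggregate consumption:", total)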

  20. Template-based protein structure modeling using the RaptorX web server.

    PubMed

    Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo

    2012-07-19

    A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world.

  1. Sequence-Based Prediction of RNA-Binding Residues in Proteins.

    PubMed

    Walia, Rasna R; El-Manzalawy, Yasser; Honavar, Vasant G; Dobbs, Drena

    2017-01-01

    Identifying individual residues in the interfaces of protein-RNA complexes is important for understanding the molecular determinants of protein-RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein-RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein-RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner.

  2. Design of Instant Messaging System of Multi-language E-commerce Platform

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru

    2017-09-01

    This paper researches the message subsystem of an instant messaging system based on a multi-language e-commerce platform, with the goals of designing instant messaging for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. To provide an attractive and friendly interface for the front end of the message system while reducing development cost, the mature jQuery framework is adopted. The high-performance Tomcat server is used at the back end to process user requests; a MySQL database provides persistent storage of user data, while an Oracle database serves as the message buffer for system optimization. Moreover, AJAX is used so that the client actively pulls the newest data from the server at a specified interval. In practical use, the system shows strong reliability, good extensibility, short response times, high throughput, and high user concurrency.
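
    The client's periodic pull follows a simple polling pattern; here it is sketched in Python rather than the jQuery/AJAX the system actually uses, with a hypothetical endpoint URL and message format:

      # Periodic-pull messaging sketch (the real client uses jQuery/AJAX).
      # The endpoint URL and message fields are hypothetical.
      import time
      import requests

      last_seen = 0  # id of the newest message already displayed

      def poll_messages(base_url="http://example.com/api/messages"):
          global last_seen
          resp = requests.get(base_url, params={"since": last_seen}, timeout=5)
          resp.raise_for_status()
          for msg in resp.json():            # expected: [{"id": ..., "text": ...}]
              print(msg["text"])
              last_seen = max(last_seen, msg["id"])

      for _ in range(3):                     # the client pulls at a fixed interval
          poll_messages()
          time.sleep(2)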

  3. Sequence-Based Prediction of RNA-Binding Residues in Proteins

    PubMed Central

    Walia, Rasna R.; EL-Manzalawy, Yasser; Honavar, Vasant G.; Dobbs, Drena

    2017-01-01

    Identifying individual residues in the interfaces of protein–RNA complexes is important for understanding the molecular determinants of protein–RNA recognition and has many potential applications. Recent technical advances have led to several high-throughput experimental methods for identifying partners in protein–RNA complexes, but determining RNA-binding residues in proteins is still expensive and time-consuming. This chapter focuses on available computational methods for identifying which amino acids in an RNA-binding protein participate directly in contacting RNA. Step-by-step protocols for using three different web-based servers to predict RNA-binding residues are described. In addition, currently available web servers and software tools for predicting RNA-binding sites, as well as databases that contain valuable information about known protein–RNA complexes, RNA-binding motifs in proteins, and protein-binding recognition sites in RNA are provided. We emphasize sequence-based methods that can reliably identify interfacial residues without the requirement for structural information regarding either the RNA-binding protein or its RNA partner. PMID:27787829

  4. Template-based protein structure modeling using the RaptorX web server

    PubMed Central

    Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo

    2016-01-01

    A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world. PMID:22814390

  5. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

    This paper explores methods for building a regional Internet of Things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing, reliable storage of massive numbers of small files, and the implementation of fast search functions.
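
    One of the key techniques named, server load balancing, can be illustrated with a minimal least-connections balancer. This is a generic sketch under simplifying assumptions (connections are never released), not the platform's implementation:

      # Least-connections load balancing, minimal sketch.
      import heapq

      class LeastConnectionsBalancer:
          def __init__(self, servers):
              # heap of (active_connections, server_name)
              self.heap = [(0, s) for s in servers]
              heapq.heapify(self.heap)

          def acquire(self):
              # Route to the least-loaded server (releases omitted for brevity).
              load, server = heapq.heappop(self.heap)
              heapq.heappush(self.heap, (load + 1, server))
              return server

      balancer = LeastConnectionsBalancer(["ecg-node-1", "ecg-node-2", "ecg-node-3"])
      for _ in range(5):
          print("route ECG stream to", balancer.acquire())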

  6. Experience in Construction and Operation of the Distributed Information Systems on the Basis of the Z39.50 Protocol

    NASA Astrophysics Data System (ADS)

    Zhizhimov, Oleg; Mazov, Nikolay; Skibin, Sergey

    Questions concerning the construction and operation of distributed information systems based on the ANSI/NISO Z39.50 Information Retrieval Protocol are discussed in the paper. The paper is based on the authors' experience in developing the ZooPARK server. The architecture of distributed information systems, the reliability of such systems, minimization of search time, and administration are examined. Problems in developing distributed information systems are also described.

  7. [Implementation of ECG Monitoring System Based on Internet of Things].

    PubMed

    Lu, Liangliang; Chen, Minya

    2015-11-01

    In order to expand the capabilities of a hospital's traditional ECG devices and enhance medical staff's work efficiency, an ECG monitoring system based on the Internet of Things is introduced. The system can monitor ECG signals in real time and analyze data using ECG sensors, PDAs, and web servers, built with C, Android, .NET, wireless networking, and other technologies. Experiments showed that the system has high reliability and stability and brings convenience to medical staff.

  8. Analysis of electric power industry restructuring

    NASA Astrophysics Data System (ADS)

    Al-Agtash, Salem Yahya

    1998-10-01

    This thesis evaluates alternative structures of the electric power industry in a competitive environment. One structure is based on the principle of creating a mandatory power pool to foster competition and manage system economics: PoolCo (pool coordination). A second structure is based on the principle of allowing independent multilateral trading and decentralized market coordination: DecCo (decentralized coordination). The criteria I use to evaluate these two structures are economic efficiency, system reliability, and freedom of choice. Economic efficiency evaluation considers the strategic behavior of individual generators as well as behavioral variations among different classes of consumers. A supply-function equilibria model is characterized for deriving bidding strategies of competing generators under PoolCo. It is shown that asymmetric equilibria can exist within the capacities of generators. An augmented Lagrangian approach is introduced to solve iteratively for globally optimal operations schedules. Under DecCo, the process involves solving iteratively for system operations schedules. The schedules reflect generators' strategic behavior and brokers' interactions in arranging profitable trades, allocating losses, and managing network congestion. In determining PoolCo and DecCo operations schedules, the overall costs of power generation (start-up and shut-down costs and the availability of hydroelectric power) as well as the losses and costs of the transmission network are considered. For system reliability evaluation, I examine the effect of PoolCo and DecCo operating conditions on system security. Random component failure perturbations are generated to simulate actual system behavior, using Monte Carlo simulation. Freedom-of-choice evaluation accounts for each scheme's beneficial opportunities and its capability to respond to consumers' expressed preferences. An IEEE 24-bus test system is used to illustrate the concepts developed for economic efficiency evaluation. The system was tested over a two-year period. The results indicate efficiency losses of 2.6684 and 2.7269 percent on average for PoolCo and DecCo, respectively. These values, however, do not represent forecasts of efficiency losses for PoolCo- and DecCo-based competitive industries; rather, they illustrate the efficiency losses for the given IEEE test system under the modeling assumptions of the framework.

  9. Investigation of rare and low-frequency variants using high-throughput sequencing with pooled DNA samples

    PubMed Central

    Wang, Jingwen; Skoog, Tiina; Einarsdottir, Elisabet; Kaartokallio, Tea; Laivuori, Hannele; Grauers, Anna; Gerdhem, Paul; Hytönen, Marjo; Lohi, Hannes; Kere, Juha; Jiao, Hong

    2016-01-01

    High-throughput sequencing using pooled DNA samples can facilitate genome-wide studies on rare and low-frequency variants in a large population. Some major questions concerning the pooling sequencing strategy are whether rare and low-frequency variants can be detected reliably, and whether estimated minor allele frequencies (MAFs) can represent the actual values obtained from individually genotyped samples. In this study, we evaluated MAF estimates using three variant detection tools with two sets of pooled whole exome sequencing (WES) and one set of pooled whole genome sequencing (WGS) data. Both GATK and Freebayes displayed high sensitivity, specificity and accuracy when detecting rare or low-frequency variants. For the WGS study, 56% of the low-frequency variants in Illumina array have identical MAFs and 26% have one allele difference between sequencing and individual genotyping data. The MAF estimates from WGS correlated well (r = 0.94) with those from Illumina arrays. The MAFs from the pooled WES data also showed high concordance (r = 0.88) with those from the individual genotyping data. In conclusion, the MAFs estimated from pooled DNA sequencing data reflect the MAFs in individually genotyped samples well. The pooling strategy can thus be a rapid and cost-effective approach for the initial screening in large-scale association studies. PMID:27633116
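
    The core computation is simple: a pooled MAF estimate is the alternate-read fraction at a site, and concordance is scored by correlating pooled estimates against individually genotyped frequencies. A sketch with invented counts:

      # MAF estimation from pooled reads, plus a concordance check against
      # individual genotyping. All counts and frequencies are illustrative.
      import numpy as np
      from scipy import stats

      # Per-site read counts from the pooled library (hypothetical)
      alt_reads   = np.array([12, 5, 30, 9, 18])
      total_reads = np.array([250, 240, 260, 255, 245])
      maf_pooled  = alt_reads / total_reads

      # "True" frequencies from individually genotyped samples (hypothetical)
      maf_individual = np.array([0.050, 0.022, 0.115, 0.033, 0.070])

      r, _ = stats.pearsonr(maf_pooled, maf_individual)
      print(f"pooled vs. individual MAF: r = {r:.2f}")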

  10. Hardware Assisted Stealthy Diversity (CHECKMATE)

    DTIC Science & Technology

    2013-09-01

    … applicable across multiple architectures. Figure 29 shows an example of an attack against an interpreted environment (ARM, PPC, and x86 hosts running a Java VM) with a Java executable. CHECKMATE can … a user executes "/usr/bin/wget" … Server 1 - Administration; Server 2 - Database (MySQL); Server 3 - Web server (Mongoose); Server 4 - File server (SSH); Server 5 - Email server

  11. Monitoring Physicochemical and Nutrient Dynamics Along a Development Gradient in Maine Ephemeral Wetlands

    NASA Astrophysics Data System (ADS)

    Podzikowski, L. Y.; Capps, K. A.; Calhoun, A.

    2014-12-01

    Vernal pools are ephemeral wetlands in forested landscapes that fill with snowmelt, precipitation, and/or groundwater in the spring, and characteristically dry down through the summer months. Typically, vernal pool research has focused on the population and community ecology of pool-breeding organisms (amphibians and macroinvertebrates) conducted during their relatively short breeding season. Yet, little is known about the temporal variability of biogeochemical processes within and among vernal pools in urbanizing landscapes. In this study, we monitored physicochemical characteristics and nutrient dynamics in 22 vernal pools in central Maine post-thaw in 2014. Four pristine pools were sampled weekly in five locations within the pool for ambient nutrient concentrations (SRP, NH4, NOx) and at three locations for physicochemical characteristics (DO, pH, temperature, conductivity). In the remaining 18 pools, we sampled one location for nutrients and three locations for physicochemical characteristics at least monthly to estimate the influence of increasing urbanization on the physical and chemical environment. Our data suggest most pools found in urbanizing areas have higher conductivity (developed sites ranging 18.52-1238 μS cm⁻¹ compared to pristine between 14.08-58.4 μS cm⁻¹). Previous work suggests forested pools exhibit dystrophic conditions with high coloration from DOC limiting primary production due to increased light attenuation in pools. However, both pristine and urban pools experienced spikes in DO (>100% saturation) throughout the day, suggesting that high productivity is not a reliable indicator of the effects of urbanization on vernal pools. We argue that continued monitoring of vernal pools along a gradient of urbanization could give insight into the role of ephemeral wetlands as potential biogeochemical hotspots and may also indicate how human development may alter biogeochemical cycling in ephemeral wetlands.

  12. The Helioviewer Project: Solar Data Visualization and Exploration

    NASA Astrophysics Data System (ADS)

    Hughitt, V. Keith; Ireland, J.; Müller, D.; García Ortiz, J.; Dimitoglou, G.; Fleck, B.

    2011-05-01

    SDO has only been operating a little over a year, but in that short time it has already transmitted hundreds of terabytes of data, making it impossible for data providers to maintain a complete archive of data online. By storing an extremely efficiently compressed subset of the data, however, the Helioviewer project has been able to maintain a continuous record of high-quality SDO images starting from soon after the commissioning phase. The Helioviewer project was not designed to deal with SDO alone, however, and continues to add support for new types of data, the most recent of which are STEREO EUVI and COR1/COR2 images. In addition to adding support for new types of data, improvements have been made to both the server-side and client-side products that are part of the project. A new open-source JPEG2000 (JPIP) streaming server has been developed offering a vastly more flexible and reliable backend for the Java/OpenGL application JHelioviewer. Meanwhile the web front-end, Helioviewer.org, has also made great strides both in improving reliability, and also in adding new features such as the ability to create and share movies on YouTube. Helioviewer users are creating nearly two thousand movies a day from the over six million images that are available to them, and that number continues to grow each day. We provide an overview of recent progress with the various Helioviewer Project components and discuss plans for future development.

  13. A mobile field-work data collection system for the wireless era of health surveillance.

    PubMed

    Forsell, Marianne; Sjögren, Petteri; Renard, Matthew; Johansson, Olle

    2011-03-01

    In many countries or regions the capacity of health care resources is below the needs of the population and new approaches for health surveillance are needed. Innovative projects, utilizing wireless communication technology, contribute to reliable methods for field-work data collection and reporting to databases. The objective was to describe a new version of a wireless IT-support system for field-work data collection and administration. The system requirements were drawn from the design objective and translated to system functions. The system architecture was based on fieldwork experiences and administrative requirements. The Smartphone devices were HTC Touch Diamond2s, while the system was based on a platform with Microsoft .NET components, and a SQL Server 2005 with Microsoft Windows Server 2003 operating system. The user interfaces were based on .NET programming, and Microsoft Windows Mobile operating system. A synchronization module enabled download of field data to the database, via a General Packet Radio Services (GPRS) to a Local Area Network (LAN) interface. The field-workers considered the here-described applications user-friendly and almost self-instructing. The office administrators considered that the back-office interface facilitated retrieval of health reports and invoice distribution. The current IT-support system facilitates short lead times from fieldwork data registration to analysis, and is suitable for various applications. The advantages of wireless technology, and paper-free data administration need to be increasingly emphasized in development programs, in order to facilitate reliable and transparent use of limited resources.

  14. Robotic tape library system level testing at NSA: Present and planned

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1994-01-01

    In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of application data sets, the requirement for high performance, large capacity, reliable, secure, and above all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on ever increasing importance for our processing centers' ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries that can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be extended this year and next at the server class or mid-range storage systems.

  15. The central equipment pool, an opportunity for improved technology management.

    PubMed

    Gentles, W M

    2000-01-01

    A model for a central equipment pool managed by a clinical engineering department has been presented. The advantages to patient care and to the clinical engineering department are many. The distribution of portable technology that has been traditionally managed by the materials management function is a logical match to the expanding role of clinical engineering departments in technology management. Accurate asset management tools have allowed us to provide reliable measures of infusion pump utilization, permitting us to predict future needs as programs expand. Thus we are more actively involved in strategic technology planning. The central equipment pool is an excellent opportunity for the clinical engineering department to increase its technology management activities.

  16. Effect of thermal cycling ramp rate on CSP assembly reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2001-01-01

    A JPL-led chip scale package consortium of enterprises recently joined together to pool in-kind resources for developing the quality and reliability of chip scale packages for a variety of projects. The consortium's experience in building more than 150 test vehicle assemblies and single- and double-sided multilayer PWBs, together with the environmental test results, has now been published as a chip scale package guidelines document.

  17. Information content of incubation experiments for inverse estimation of pools in the Rothamsted carbon model: a Bayesian perspective

    NASA Astrophysics Data System (ADS)

    Scharnagl, B.; Vrugt, J. A.; Vereecken, H.; Herbst, M.

    2010-02-01

    A major drawback of current soil organic carbon (SOC) models is that their conceptually defined pools do not necessarily correspond to measurable SOC fractions in real practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states of the individual carbon pools. In this study, we tested the feasibility of inverse modelling for estimating pools in the Rothamsted carbon model (ROTHC) using mineralization rates observed during incubation experiments. This inverse approach may provide an alternative to existing SOC fractionation methods. To illustrate our approach, we used a time series of synthetically generated mineralization rates using the ROTHC model. We adopted a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to infer probability density functions of the various carbon pools at the start of incubation. The Kullback-Leibler divergence was used to quantify the information content of the mineralization rate data. Our results indicate that measured mineralization rates generally provided sufficient information to reliably estimate all carbon pools in the ROTHC model. The incubation time necessary to appropriately constrain all pools was about 900 days. The use of prior information on microbial biomass carbon significantly reduced the uncertainty of the initial carbon pools, decreasing the required incubation time to about 600 days. Simultaneous estimation of initial carbon pools and decomposition rate constants significantly increased the uncertainty of the carbon pools. This effect was most pronounced for the intermediate and slow pools. Altogether, our results demonstrate that it is particularly difficult to derive reasonable estimates of the humified organic matter pool and the inert organic matter pool from inverse modelling of mineralization rates observed during incubation experiments.
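
    A toy version of this inverse problem, assuming a two-pool first-order decay model and a plain random-walk Metropolis sampler in place of the full ROTHC model and the DREAM algorithm; the decay rates, noise level, and synthetic observations are all invented:

      # Infer initial pool sizes from mineralization rates (toy Metropolis sketch).
      import numpy as np

      rng = np.random.default_rng(1)
      k = np.array([0.01, 0.0005])          # fixed decay rates (1/day), fast & slow
      t = np.arange(0, 900, 30.0)           # incubation sampling times (days)

      def mineralization(c0):               # rate = sum_i k_i * C_i(0) * exp(-k_i t)
          return (k * c0 * np.exp(-np.outer(t, k))).sum(axis=1)

      true_c0 = np.array([300.0, 4000.0])   # "unknown" initial pools (synthetic)
      obs = mineralization(true_c0) + rng.normal(0, 0.05, t.size)

      def log_post(c0):                     # Gaussian likelihood, flat positive prior
          if np.any(c0 <= 0):
              return -np.inf
          resid = obs - mineralization(c0)
          return -0.5 * np.sum((resid / 0.05) ** 2)

      c0, samples = np.array([500.0, 3000.0]), []
      lp = log_post(c0)
      for _ in range(20000):
          prop = c0 + rng.normal(0, [20.0, 100.0])
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
              c0, lp = prop, lp_prop
          samples.append(c0)
      print("posterior mean pools:", np.mean(samples[5000:], axis=0))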

  18. Effect of pooled comparative information on judgments of quality

    PubMed Central

    Baumgart, Leigh A.; Bass, Ellen J.; Voss, John D.; Lyman, Jason A.

    2015-01-01

    Quality assessment is the focus of many health care initiatives. Yet it is not well understood how the type of information used in decision support tools to enable judgments of quality based on data impacts the accuracy, consistency and reliability of judgments made by physicians. Comparative pooled information could allow physicians to judge the quality of their practice by making comparisons to other practices or other specific populations of patients. In this study, resident physicians were provided with varying types of information derived from pooled patient data sets: quality component measures at the individual and group level, a qualitative interpretation of the quality measures using percentile rank, and an aggregate composite quality score. 32 participants viewed thirty quality profiles consisting of information applicable to the practice of thirty de-identified resident physicians. Those provided with quality component measures and a qualitative interpretation of the quality measures (rankings) judged quality of care more similarly to experts and were more internally consistent compared to participants who were provided with quality component measures alone. Reliability between participants was significantly less for those who were provided with a composite quality score compared to those who were not. PMID:26949581

  19. Mobile field data acquisition in geosciences

    NASA Astrophysics Data System (ADS)

    Golodoniuc, Pavel; Klump, Jens; Reid, Nathan; Gray, David

    2016-04-01

    The Discovering Australia's Mineral Resources Program of CSIRO is conducting a study to develop novel methods and techniques to reliably define distal footprints of mineral systems under regolith cover in the Capricorn Orogen, the area that lies between the two well-known metallogenic provinces of the Pilbara and Yilgarn Cratons in Western Australia. The multidisciplinary study goes beyond the boundaries of a specific discipline and aims at developing new methods to integrate heterogeneous datasets to gain insight into the key indicators of mineralisation. The study relies on large regional datasets obtained from previous hydrogeochemical, regolith, and resistate mineral studies around known deposits, as well as new data obtained from recent field sampling campaigns around areas of interest. The thousands of water, vegetation, rock and soil samples collected over the past years prompted us to look at ways to standardise field sampling procedures and to review the data acquisition process. This process has evolved over the years (Golodoniuc et al., 2015; Klump et al., 2015) and has now reached the phase where fast and reliable collection of scientific data in remote areas is possible. The approach is backed by a unified discipline-agnostic platform, the Federated Archaeological Information Management System (FAIMS). FAIMS is an open source framework for mobile field data acquisition, developed at the University of New South Wales for archaeological field data collection. The FAIMS framework can easily be adapted to a diverse range of scenarios and different kinds of samples, each with its own peculiarities, with GPS integration and the ability to associate photographs taken with the device's embedded camera with captured data. Three modules have been developed so far, dedicated to geochemical water, plant and rock sampling. All modules feature automatic date and position recording, and reproduce the established data recording workflows. The rock sampling module also features an interactive GIS component that allows users to enter field observations as annotations on a map. The open communication protocols and file formats used by FAIMS modules allow easy integration with existing spatial data infrastructures and third-party applications, such as ArcGIS. The remoteness of the focus areas in the Capricorn region required reliable mechanisms for data replication and an added level of redundancy. This was achieved through the use of the FAIMS Server without a tightly coupled dependency on it: the mobile devices can continue to work independently in case the server fails. To support collaborative fieldwork, "FAIMS on a Truck" offers networked collaboration within a field team using mobile applications as asynchronous rich clients. The framework runs on compatible Android devices (e.g., tablets, smart phones) with the network infrastructure supported by a FAIMS Server. The server component is installed in a field vehicle to provide data synchronisation between multiple mobile devices, backup, and data transfer. The data entry process was streamlined and followed the workflow that field crews were accustomed to, with added data validation capabilities. The use of a common platform allowed us to adopt the framework across multiple disciplines, improve data acquisition times, and reduce human-introduced errors. We continue to work with other research groups and to explore possibilities for adopting the technology in other applications, e.g., agriculture.

  20. Network characteristics for server selection in online games

    NASA Astrophysics Data System (ADS)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability: latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or racing games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
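
    The group-selection problem the paper analyzes can be stated compactly: among servers whose worst per-player latency is playable, prefer the fairest one. A sketch with hypothetical measurements and an arbitrary playability cap:

      # Group-aware game server selection, minimal sketch (values hypothetical).
      latency_ms = {            # server -> per-player ping for a group of three
          "us-east":  [35, 110, 180],
          "eu-west":  [95, 40, 160],
          "sa-south": [120, 150, 60],
      }

      def pick_server(latencies, cap_ms=100):
          # Feasible servers keep every player under the playability cap;
          # among those, prefer the smallest spread (fairness), then lowest max.
          feasible = {s: p for s, p in latencies.items() if max(p) <= cap_ms}
          pool = feasible or latencies          # fall back if none qualifies
          return min(pool, key=lambda s: (max(pool[s]) - min(pool[s]), max(pool[s])))

      print("chosen server:", pick_server(latency_ms))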

  1. Impacts and Benefits of a Satellite Power System on the Electric Utility Industry

    NASA Technical Reports Server (NTRS)

    Winer, B. M.

    1977-01-01

    The purpose of this limited study was to investigate six specific issues associated with interfacing a Satellite Power System (5 GW) with large (by present standards) terrestrial power pools to a depth sufficient to determine if certain interface problems and/or benefits exist and what future studies of these problems are required. The issues investigated are as follows: (1) Stability of Power Pools Containing a 5 GWe SPS; (2) Extra Reserve Margin Required to Maintain the Reliability of Power Pools Containing a 5 GWe SPS; (3) Use of the SPS in Load Following Service (i.e. in two independent pools whose times of peak demand differ by three hours); (4) Ownership of the SPS and its effect on SPS Usage and Utility Costs; (5) Utility Sharing of SPS related RD and D Costs; (6) Utility Liability for SPS Related Hazards.

  2. Pyglidein - A Simple HTCondor Glidein Service

    NASA Astrophysics Data System (ADS)

    Schultz, D.; Riedel, B.; Merino, G.

    2017-10-01

    A major challenge for data processing and analysis at the IceCube Neutrino Observatory presents itself in connecting a large set of individual clusters together to form a computing grid. Most of these clusters do not provide a “standard” grid interface. Using a local account on each submit machine, HTCondor glideins can be submitted to virtually any type of scheduler. The glideins then connect back to a main HTCondor pool, where jobs can run normally with no special syntax. To respond to dynamic load, a simple server advertises the number of idle jobs in the queue and the resources they request. The submit script can query this server to optimize glideins to what is needed, or not submit if there is no demand. Configuring HTCondor dynamic slots in the glideins allows us to efficiently handle varying memory requirements as well as whole-node jobs. One step of the IceCube simulation chain, photon propagation in the ice, heavily relies on GPUs for faster execution. Therefore, one important requirement for any workload management system in IceCube is to handle GPU resources properly. Within the pyglidein system, we have successfully configured HTCondor glideins to use any GPU allocated to it, with jobs using the standard HTCondor GPU syntax to request and use a GPU. This mechanism allows us to seamlessly integrate our local GPU cluster with remote non-Grid GPU clusters, including specially allocated resources at XSEDE supercomputers.
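
    The demand-driven loop is straightforward to sketch: query the advertisement server for idle jobs and their resource requests, then size the glidein submission accordingly. The endpoint, JSON shape, and local-submit helper below are hypothetical assumptions, not pyglidein's actual API:

      # Demand-driven glidein submission, minimal sketch.
      import requests

      def submit_to_local_scheduler(**resources):
          # Placeholder for a qsub/sbatch/condor_submit call on this cluster.
          print("submit glidein with", resources)

      def submit_glideins(server_url="http://glidein-server.example.org/jobs"):
          demand = requests.get(server_url, timeout=10).json()
          # e.g. {"idle": 40, "gpus": 8, "memory_mb": 4000}  (assumed shape)
          if demand.get("idle", 0) == 0:
              return                                  # no demand: submit nothing
          n = min(demand["idle"], 20)                 # cap per scheduling cycle
          gpus = 1 if demand.get("gpus", 0) > 0 else 0
          for _ in range(n):
              submit_to_local_scheduler(cpus=1, gpus=gpus,
                                        memory_mb=demand.get("memory_mb", 2000))

      # submit_glideins()  # point at a real advertisement endpoint before running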

  3. An observatory control system for the University of Hawai'i 2.2m Telescope

    NASA Astrophysics Data System (ADS)

    McKay, Luke; Erickson, Christopher; Mukensnable, Donn; Stearman, Anthony; Straight, Brad

    2016-07-01

    The University of Hawai'i 2.2m telescope at Maunakea has operated since 1970, and has had several controls upgrades to date. The newest system will operate as a distributed hierarchy of GNU/Linux central server, networked single-board computers, microcontrollers, and a modular motion control processor for the main axes. Rather than just a telescope control system, this new effort is towards a cohesive, modular, and robust whole observatory control system, with design goals of fully robotic unattended operation, high reliability, and ease of maintenance and upgrade.

  4. Reading comprehension and its underlying components in second-language learners: A meta-analysis of studies comparing first- and second-language learners.

    PubMed

    Melby-Lervåg, Monica; Lervåg, Arne

    2014-03-01

    We report a systematic meta-analytic review of studies comparing reading comprehension and its underlying components (language comprehension, decoding, and phonological awareness) in first- and second-language learners. The review included 82 studies, and 576 effect sizes were calculated for reading comprehension and underlying components. Key findings were that, compared to first-language learners, second-language learners display a medium-sized deficit in reading comprehension (pooled effect size d = -0.62), a large deficit in language comprehension (pooled effect size d = -1.12), but only small differences in phonological awareness (pooled effect size d = -0.08) and decoding (pooled effect size d = -0.12). A moderator analysis showed that characteristics related to the type of reading comprehension test reliably explained the variation in the differences in reading comprehension between first- and second-language learners. For language comprehension, studies of samples from low socioeconomic backgrounds and samples where only the first language was used at home generated the largest group differences in favor of first-language learners. Test characteristics and study origin reliably contributed to the variations between the studies of language comprehension. For decoding, Canadian studies showed group differences in favor of second-language learners, whereas the opposite was the case for U.S. studies. Regarding implications, unless specific decoding problems are detected, interventions that aim to ameliorate reading comprehension problems among second-language learners should focus on language comprehension skills.

  5. Survey Software Evaluation

    DTIC Science & Technology

    2009-01-01

    Databases supported: Oracle 9i/10g, MySQL, MS SQL Server. Operating systems supported: Windows 2003 Server, Windows 2000 Server (32-bit) … Web servers supported: WebSTAR (Mac OS X), SunONE, Internet Information Services (IIS). Database servers supported: MS SQL Server, Oracle 9i/10g … challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular …

  6. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers is increasing rapidly along with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, such as voluntary labeling programs or mandatory energy performance standards, have been adopted or are being prepared in the US, the EU, and China. However, the energy performance of servers and the corresponding testing methods are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various server components on energy performance are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the proposed testing method, and the findings are discussed in the paper.
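
    Server energy-performance metrics of this kind usually reduce to normalized work delivered per unit of energy across load levels, in the spirit of SPECpower-style scoring. The load points and measurements below are illustrative, not from the paper:

      # Normalized work per watt across load levels (illustrative numbers).
      loads     = [0.1, 0.25, 0.5, 0.75, 1.0]        # fraction of full utilization
      ops_per_s = [1200, 3100, 6400, 9400, 12500]    # normalized throughput
      watts     = [95, 120, 170, 230, 300]           # measured wall power

      efficiency = sum(ops_per_s) / sum(watts)       # overall ops/s per watt
      print(f"energy performance: {efficiency:.1f} ops/s per watt")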

  7. Inference of chromosomal inversion dynamics from Pool-Seq data in natural and laboratory populations of Drosophila melanogaster.

    PubMed

    Kapun, Martin; van Schalkwyk, Hester; McAllister, Bryant; Flatt, Thomas; Schlötterer, Christian

    2014-04-01

    Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for seven cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations. © 2013 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  8. Inference of chromosomal inversion dynamics from Pool-Seq data in natural and laboratory populations of Drosophila melanogaster

    PubMed Central

    Kapun, Martin; van Schalkwyk, Hester; McAllister, Bryant; Flatt, Thomas; Schlötterer, Christian

    2014-01-01

    Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for seven cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations. PMID:24372777

  9. Empirical Validation of Pooled Whole Genome Population Re-Sequencing in Drosophila melanogaster

    PubMed Central

    Zhu, Yuan; Bergland, Alan O.; González, Josefa; Petrov, Dmitri A.

    2012-01-01

    The sequencing of pooled non-barcoded individuals is an inexpensive and efficient means of assessing genome-wide population allele frequencies, yet its accuracy has not been thoroughly tested. We assessed the accuracy of this approach on whole, complex eukaryotic genomes by resequencing pools of largely isogenic, individually sequenced Drosophila melanogaster strains. We called SNPs in the pooled data and estimated false positive and false negative rates using the SNPs called in individual strains as a reference. We also estimated allele frequencies of the SNPs using the "pooled" data and compared them with "true" frequencies taken from the estimates in the individual strains. We demonstrate that pooled sequencing provides a faithful estimate of population allele frequency with the error well approximated by binomial sampling, and is a reliable means of novel SNP discovery with low false positive rates. However, a sufficient number of strains should be used in the pooling because variation in the amount of DNA derived from individual strains is a substantial source of noise when the number of pooled strains is low. Our results and analysis confirm that pooled sequencing is a very powerful and cost-effective technique for assessing patterns of sequence variation in populations on genome-wide scales, and is applicable to any dataset where sequencing individuals or individual cells is impossible, difficult, time consuming, or expensive. PMID:22848651
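
    The binomial error model the authors validate is easy to state: with coverage n and alternate-read count x, the frequency estimate x/n has standard error sqrt(p(1-p)/n). A tiny worked sketch with invented counts:

      # Binomial sampling error of a pooled allele-frequency estimate.
      import math

      coverage, alt_reads = 200, 30                    # illustrative read counts
      p_hat = alt_reads / coverage                     # estimated allele frequency
      se = math.sqrt(p_hat * (1 - p_hat) / coverage)   # binomial standard error
      print(f"p = {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI, binomial approximation)")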

  10. PRIMO: An Interactive Homology Modeling Pipeline.

    PubMed

    Hatherley, Rowan; Brown, David K; Glenister, Michael; Tastan Bishop, Özlem

    2016-01-01

    The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO's automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/.

  11. A SPDS Node to Support the Systematic Interpretation of Cosmic Ray Data

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The purpose of this project was to establish and maintain a Space Physics Data System (SPDS) node that supports the analysis and interpretation of current and future galactic cosmic ray (GCR) measurements by (1) providing on-line databases relevant to GCR propagation studies; (2) providing other on-line services, such as anonymous FTP access, mail list service and pointers to e-mail address books, to support the cosmic ray community; (3) providing a mechanism for those in the community who might wish to submit similar contributions for public access; (4) maintaining the node to assure that the databases remain current; and (5) investigating other possibilities, such as CD-ROM, for public dissemination of the data products. Shortly after the original grant to support these activities was established at Louisiana State University a detailed study of alternate choices for the node hardware was initiated. The chosen hardware was an Apple Workgroup Server 9150/120 consisting of a 120 MHz PowerPC 601 processor, 32 MB of memory, two 1 GB disks and one 2 GB disk. This hardware was ordered and installed and has been operating reliably ever since. A preliminary version of the database server was available during the first year effort and was used as part of the very successful SPDS demonstration during the Rome, Italy International Cosmic Ray Conference. For this server version we were able to establish the HTML and anonymous FTP server software, develop a Web page structure which can be easily modified to include new items, provide an on-line database of charge changing total cross sections, include the cross section prediction software of Silberberg & Tsao as well as Webber, Kish and Schrier for download access, and provide an on-line bibliography of the cross section measurement references by the Transport Collaboration. The preliminary version of this SPDS Cosmic Ray node was examined by members of the C&H SPDS committee and returned comments were used to refine the implementation.

  12. DOMe: A deduplication optimization method for the NewSQL database backups

    PubMed Central

    Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng

    2017-01-01

    Reducing duplicated data of database backups is an important application scenario for data deduplication technology. NewSQL is an emerging database system and is now being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, resulting in a lot of duplicated data. The traditional deduplication method is not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. A recent research pointed out that the future NewSQL server will have thousands of CPU cores, large DRAM and huge NVRAM. Therefore, how to utilize these hardware resources to optimize the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server to optimize deduplication performance, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, which is the key data structure in the deduplication process, is implemented as pure in-memory hash table, which makes full use of the large DRAM in NewSQL system, eliminating the performance bottleneck problem of fingerprint index existing in traditional deduplication method. The H-store is used as a typical NewSQL database system to implement DOMe method. DOMe is experimentally analyzed by two representative backup data. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data. 2) DOMe significantly improves deduplication performance by parallelizing CDC algorithms. In the case of the theoretical speedup ratio of the server is 20.8, the speedup ratio of DOMe can achieve up to 18; 3) DOMe improved the deduplication throughput by 1.5 times through the pure in-memory index optimization method. PMID:29049307
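
    To make the two main ingredients concrete, here is a deliberately small sketch: a toy content-defined chunker (standing in for the CDC algorithm named above, not DOMe's actual implementation) feeding a pure in-memory fingerprint index. The fork-join parallelism of the paper is omitted for clarity.

        import hashlib

        MASK = (1 << 12) - 1  # cut points every ~4 KiB on average

        def chunks(data, min_size=64):
            """Toy content-defined chunking: cut where a rolling value hits MASK."""
            start, rolling = 0, 0
            for i, byte in enumerate(data):
                rolling = (rolling * 31 + byte) & 0xFFFFFFFF
                if i - start >= min_size and (rolling & MASK) == MASK:
                    yield data[start:i + 1]
                    start = i + 1
            if start < len(data):
                yield data[start:]

        def deduplicate(backup, index):
            """Write only chunks whose fingerprint is new; return bytes written."""
            written = 0
            for chunk in chunks(backup):
                fp = hashlib.sha1(chunk).digest()  # chunk fingerprint
                if fp not in index:
                    index[fp] = True  # in-memory hash table; chunk stored elsewhere
                    written += len(chunk)
            return written

        index = {}
        data = bytes(range(256)) * 2000  # stand-in for an in-memory table backup
        print(deduplicate(data, index))  # first backup writes everything
        print(deduplicate(data, index))  # identical second backup writes 0

    Because cut points depend on content rather than fixed offsets, an insertion near the start of the next backup shifts only the chunks around the edit, so most fingerprints still hit the index.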

  13. Availability of software services for a hospital information system.

    PubMed

    Sakamoto, N

    1998-03-01

    Hospital information systems (HISs) are becoming more important and covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide the necessary services for hospital operations 24 h a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase the availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. Four main components support the availability of software services: network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if part of the system stops functioning. The network system should be double-protected in layers, using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information that does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patient information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.

  14. PRIMO: An Interactive Homology Modeling Pipeline

    PubMed Central

    Glenister, Michael

    2016-01-01

    The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO’s automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/. PMID:27855192

  15. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

    The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented in HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
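
    As a language-neutral illustration of the pattern (an embedded HTTP server exposing a registry of live objects), the sketch below serves a registry as JSON using only the Python standard library; THttpServer itself is C++ and additionally offers binary, XML and image formats. The registry content is invented.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Objects "registered" with the server, keyed by URL path.
        REGISTRY = {"hist/counts": {"bins": [0, 1, 2], "entries": [5, 9, 4]}}

        class ObjectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                obj = REGISTRY.get(self.path.strip("/"))
                if obj is None:
                    self.send_error(404, "no such object")
                    return
                body = json.dumps(obj).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # curl http://localhost:8080/hist/counts  ->  the JSON above
            HTTPServer(("localhost", 8080), ObjectHandler).serve_forever()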

  16. Validity and Reliability Study of the Self-Efficacy Scale in Rendering Piano Education to Children of 6-12 Years

    ERIC Educational Resources Information Center

    Ekinci, Hatice

    2014-01-01

    This study was conducted in order to develop a valid and reliable scale that can be used in measuring self-efficacy of candidate music teachers in rendering piano education to children of 6-12 years. To this end, a pool of 51 items was created by using the literature, and taking the opinions of piano professors and piano instructors working with…

  17. The Development of Will Perception Scale and Practice in a Psycho-Education Program with Its Validity and Reliability

    ERIC Educational Resources Information Center

    Yener, Özen

    2014-01-01

    In this research, we aim to develop a 5-point likert scale and use it in an experimental application by performing its validity and reliability in order to measure the will perception of teenagers and adults. With this aim, firstly the items have been taken either in the same or changed way from various scales and an item pool including 61 items…

  18. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer-to-Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.
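
    A minimal single-process sketch of the generic divide-and-conquer support described above: a problem is split until it is small enough for one volunteer node, solved, and the partial results are combined. The master/satellite distribution layer is abstracted into a local thread pool; the threshold and workload are illustrative.

        from concurrent.futures import ThreadPoolExecutor

        THRESHOLD = 1_000

        def solve(task):
            lo, hi = task
            if hi - lo <= THRESHOLD:               # small enough for one node
                return sum(i * i for i in range(lo, hi))
            mid = (lo + hi) // 2                   # divide
            with ThreadPoolExecutor(max_workers=2) as pool:
                left, right = pool.map(solve, [(lo, mid), (mid, hi)])
            return left + right                    # conquer/combine

        print(solve((0, 100_000)))  # sum of squares below 100000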

  19. Automated Gene Ontology annotation for anonymous sequence data.

    PubMed

    Hennig, Steffen; Groth, Detlef; Lehrach, Hans

    2003-07-01

    Gene Ontology (GO) is the most widely accepted attempt to construct a unified and structured vocabulary for the description of genes and their products in any organism. Annotation with GO terms is performed in most current genome projects, which besides generality has the advantage of being very convenient for computer-based classification methods. However, direct use of GO in small sequencing projects is not easy, especially for species not commonly represented in public databases. We present a software package (GOblet), which performs annotation based on GO terms for anonymous cDNA or protein sequences. It uses the species-independent GO structure and vocabulary, together with a series of protein databases collected from various sites, to perform a detailed GO annotation by sequence similarity searches. The sensitivity and the reference protein sets can be selected by the user. GOblet runs automatically and is available as a public service on our web server. The paper also addresses the reliability of automated GO annotations by using a reference set of more than 6000 human proteins. The GOblet server is accessible at http://goblet.molgen.mpg.de.
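
    The annotation-by-similarity step described above reduces to transferring the GO terms of sufficiently good database hits to the query. A minimal sketch with invented accessions and GO mappings, and an E-value cutoff standing in for GOblet's user-selectable sensitivity:

        # Hypothetical mapping from database accessions to GO terms.
        GO_MAP = {"P12345": {"GO:0006412"}, "Q67890": {"GO:0006412", "GO:0005737"}}

        def annotate(hits, go_map, e_cutoff=1e-5):
            """hits: list of (subject_id, e_value) sorted by significance."""
            terms = set()
            for subject, e_value in hits:
                if e_value > e_cutoff:
                    break                 # hits are sorted; the rest are worse
                terms |= go_map.get(subject, set())
            return terms

        blast_hits = [("P12345", 3e-40), ("Q67890", 2e-12), ("X00001", 0.5)]
        print(annotate(blast_hits, GO_MAP))  # {'GO:0006412', 'GO:0005737'}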

  20. Testnodes: a Lightweight Node-Testing Infrastructure

    NASA Astrophysics Data System (ADS)

    Fay, R.; Bland, J.

    2014-06-01

    A key aspect of ensuring optimum cluster reliability and productivity lies in keeping worker nodes in a healthy state. Testnodes is a lightweight node-testing solution developed at Liverpool. While Nagios has been used locally for general monitoring of hosts and services, Testnodes is optimised to answer one question: is there any reason this node should not be accepting jobs? This tight focus enables Testnodes to inspect nodes frequently with minimal impact and provide a comprehensive and easily extended check with each inspection. On the server side, Testnodes, implemented in Python, interoperates with the Torque batch server to control each node's production status. Testnodes remotely and in parallel executes client-side test scripts and processes the return codes and output, adjusting the node's online/offline status accordingly to preserve the integrity of the overall batch system. Testnodes reports via log, email and Nagios, allowing a quick overview of node status to be reviewed and specific node issues to be identified and resolved quickly. This presentation will cover the design and implementation of Testnodes, together with the results of its use in production at Liverpool, and future development plans.
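
    A sketch of the server-side loop under stated assumptions (not the Liverpool code): health scripts run over ssh in parallel, and the Torque pbsnodes command toggles a node's production status on a nonzero exit code. Host names and the script path are placeholders.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        NODES = ["node001", "node002", "node003"]
        CHECK = "/usr/local/bin/node-health-check"   # client-side test script

        def inspect(node):
            """Run the health check remotely; nonzero return code = unhealthy."""
            try:
                r = subprocess.run(["ssh", node, CHECK],
                                   capture_output=True, text=True, timeout=60)
                return node, r.returncode, r.stdout.strip()
            except subprocess.TimeoutExpired:
                return node, 1, "health check timed out"

        with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
            for node, rc, note in pool.map(inspect, NODES):
                if rc != 0:   # take the node out of production, with a reason
                    subprocess.run(["pbsnodes", "-o", "-N", note or "check failed", node])
                else:         # clear any previous offline mark
                    subprocess.run(["pbsnodes", "-c", node])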

  1. A Next-Generation Apparatus for Lithium Optical Lattice Experiments

    NASA Astrophysics Data System (ADS)

    Keshet, Aviv

    Quantum simulation is emerging as an ambitious and active subfield of atomic physics. This thesis describes progress towards the goal of simulating condensed matter systems, in particular the physics of the Fermi-Hubbard model, using ultracold Lithium atoms in an optical lattice. A major goal of the quantum simulation program is to observe phase transitions of the Hubbard model, into Néel antiferromagnetic phases and d-wave superfluid phases. Phase transitions are generally accompanied by a change in an underlying correlation in a physical system. Such correlations may be most amenable to probing by looking at fluctuations in the system. Experimental techniques for probing density and magnetization fluctuations in a variety of atomic Fermi systems are developed. The suppression of density fluctuations (or atom "shot noise") in an ideal degenerate Fermi gas is observed by absorption imaging of time-of-flight expanded clouds. In-trap measurements of density and magnetization fluctuations are not easy to probe with absorption imaging, due to their extremely high attenuation. A method to probe these fluctuations based on speckle patterns, caused by fluctuations in the index of refraction for a detuned illumination beam, is developed and applied first to weakly interacting and then to strongly interacting in-trap gases. Fluctuation probes such as these will be a crucial tool in future quantum simulation of condensed matter systems. The quantum simulation experiments that we want to perform require a complex sequence of precisely timed computer controlled events. A distributed GUI-based control system designed with such experiments in mind, the Cicero Word Generator, is described. The system makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature allows this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using an FPGA-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths. Experimental set-ups for producing, manipulating, and probing ultracold atomic gases can be quite complicated. To move forward with a quantum simulation program, it is necessary to have an apparatus that operates with a reliability that is not easily achieved in the face of this complexity. The design of a new apparatus is discussed. This Sodium-Lithium ultracold gas production machine has been engineered to incorporate as much experimental experience as possible to enhance its reliability. Particular attention has been paid to maximizing optical access and the utilization of this optical access, controlling the ambient temperature of the experiment, achieving a high vacuum, and simplifying subsystems where possible. The apparatus is now on the verge of producing degenerate gases, and should serve as a stable platform on which to perform future lattice quantum simulation experiments. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  2. CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients

    PubMed Central

    Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H

    1999-01-01

    Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only a few working EPR-systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields with patients who are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on "off-the-shelf" software and Internet technology in the field of organ transplantation. Methods The EPR-system consists of three main elements: data-storage facilities, a Web server and a user interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e.g. Word documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC-interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR-system was tested with 50 repeated measurements of the duration of two key functions: 1) display of all lab results for a given day and patient and 2) automatic composition of a letter containing diagnoses, medication, notes and lab results. For the test a 233 MHz Pentium II processor with 10 Mbit/s Ethernet connection (ping time below 10 ms) over 2 hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR-system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instruction to enable new users to perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds; the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR-system required less than two person-years. Conclusion Implementation of an electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness and affordability using a combination of "off-the-shelf" software products can be feasible, if the current state-of-the-art Internet technology is applied.

  3. A mobile monitoring system of blood pressure for underserved in China by information and communication technology service.

    PubMed

    Jiang, Jiehui; Yan, Zhuangzhi; Kandachar, Prabhu; Freudenthal, Adinda

    2010-05-01

    High blood pressure (BP, hypertension) is a leading chronic condition in China and has become the main risk factor for many high-risk diseases, such as heart attacks. However, a platform for chronic disease measurement and management is still lacking, especially for underserved Chinese. To achieve early diagnosis of hypertension, a BP monitoring system has been designed. The proposed design consists of three main parts: user domain, server domain, and channel domain. All three units and their materialization, validation tests on reliability, and usability are described in this paper; the conclusion is that the current design concept is feasible and the system can be developed toward sufficient reliability and affordability with further optimization. This idea might also be extended into a platform for other physiological signals, such as blood sugar and ECG.

  4. Characteristics and Energy Use of Volume Servers in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchs, H.; Shehabi, A.; Ganeshalingam, M.

    Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.

  5. Mobile Assisted Security in Wireless Sensor Networks

    DTIC Science & Technology

    2015-08-03

    server from Google's DNS, Chromecast and the content server does the 3-way TCP handshake, which is followed by Client Hello and Server Hello TLS messages...utilized TLS v1.2, except NTP servers and Google's DNS server. In TLS v1.2, after the handshake, client and server send Client Hello and Server Hello messages in order. In Client Hello messages, the client offers a list of Cipher Suites that it supports. Each Cipher Suite defines the key exchange algorithm

  6. Rapid qualification of CSP assemblies by increase of ramp rates and cycling temperature ranges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.; Kim, N.; Rose, D.; Hunter, B.; Devitt, K.; Long, T.

    2001-01-01

    Team members representing government agencies and private companies have joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects.

  7. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and ability of data sharing and software update. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.

  8. Pool-site fuel inspection and examination techniques applied by the Kraftwerk Union Aktiengesellschaft Fuel Service. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knaab, H.; Knecht, K.

    The need for pool-site inspection and examination of fuel assemblies was recognized by Kraftwerk Union Aktiengesellschaft with the commissioning of the first nuclear power stations. A wet sipping method has demonstrated high reliability in detection of leaking fuel assemblies. The visual inspection system is a versatile tool. It can be supplemented by attaching devices for oxide thickness measurement or surface replication. Repair of leaking pressurized water reactor fuel assemblies has improved fuel utilization. Applied methods and typical results are described.

  9. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. This new server's Back-End component will use the server infrastructure developed by HAO for use with the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run-time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New front-end modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4, which can make full use of advanced THREDDS features such as attribute specification and inheritance, and custom catalogs which segue into automatically generated catalogs, while providing a default behavior that requires almost no catalog configuration.

  10. An extensible and lightweight architecture for adaptive server applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2008-07-10

    Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality at which to scale an image based on the load of the server and the network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.

  11. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the responsibility for secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact and low-cost solution based on open-source technologies in the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface providing easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large, where the use of different hardware combinations allows scaling to the different deployments, while the use of paravirtualization allows increased security and easy migrations and upgrades.

  12. Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture

    NASA Astrophysics Data System (ADS)

    Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-08-01

    The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first-light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem, which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC comprises several Linux-based servers. These servers are assigned various roles, including the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The TED server is the RTC interface to TMT and other subsystems. It receives all external commands and dispatches them to the rest of the RTC servers, and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The PTS server contains fault-tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).

  13. Evidence that transcranial direct current stimulation (tDCS) generates little-to-no reliable neurophysiologic effect beyond MEP amplitude modulation in healthy human subjects: A systematic review.

    PubMed

    Horvath, Jared Cooney; Forte, Jason D; Carter, Olivia

    2015-01-01

    Transcranial direct current stimulation (tDCS) is a form of neuromodulation that is increasingly being utilized to examine and modify a number of cognitive and behavioral measures. The theoretical mechanisms by which tDCS generates these changes are predicated upon a rather large neurophysiological literature. However, a robust systematic review of this neurophysiological data has not yet been undertaken. tDCS data in healthy adults (18-50) from every neurophysiological outcome measure reported by at least two different research groups in the literature was collected. When possible, data was pooled and quantitatively analyzed to assess significance. When pooling was not possible, data was qualitatively compared to assess reliability. Of the 30 neurophysiological outcome measures reported by at least two different research groups, tDCS was found to have a reliable effect on only one: MEP amplitude. Interestingly, the magnitude of this effect has been significantly decreasing over the last 14 years. Our systematic review does not support the idea that tDCS has a reliable neurophysiological effect beyond MEP amplitude modulation - though important limitations of this review (and conclusion) are discussed. This work raises questions concerning the mechanistic foundations and general efficacy of this device - the implications of which extend to the steadily increasing tDCS psychological literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, the operators of small server rooms typically are not similarly motivated. These rooms are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures, such as raising cooling set points and better airflow management, to more involved but cost-effective measures, including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements, and IT and cooling efficiency, should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation and the implementation of energy efficiency measures in small server rooms.
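
    For reference, the PUE figures quoted above are simply total facility energy divided by IT equipment energy; a quick computation makes the 1.5-2.1 range concrete (numbers below are illustrative, not from the paper).

        def pue(total_facility_kw, it_equipment_kw):
            """Power Usage Effectiveness: 1.0 would mean zero overhead for
            cooling, power distribution, lighting, etc."""
            return total_facility_kw / it_equipment_kw

        print(pue(21.0, 10.0))  # 2.1: more than half the energy is overhead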

  15. Cybersecurity, massive data processing, community interaction, and other developments at WWW-based computational X-ray Server

    NASA Astrophysics Data System (ADS)

    Stepanov, Sergey

    2013-03-01

    X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data-fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.

  16. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
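
    The telephone-system blocking model referred to above is classically the Erlang-B formula, which gives the probability that a stream request arrives when all resources are busy and must be rejected. The sketch below (illustrative numbers, not the paper's) shows the economies of scale: at the same total capacity and load, four isolated partitions block far more often than one monolithic server image.

        def erlang_b(servers, erlangs):
            """Blocking probability via the numerically stable Erlang-B recurrence."""
            b = 1.0
            for n in range(1, servers + 1):
                b = erlangs * b / (n + erlangs * b)
            return b

        load, capacity = 950.0, 1000  # offered load (Erlangs) and stream slots
        monolithic = erlang_b(capacity, load)
        partitioned = erlang_b(capacity // 4, load / 4)  # 4 isolated partitions
        print(f"monolithic: {monolithic:.3g}  partitioned: {partitioned:.3g}")

    The gap between the two numbers is exactly the "cost of partitioning" the abstract quantifies, and it motivates the non-partitioned cluster architecture proposed there.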

  17. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods, or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have average per-target correlations of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å from the actual distances to the native structure. APOLLO is available at http://sysbio.rnet.missouri.edu/apollo/, where single and pair-wise global quality assessment software is also available.
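
    The pair-wise approach mentioned above can be stated in a few lines: score each model in the pool by its mean structural similarity to all the others, so that consensus models rank high and outliers rank low. In the sketch below the similarity function is a 1-D stand-in; a real implementation would compute something like GDT-TS or TM-score between superimposed structures.

        def pairwise_quality(models, similarity):
            """Return {model_id: mean similarity to every other model}."""
            scores = {}
            for name, coords in models.items():
                others = [similarity(coords, c)
                          for other, c in models.items() if other != name]
                scores[name] = sum(others) / len(others)
            return scores

        # Toy 1-D "structures"; similarity decays with mean coordinate distance.
        toy = {"m1": [1.0, 2.0, 3.0], "m2": [1.1, 2.1, 2.9], "m3": [5.0, 5.5, 9.0]}
        sim = lambda a, b: 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)) / len(a))
        print(pairwise_quality(toy, sim))  # m1 and m2 score high; outlier m3 low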

  18. Statistical model specification and power: recommendations on the use of test-qualified pooling in analysis of experimental data

    PubMed Central

    Colegrave, Nick

    2017-01-01

    A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912
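
    A small simulation in the spirit of the argument above (the design and thresholds are mine, not the paper's; requires numpy and scipy): a block term is dropped and pooled into error whenever it fails a preliminary F-test, and the conditional type I error of the subsequent treatment test is then tracked against the nominal 5% level.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        reps, hits, pooled = 5000, 0, 0

        for _ in range(reps):
            block = rng.normal(0.0, 1.0, size=4).repeat(5)  # 4 blocks x 5 obs
            y = block + rng.normal(0.0, 1.0, size=20)       # no treatment effect
            g = np.tile([0, 1], 10)                         # balanced dummy factor
            blocks = [y[i * 5:(i + 1) * 5] for i in range(4)]
            if stats.f_oneway(*blocks).pvalue < 0.25:       # block looks real:
                continue                                    # keep it, no pooling
            pooled += 1                                     # test-qualified pooling
            if stats.ttest_ind(y[g == 0], y[g == 1]).pvalue < 0.05:
                hits += 1

        print("pooled in", pooled, "of", reps, "runs;",
              "conditional type I error:", round(hits / pooled, 3))

    The printed rate drifts away from the nominal 0.05 precisely because the second test is only ever run conditional on the outcome of the first, which is the deviation the authors warn about.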

  19. Development and validation of oral health-related early childhood quality of life tool for North Indian preschool children.

    PubMed

    Mathur, Vijay Prakash; Dhillon, Jatinder Kaur; Logani, Ajay; Agarwal, Ramesh

    2014-01-01

    The purpose of this study was to develop a reliable instrument, the Oral Health-related Early Childhood Quality of Life (OH-ECQOL) scale, for measuring oral health-related quality of life (OHrQoL) in preschool children in a North Indian population. Four pediatric dentists evaluated a pool of 65 items from various QoL questionnaires to assess their relevance to the Indian population. These items were discussed with eight independent pediatric dentists and two community dentists, who were not a part of this study, to assess the relevance of these items to preschool-age children based on their comprehensiveness and clarity. Based on their responses and feedback, a modified pool of items was developed and administered to a convenience sample of 20 parents, who rated these items according to their relevance. Test-retest reliability was evaluated on another sample of 20 parents of 2-5-year-old children. The final questionnaire comprised 16 items (12 child and 4 family). This was administered to 300 parents of 24-71-month-old children, divided on the basis of early childhood caries, to assess its reliability and validity. OH-ECQOL scores were significantly associated with parental ratings of their child's general and oral health, and with the presence of dental disease in the child. Cronbach's alpha was 0.862, and the ICC for test-retest reliability was 0.94. OH-ECQOL proved a reliable and valid tool for assessing the impact of oral disorders on the quality of life of preschool children in Northern India.

  20. Thermal cycling test results of CSP and RF assemblies

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.; Nelson, G.; Cooper, M.; Lam, D.; Strudler, S.; Umdekar, A.; Selk, K.; Bjorndahl, B.; Duprey, R.

    2000-01-01

    A JPL-led chip scale package (CSP) consortium of enterprises, composed of representatives of government agencies and private companies, recently joined together to pool in-kind resources for developing the quality and reliability of CSPs for a variety of projects.

  1. Design and implementation of streaming media server cluster based on FFMpeg.

    PubMed

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.

  2. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    PubMed Central

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
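
    A toy version of the two mechanisms named in the abstract above, with invented server names and load figures: servers push load reports back to the dispatcher (the active feedback), and each request is redirected to the least-loaded server in the client's region.

        REGIONS = {
            "east": {"media-e1": 0.35, "media-e2": 0.80},  # server -> reported load
            "west": {"media-w1": 0.55},
        }

        def report_load(region, server, load):
            """Active feedback: a server pushes its current load (0..1)."""
            REGIONS[region][server] = load

        def redirect(region):
            """Service redirection: pick the least-loaded server near the user."""
            servers = REGIONS[region]
            return min(servers, key=servers.get)

        print(redirect("east"))              # media-e1
        report_load("east", "media-e1", 0.95)
        print(redirect("east"))              # media-e2 after fresh feedback

    A real dispatcher would also age out stale reports and weight servers by capacity, but the core of the scheme is this feedback-refreshed least-load selection.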

  3. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from servers to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may overcome these management needs; however, existing web caching strategies are not applicable because they operate under different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly true VoD, without depending on network resource guarantees.

  4. Magnetic Thin Films for Perpendicular Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya

    In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDD), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDD extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistances.

  5. Field test of classical symmetric encryption with continuous variables quantum key distribution.

    PubMed

    Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe

    2012-06-18

    We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.

  6. Remote multi-function fire alarm system based on internet of things

    NASA Astrophysics Data System (ADS)

    Wang, Lihui; Zhao, Shuai; Huang, Jianqing; Ji, Jianyu

    2018-05-01

    This project uses the MCU STC15W408AS (stable, energy-saving, high-speed), the DS18B20 temperature sensor (cheap, efficient, stable), the MQ2 resistance-type semiconductor smoke sensor (highly stable, fast-responding, economical) and the NRF24L01 wireless transmitting and receiving module (energy-saving, compact, reliable) as the main body to present concentration and temperature data, raise intelligent voice alarms, and provide short-distance wireless transmission. The whole system is safe, reliable, cheap, quick to react, and performs well. The project uses the MCU STM32F103RCT6 as the main control chip, and uses the ESP8266 WiFi module and the NRF24L01 wireless module to build the gateway. Users can remotely check and control the related devices in real time on smartphones or computers. The functions of intelligent fire monitoring, remote fire extinguishing, and cloud data storage are realized through the third-party server Big IOT.

  7. Data acquisition and PV module power production in upgraded TEP/AzRISE solar test yard

    NASA Astrophysics Data System (ADS)

    Bennett, Whit E.; Fishgold, Asher D.; Lai, Teh; Potter, Barrett G.; Simmons-Potter, Kelly

    2017-08-01

    The Tucson Electric Power (TEP)/University of Arizona AzRISE (Arizona Research Institute for Solar Energy) solar test yard is continuing efforts to improve standardization and data acquisition reliability throughout the facility. Data reliability is ensured through temperature-insensitive data acquisition devices with battery backups in the upgraded test yard. Software improvements allow for real-time analysis of collected data while uploading to a web server. Sample data illustrate high-fidelity monitoring of the burn-in period of a polycrystalline silicon photovoltaic module test string, with no data failures over 365 days of data collection. In addition to improved DAQ systems, precision temperature monitoring has been implemented so that PV module backside temperatures are routinely obtained. Weather station data acquired at the test yard provide local ambient temperature, humidity, wind speed, and irradiance measurements that have been utilized to characterize PV module performance over an extended test period.

  8. Security policies and trust in ubiquitous computing.

    PubMed

    Joshi, Anupam; Finin, Tim; Kagal, Lalana; Parker, Jim; Patwardhan, Anand

    2008-10-28

    Ubiquitous environments comprise resource-constrained mobile and wearable devices and computational elements embedded in everyday artefacts. These are connected to each other using both infrastructure-based as well as short-range ad hoc networks. Limited Internet connectivity limits the use of conventional security mechanisms such as public key infrastructures and other forms of server-centric authentication. Under these circumstances, peer-to-peer interactions are well suited for not just information interchange, but also managing security and privacy. However, practical solutions for protecting mobile devices, preserving privacy, evaluating trust and determining the reliability and accuracy of peer-provided data in such interactions are still in their infancy. Our research is directed towards providing stronger assurances of the reliability and trustworthiness of information and services, and the use of declarative policy-driven approaches to handle the open and dynamic nature of such systems. This paper provides an overview of some of the challenges and issues, and points out directions for progress.

  9. Avionics Reliability, Its Techniques and Related Disciplines.

    DTIC Science & Technology

    1979-10-01

    [Extraction-damaged fragment mixing the proceedings' discussion and table of contents; recoverable entries:] ... USAF F-16s. C.J.P. Haynes, UK: You said that if one of the 5 nations consumes more than its fair share of the combined spares pool then the item manager ... MANAGEMENT OF THE AVIONIC SYSTEM OF A MILITARY STRIKE AIRCRAFT by A.P. White and J.D. Pavier; SESSION IV - SOFTWARE RELIABILITY: INTRODUCTION TO ... ASPECT by D.J. Harris; SESSION V - AVIONICS LOGISTICS SUPPORT ASPECTS: INTEGRATED LOGISTICS SUPPORT ADDS ANOTHER DIMENSION TO MATRIX MANAGEMENT by ...

  10. Single cell transcriptomic analysis of prostate cancer cells.

    PubMed

    Welty, Christopher J; Coleman, Ilsa; Coleman, Roger; Lakely, Bryce; Xia, Jing; Chen, Shu; Gulati, Roman; Larson, Sandy R; Lange, Paul H; Montgomery, Bruce; Nelson, Peter S; Vessella, Robert L; Morrissey, Colm

    2013-02-16

    The ability to interrogate circulating tumor cells (CTC) and disseminated tumor cells (DTC) is restricted by the small number detected and isolated (typically <10). To determine if a commercially available technology could provide a transcriptomic profile of a single prostate cancer (PCa) cell, we clonally selected and cultured a single passage of cell-cycle-synchronized C4-2B PCa cells. Ten sets of single cells, 5-cell pools, or 10-cell pools were isolated using a micromanipulator under direct visualization with an inverted microscope. Additionally, two groups of 10 individual DTC, each isolated from the bone marrow of 2 patients with metastatic PCa, were obtained. RNA was amplified using the WT-Ovation™ One-Direct Amplification System. The amplified material was hybridized on a 44K Whole Human Gene Expression Microarray. A high-stringency threshold, a mean Alexa Fluor® 3 signal intensity above 300, was used for gene detection. Relative expression levels were validated for select genes using real-time PCR (RT-qPCR). Using this approach, 22,410, 20,423, and 17,009 probes were positive on the arrays from 10-cell pools, 5-cell pools, and single cells, respectively. The sensitivity and specificity of gene detection in the single-cell analyses were 0.739 and 0.972, respectively, when compared to 10-cell pools, and 0.814 and 0.979, respectively, when compared to 5-cell pools, demonstrating a low false positive rate. Among 10,000 randomly selected pairs of genes, the Pearson correlation coefficient was 0.875 between the single-cell and 5-cell pools and 0.783 between the single-cell and 10-cell pools. As expected, abundant transcripts in the 5- and 10-cell samples were detected by RT-qPCR in the single-cell isolates, while lower-abundance messages were not. Using the same stringency, 16,039 probes were positive on the patient single-cell arrays. Cluster analysis showed that all 10 DTC grouped together within each patient. A transcriptomic profile can be reliably obtained from a single cell using commercially available technology. As expected, fewer amplified genes are detected from a single-cell sample than from pooled-cell samples; however, this method can be used to reliably obtain a transcriptomic profile from DTC isolated from the bone marrow of patients with PCa.
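
    As an illustration of the concordance computation implied above, here is a sketch that scores single-cell probe calls against pooled-cell calls taken as the reference; the boolean arrays are simulated, not the study's data.

        # Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), with the
        # pooled-cell calls treated as the reference standard.
        import numpy as np

        def sens_spec(calls, reference):
            calls = np.asarray(calls, bool)
            reference = np.asarray(reference, bool)
            tp = np.sum(calls & reference)
            tn = np.sum(~calls & ~reference)
            return tp / np.sum(reference), tn / np.sum(~reference)

        rng = np.random.default_rng(2)
        reference = rng.random(44_000) < 0.5                  # probes positive in the pool
        detected = reference & (rng.random(44_000) < 0.8)     # single cell misses some
        false_pos = ~reference & (rng.random(44_000) < 0.02)  # and adds a few spurious calls
        print(sens_spec(detected | false_pos, reference))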

  11. San Mateo County's Server Information Program (S.I.P.): A Community-Based Alcohol Server Training Program.

    ERIC Educational Resources Information Center

    de Miranda, John

    The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…

  12. The Road to Pathways

    ERIC Educational Resources Information Center

    Cooper, Sandi

    2013-01-01

    Frequently drawn from elite corporate backgrounds and, for public institutions, from pools of politically connected people, trustees usually have to be educated about shared governance. Corporate or political ties are scarcely reliable indices of the wisdom necessary to oversee institutions of higher learning, and some trustees, sadly, prove…

  13. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
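
    A minimal sketch of the tally-fitting idea, assuming SciPy >= 1.4 for scipy.stats.betabinom; the ensemble size, vote tallies, and overall error rate below are invented for illustration, not taken from the paper.

        # Fit beta-binomial distributions to ensemble vote tallies and use
        # them (via Bayes' rule) to estimate the error probability of a
        # prediction that received k positive votes out of n networks.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import betabinom

        def fit_betabinom(tallies, n):
            """Maximum-likelihood fit of beta-binomial (a, b) to vote tallies."""
            def nll(params):
                a, b = np.exp(params)  # exponentiate to keep a, b positive
                return -betabinom.logpmf(tallies, n, a, b).sum()
            res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
            return np.exp(res.x)

        def error_probability(k, n, pred_ab, err_ab, overall_error_rate):
            """P(error | k votes) = P(error) * P(k | error) / P(k)."""
            p_k_given_err = betabinom.pmf(k, n, *err_ab)
            p_k = betabinom.pmf(k, n, *pred_ab)
            return overall_error_rate * p_k_given_err / p_k

        n = 10                                       # networks in the ensemble
        tallies = np.array([0, 0, 1, 2, 8, 9, 10, 10, 10, 1, 0, 10])
        err_tallies = np.array([4, 5, 6, 5])         # tallies of misclassified cases
        pred_ab = fit_betabinom(tallies, n)
        err_ab = fit_betabinom(err_tallies, n)
        print(error_probability(5, n, pred_ab, err_ab, overall_error_rate=0.1))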

  14. Ultrasonography in diagnosing clinically occult groin hernia: systematic review and meta-analysis.

    PubMed

    Kwee, Robert M; Kwee, Thomas C

    2018-05-14

    To provide an updated systematic review on the performance of ultrasonography (US) in diagnosing clinically occult groin hernia. A systematic search was performed in MEDLINE and Embase. Methodological quality of included studies was assessed. Accuracy data of US in detecting clinically occult groin hernia were extracted. Positive predictive value (PPV) was pooled with a random effects model. For studies investigating the performance of US in hernia type classification (inguinal vs femoral), correctly classified proportion was assessed. Sixteen studies were included. In the two studies without verification bias, sensitivities were 29.4% [95% confidence interval (CI), 15.1-47.5%] and 90.9% (95% CI, 70.8-98.9%); specificities were 90.0% (95% CI, 80.5-95.9%) and 90.6% (95% CI, 83.0-95.6%). Verification bias or a variation of it (i.e. study limited to only subjects with definitive proof of disease status) was present in all other studies. Sensitivity, specificity, and negative predictive value (NPV) were not pooled. PPV ranged from 58.8 to 100%. Pooled PPV, based on data from ten studies with low risk of bias and no applicability concerns with respect to patient selection, was 85.6% (95% CI, 76.5-92.7%). Proportion of correctly classified hernias, based on data from four studies, ranged between 94.4% and 99.1%. Sensitivity, specificity and NPV of US in detecting clinically occult groin hernia cannot reliably be determined based on current evidence. Further studies are necessary. Accuracy may strongly depend on the examiner's skills. PPV is high. Inguinal and femoral hernias can reliably be differentiated by US. • Sensitivity, specificity and NPV of ultrasound in detecting clinically occult groin hernia cannot reliably be determined based on current evidence. • Accuracy may strongly depend on the examiner's skills. • PPV of US in detection of clinically occult groin hernia is high [pooled PPV of 85.6% (95% confidence interval, 76.5-92.7%)]. • US has very high performance in correctly differentiating between clinically occult inguinal and femoral hernia (correctness of 94.4-99.1%).
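
    Since the abstract names a random-effects pooling of PPV, here is a hedged sketch of one standard way to do it (DerSimonian-Laird on the logit scale); the study counts are made up, and a real analysis would add continuity corrections for studies with 0% or 100% PPV.

        # Random-effects (DerSimonian-Laird) pooling of proportions on the
        # logit scale, then back-transformation to a pooled PPV with 95% CI.
        import numpy as np

        def pool_proportions(events, totals):
            events = np.asarray(events, float)
            totals = np.asarray(totals, float)
            p = events / totals
            y = np.log(p / (1 - p))                 # logit-transformed PPVs
            v = 1 / events + 1 / (totals - events)  # approximate logit variances
            w = 1 / v                               # fixed-effect weights
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q heterogeneity
            k = len(y)
            tau2 = max(0.0, (q - (k - 1)) /
                       (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_star = 1 / (v + tau2)                 # random-effects weights
            y_pooled = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1 / np.sum(w_star))
            expit = lambda x: 1 / (1 + np.exp(-x))
            return (expit(y_pooled),
                    (expit(y_pooled - 1.96 * se), expit(y_pooled + 1.96 * se)))

        # Made-up example: true positives / total US-positive cases per study.
        print(pool_proportions([40, 25, 60], [45, 30, 75]))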

  15. Analysis of practical backoff protocols for contention resolution with multiple servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; MacKenzie, P.D.

    Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. Our result is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
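
    A minimal simulation sketch of superlinear polynomial backoff on one multiple-access channel, under the paper's Bernoulli-arrival model; the client count, arrival rate, and exponent are illustrative, and the paper's result is analytic rather than simulated.

        # Slotted channel: exactly one ready sender succeeds; a collision makes
        # every ready sender back off for a random wait polynomial in its
        # consecutive-failure count.
        import random

        def simulate(num_clients=10, rate=0.05, exponent=2.0, steps=100_000):
            backoff_until = [0] * num_clients   # next step each client may transmit
            pending = [False] * num_clients     # does the client hold a request?
            collisions = [0] * num_clients      # consecutive failures per client
            delivered = 0
            for t in range(steps):
                for i in range(num_clients):    # Bernoulli arrivals
                    if not pending[i] and random.random() < rate:
                        pending[i] = True
                ready = [i for i in range(num_clients)
                         if pending[i] and backoff_until[i] <= t]
                if len(ready) == 1:             # exactly one sender: success
                    i = ready[0]
                    pending[i], collisions[i] = False, 0
                    delivered += 1
                elif len(ready) > 1:            # collision: polynomial backoff
                    for i in ready:
                        collisions[i] += 1
                        wait = random.randint(1, int((collisions[i] + 1) ** exponent))
                        backoff_until[i] = t + wait
            return delivered / steps            # achieved throughput

        print(simulate())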

  16. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules. External modules are various input or output devices and applications on remote hosts. From a system point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. While the Device Server contains external modules requiring event-based communication, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.

  17. Design details of Intelligent Instruments for PLC-free Cryogenic measurements, control and data acquisition

    NASA Astrophysics Data System (ADS)

    Antony, Joby; Mathuria, D. S.; Chaudhary, Anup; Datta, T. S.; Maity, T.

    2017-02-01

    Cryogenic networks for linear accelerator operations demand a large number of cryogenic sensors, associated instruments and other control instrumentation to measure, monitor and control different cryogenic parameters remotely. Here we describe an alternative approach: six types of newly designed, integrated, intelligent cryogenic instruments called device servers, each combining the complete sensor-specific analog front-end instrumentation with a common digital HTTP-server back-end to give a crateless, PLC-free model of controls and data acquisition. These sensor-specific instruments, viz. the LHe server, LN2 server, control-output server, pressure server, vacuum server and temperature server, are fully deployed over the LAN for the cryogenic operations of the IUAC linac (Inter University Accelerator Centre linear accelerator), New Delhi. This indigenous design offers salient features such as global connectivity, low cost due to the crateless model, easy signal processing due to the integrated design, less cabling, and device interconnectivity.

  18. Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure

    DOEpatents

    Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.

    2008-02-12

    A method for maintaining full performance of a file system in the presence of a failure is provided. The file system has N storage devices, where N is an integer greater than zero, and N primary file servers, each operatively connected to a corresponding storage device for accessing files therein. The file system further has a secondary file server operatively connected to at least one of the N storage devices. The method includes: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary, so as to prevent a loss in performance and to provide each storage device with an operating file server.
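
    One plausible instantiation of the reassignment idea (a sketch, not the patented method itself): on a primary-server failure, device-server bindings shift toward the spare so that every device keeps an operating server.

        def reassign(num_devices, failed_server):
            """Return {device: server} after primary `failed_server` dies.

            Servers 0..N-1 are primaries, server N is the secondary. Devices
            at indices >= failed_server shift to the next server, so no device
            is left attached to the failed primary.
            """
            mapping = {}
            for dev in range(num_devices):
                if dev < failed_server:
                    mapping[dev] = dev        # untouched bindings
                else:
                    mapping[dev] = dev + 1    # cascade toward the secondary
            return mapping

        # Example: 4 devices, primary 1 fails; its device moves to primary 2,
        # cascading until device 3 lands on the secondary (server 4).
        print(reassign(4, 1))   # {0: 0, 1: 2, 2: 3, 3: 4}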

  19. Experimental parametric study of servers cooling management in data centers buildings

    NASA Astrophysics Data System (ADS)

    Nada, S. A.; Elfeky, K. E.; Attia, Ali M. A.; Alshaer, W. G.

    2017-06-01

    A parametric study of air flow and cooling management of data center servers is experimentally conducted for different design conditions. A physical scale model of a data center accommodating one rack of four servers was designed and constructed for testing purposes. Front and rear rack and server temperature distributions and supply/return heat indices (SHI/RHI) are used to evaluate data center thermal performance. Experiments were conducted to parametrically study the effects of perforated-tile opening ratio, server power load variation and rack power density. The results showed that (1) a perforated tile of 25% opening ratio provides the best results among the tested opening ratios, (2) the optimum benefit of cold air in server cooling is obtained at uniform power loading of the servers, and (3) increasing power density decreases air re-circulation but increases air bypass and server temperatures. The present results are compared with previous experimental and CFD results, and fair agreement was found.

  20. Articles of Data Confederation: DataONE, the KNB, and a multitude of Metacats- scaling interoperable data discovery and preservation from the lab to the Internet

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.; Jones, M. B.; Jones, C. S.; Tao, J.

    2017-12-01

    The opportunities for synthesis science to advance understanding of the environment have never been greater. Challenges remain, however, with regard to preserving data in discoverable and re-usable formats, to inform new integrative analyses and support reproducible science. In this talk I will describe one promising solution for data preservation, discovery, and re-use: the Knowledge Network for Biocomplexity, or KNB. The KNB (http://knb.ecoinformatics.org) has been providing a reliable data repository for ecological and environmental researchers for over 15 years. The KNB is a distributed, open-source, web-enabled data repository based upon a formal metadata standard, EML, that is endorsed by several major ecological institutions including the LTER Network and NCEAS. A KNB server, also called a "Metacat", can be set up on very modest hardware, typically within a few hours, requires no expensive or proprietary software, and needs only moderate systems administration expertise. A tiered architecture allows KNB servers (or "Metacats") to communicate with other KNB servers, affording greater operational reliability, higher performance, and reductions in potential data loss. The KNB is a strong member of the DataONE "Data Observation Network for Earth" (http://dataone.org) system, which confederates over 35 significant earth science data repositories (and still growing) from around the world through an open and powerful API. DataONE provides integrated search over member repository holdings, incorporating features based on W3C-compliant semantics through annotations with OWL/RDF vocabularies such as PROV and the Environment Ontology, ENVO. The KNB and DataONE frameworks have given rise to an Open Science software development community that is actively building tools based on software that scientists already use, such as MATLAB and R. These tools can be used both to contribute data to, and to operate upon data within, the KNB and DataONE systems. An active User Community within DataONE assists with prioritizing future features of the framework, and provides peer-to-peer assistance through chat rooms and email lists. The challenge of achieving long-term sustainable funding for both the KNB and DataONE is still being addressed, and may stimulate discussion towards the end of my talk, time permitting.

  1. Experience with Adaptive Security Policies.

    DTIC Science & Technology

    1998-03-01

    [Extraction-damaged fragment mixing the report's table of contents and body text; recoverable content:] 3.1 Introduction; 3.2 Logical groupings of audited permission checks; 3.3 Auditing of system servers via microkernel snooping ... auditing performed by servers other than the microkernel. Since altering each server to audit events would complicate the integration of new servers, a modification to the microkernel was implemented to allow the microkernel to audit the requests made of other servers. Both methods for enhancing audit ...

  2. Opportunities for the Mashup of Heterogenous Data Server via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna

    2015-04-01

    The European Union ESPAS, the Japanese IUGONET and the GFZ ISDC data servers were developed for the ingestion, archiving and distribution of geoscience and space science domain data. Major parts of the data managed by these servers relate to near-Earth space and geomagnetic field measurements. A smart mashup of the data servers would allow seamless browsing of and access to data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the semantic Web approach. Besides the mashup of domain and terminological ontologies, the options for connecting data managed by relational databases using D2R Server and SPARQL technology are addressed. A successful realization of the data-server mashup will have a positive impact not only on the data users of the specific scientific domains but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
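
    To make the D2R/SPARQL option concrete, here is a minimal sketch of querying a D2R-published endpoint; the endpoint URL and the dcterms vocabulary choice are illustrative assumptions, not the projects' actual schemas. It requires the SPARQLWrapper package.

        # Query a (hypothetical) D2R SPARQL endpoint and print dataset titles.
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("http://example.org/d2r/sparql")  # placeholder
        endpoint.setQuery("""
            PREFIX dcterms: <http://purl.org/dc/terms/>
            SELECT ?dataset ?title WHERE {
                ?dataset dcterms:title ?title .
            } LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)
        results = endpoint.query().convert()

        # Each binding row maps variable names to typed values.
        for row in results["results"]["bindings"]:
            print(row["dataset"]["value"], "-", row["title"]["value"])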

  3. Triple-server blind quantum computation using entanglement swapping

    NASA Astrophysics Data System (ADS)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since it requires no quantum computational power, no quantum memory, and no ability to prepare quantum states, and only needs to be capable of accessing quantum channels.

  4. 48 CFR 1523.7002 - Waivers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...

  5. 48 CFR 1523.7002 - Waivers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...

  6. 48 CFR 1523.7002 - Waivers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...

  7. Real-Time and Secure Wireless Health Monitoring

    PubMed Central

    Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.

    2008-01-01

    We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body, which runs a number of wired and wireless probes. This device is also designed to perform some basic processing, such as heart rate computation and fatal-failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted continuously by the mobile device. The raw data are also stored at this server. The processed data as well as the analysis results are then transmitted to the service provider center for diagnostic review as well as storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over the BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866

  8. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, or style issues. In combination with an automatic documentation generator, it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available for scientific communities. One regular customer is already the developer group of the DiFX software correlator project.

  9. KernPaeP - a web-based pediatric palliative documentation system for home care.

    PubMed

    Hartz, Tobias; Verst, Hendrik; Ueckert, Frank

    2009-01-01

    KernPaeP is a new web-based online and offline documentation system developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system that makes fast and secure home care documentation possible. KernPaeP is accessible online by registered users using any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented to meet the highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improves the quality of documentation. Due to development in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home care documentation. The technique of web-based online and offline documentation is in general applicable to arbitrary home care scenarios.

  10. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
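
    A hedged sketch of the configuration-driven idea: an XML file describes each field's name and character span, and the parser applies that layout to fixed-width ASCII records. The layout format shown is invented for illustration; JSDTImport's actual XML schema is not given in this record.

        # Parse fixed-width records according to an XML-declared field layout.
        import xml.etree.ElementTree as ET

        LAYOUT_XML = """
        <layout>
          <field name="measurement_id" start="0" end="8"/>
          <field name="value"          start="8" end="16"/>
          <field name="units"          start="16" end="20"/>
        </layout>
        """

        def load_layout(xml_text):
            root = ET.fromstring(xml_text)
            return [(f.get("name"), int(f.get("start")), int(f.get("end")))
                    for f in root.findall("field")]

        def parse_record(line, layout):
            return {name: line[start:end].strip() for name, start, end in layout}

        layout = load_layout(LAYOUT_XML)
        print(parse_record("MEAS0001  3.1415 PSI ", layout))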

  11. Instantaneous network RTK in Orange County, California

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2003-04-01

    The Orange County Real Time GPS Network (OCRTN) is an upgrade of a sub-network of SCIGN sites in southern California to low-latency (1-2 s), high-rate (1 Hz) data streaming, analysis, and dissemination. The project is a collaborative effort of the California Spatial Reference Center (CSRC) and the Orange County Public Resource and Facilities Division, with partners from the geophysical community, local and state government, and the private sector. Currently, ten sites are streaming 1 Hz raw data (Ashtech binary MBEN format) by means of dedicated, point-to-point radio modems to a network hub that translates the asynchronous serial data to TCP/IP and onto a PC workstation residing on a local area network. Software residing on the PC allows multiple clients to access the raw data simultaneously through TCP/IP. One of the clients is a Geodetics RTD server that receives and archives (1) the raw 1 Hz network data, (2) estimates of instantaneous positions and zenith tropospheric delays for quality control and detection of ground motion, and (3) RINEX data decimated to 30 seconds. Data recovery is typically 99-100%. The server also produces 1 Hz RTCM data (messages 18, 19, 3 and 22) that are available by means of TCP/IP to RTK clients with wireless Internet modems. Coverage is excellent throughout the county. The server supports standard RTK users and is compatible with existing GPS instrumentation. Typical latency is 1-2 s, with initialization times of several seconds to minutes. OCRTN site spacing is 10-15 km. In addition, the server supports “smart clients” that can retrieve data from the closest n sites (typically 3) and obtain an instantaneous network RTK position with 1-2 s latency. This mode currently requires a PDA running the RTD client software and a wireless card. Since no initialization or re-initialization is required, this approach is well suited to support high-precision (centimeter-level) dynamic applications such as intelligent transportation and aircraft landing. We will discuss the results of field tests of this system, indicating that instantaneous network RTK can be performed accurately and reliably. If an Internet connection is available, we will present a real-time demonstration.
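
    As a toy illustration of the smart-client step, here is a sketch that picks the n closest reference sites to a rover position before requesting their streams; the site coordinates are invented, and the real RTD client logic is not described in this abstract.

        # Rank reference sites by great-circle distance and keep the closest n.
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = (sin(dlat / 2) ** 2 +
                 cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
            return 2 * 6371.0 * asin(sqrt(a))   # mean Earth radius in km

        def closest_sites(rover, sites, n=3):
            return sorted(sites, key=lambda s: haversine_km(rover[0], rover[1],
                                                            s[1], s[2]))[:n]

        sites = [("SITE_A", 33.70, -117.80), ("SITE_B", 33.64, -117.74),
                 ("SITE_C", 33.55, -117.78), ("SITE_D", 33.61, -117.92)]
        print(closest_sites((33.62, -117.80), sites))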

  12. Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.

    2017-12-01

    THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and of automatic configuration management and deployment, this service often suffers downtime and time-consuming configuration tasks, especially under the intensive use that is usual within the scientific community (e.g., climate). Instead of the typical installation and configuration of a single THREDDS server, or of multiple independent, manually configured ones, this work presents automatic provisioning, deployment and orchestration of a cluster of THREDDS servers. The solution is based on Ansible playbooks, used to automatically control the deployment and configuration of the infrastructure and to manage the datasets available in the THREDDS instances. The playbooks are built from modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of instances of the THREDDS server, as sketched below. This implementation makes it possible to configure different infrastructure and deployment scenarios, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any of the workers fails, another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a single THREDDS worker. In conclusion, the proposed configuration is a significant improvement over configurations based on non-collaborative THREDDS instances.
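
    As a toy illustration of the frontend's role (not the actual Ansible and load-balancer stack used in this work), here is a sketch of round-robin delegation across THREDDS backend workers with a simple health probe; the worker URLs are placeholders.

        # Cycle through backend workers, skipping any that fail a catalog probe.
        import itertools
        import urllib.request

        WORKERS = ["http://thredds-worker1:8080", "http://thredds-worker2:8080",
                   "http://thredds-worker3:8080"]          # placeholder backends
        _rr = itertools.cycle(WORKERS)

        def healthy(base_url, timeout=2):
            try:  # any 2xx response from the standard catalog path counts as alive
                with urllib.request.urlopen(base_url + "/thredds/catalog.html",
                                            timeout=timeout) as resp:
                    return 200 <= resp.status < 300
            except OSError:
                return False

        def next_worker():
            """Return the next healthy backend, trying each worker once."""
            for _ in range(len(WORKERS)):
                candidate = next(_rr)
                if healthy(candidate):
                    return candidate
            raise RuntimeError("no healthy THREDDS workers available")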

  13. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
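
    The locking pattern can be illustrated with a minimal sketch; note that GFS maintains its locks on the storage devices themselves, whereas this stand-in uses an ordinary Unix advisory file lock purely to show a lock-protected atomic read-modify-write.

        # Unix-only sketch: serialize a read-modify-write on a counter file.
        # The file must already exist; an empty file is treated as zero.
        import fcntl

        def atomic_increment(path):
            with open(path, "r+") as f:
                fcntl.flock(f, fcntl.LOCK_EX)      # acquire exclusive lock
                try:
                    value = int(f.read().strip() or 0)   # read
                    f.seek(0)
                    f.write(str(value + 1))              # modify and write back
                    f.truncate()
                finally:
                    fcntl.flock(f, fcntl.LOCK_UN)  # release
            return value + 1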

  14. An Optimization of the Basic School Military Occupational Skill Assignment Process

    DTIC Science & Technology

    2003-06-01

    Corps Intranet (NMCI)23 supports it. We evaluated the use of Microsoft’s SQL Server, but dismissed this after learning that TBS did not possess a SQL ...Server license or a qualified SQL Server administrator.24 SQL Server would have provided for additional security measures not available in MS...administrator. Although not has powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system.25 The training

  15. General bulk service queueing system with N-policy, multiple vacations, setup time and server breakdown without interruption

    NASA Astrophysics Data System (ADS)

    Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    In this paper, we have considered an MX/(a,b)/1 queueing system with server breakdown without interruption, multiple vacations, setup times and N-policy. After a batch of service, if the queue length is ξ (< a), then the server immediately takes a vacation. Upon return from a vacation, if the queue length is less than N, the server takes another vacation. This process continues until the server finds at least N customers in the queue. After a vacation, if the server finds at least N customers waiting for service, the server needs a setup time to start the service. After a batch of service, if the number of waiting customers in the queue is ξ (≥ a), the server serves a batch of min(ξ, b) customers, where b ≥ a; a minimal sketch of this decision rule follows. We derived the probability generating function of the queue length at an arbitrary time epoch. Further, we obtained some important performance measures.
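
    The sketch below encodes that decision rule, with the queue length xi and the thresholds a, b, N as plain parameters; it is illustrative only, since the paper's analysis is analytic, via probability generating functions.

        def next_action(xi, a, b, N, returning_from_vacation):
            """Return what the server does next under the stated policy."""
            if returning_from_vacation:
                if xi >= N:
                    return "setup, then serve a batch of min(xi, b) customers"
                return "take another vacation"
            # Decision taken right after completing a batch of service:
            if xi < a:
                return "take a vacation"
            return f"serve a batch of {min(xi, b)} customers"

        print(next_action(xi=3, a=5, b=10, N=8, returning_from_vacation=False))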

  16. Secure entanglement distillation for double-server blind quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-07-12

    Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.

  17. Assessment of the psychometrics of a PROMIS item bank: self-efficacy for managing daily activities

    PubMed Central

    Hong, Ickpyo; Li, Chih-Ying; Romero, Sergio; Gruber-Baldini, Ann L.; Shulman, Lisa M.

    2017-01-01

    Purpose The aim of this study is to investigate the psychometrics of the Patient-Reported Outcomes Measurement Information System self-efficacy for managing daily activities item bank. Methods The item pool was field tested on a sample of 1087 participants via internet (n = 250) and in-clinic (n = 837) surveys. All participants reported having at least one chronic health condition. The 35 item pool was investigated for dimensionality (confirmatory factor analyses, CFA and exploratory factor analysis, EFA), item-total correlations, local independence, precision, and differential item functioning (DIF) across gender, race, ethnicity, age groups, data collection modes, and neurological chronic conditions (McFadden Pseudo R2 less than 10 %). Results The item pool met two of the four CFA fit criteria (CFI = 0.952 and SRMR = 0.07). EFA analysis found a dominant first factor (eigenvalue = 24.34) and the ratio of first to second eigenvalue was 12.4. The item pool demonstrated good item-total correlations (0.59–0.85) and acceptable internal consistency (Cronbach’s alpha = 0.97). The item pool maintained its precision (reliability over 0.90) across a wide range of theta (3.70), and there was no significant DIF. Conclusion The findings indicated the item pool has sound psychometric properties and the test items are eligible for development of computerized adaptive testing and short forms. PMID:27048495

  18. Assessment of the psychometrics of a PROMIS item bank: self-efficacy for managing daily activities.

    PubMed

    Hong, Ickpyo; Velozo, Craig A; Li, Chih-Ying; Romero, Sergio; Gruber-Baldini, Ann L; Shulman, Lisa M

    2016-09-01

    The aim of this study is to investigate the psychometrics of the Patient-Reported Outcomes Measurement Information System self-efficacy for managing daily activities item bank. The item pool was field tested on a sample of 1087 participants via internet (n = 250) and in-clinic (n = 837) surveys. All participants reported having at least one chronic health condition. The 35-item pool was investigated for dimensionality (confirmatory factor analysis, CFA, and exploratory factor analysis, EFA), item-total correlations, local independence, precision, and differential item functioning (DIF) across gender, race, ethnicity, age groups, data collection modes, and neurological chronic conditions (McFadden pseudo-R2 less than 10%). The item pool met two of the four CFA fit criteria (CFI = 0.952 and SRMR = 0.07). EFA analysis found a dominant first factor (eigenvalue = 24.34), and the ratio of the first to second eigenvalue was 12.4. The item pool demonstrated good item-total correlations (0.59-0.85) and acceptable internal consistency (Cronbach's alpha = 0.97). The item pool maintained its precision (reliability over 0.90) across a wide range of theta (3.70), and there was no significant DIF. The findings indicated the item pool has sound psychometric properties and the test items are eligible for development of computerized adaptive testing and short forms.
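
    Purely as an illustration of one reported statistic, here is a sketch computing Cronbach's alpha from a respondents-by-items score matrix; the simulated data are made up.

        # Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array, rows = respondents, columns = items."""
            scores = np.asarray(scores, float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = scores.sum(axis=1).var(ddof=1)     # variance of total score
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        base = rng.normal(size=(100, 1))                   # shared trait
        items = base + 0.5 * rng.normal(size=(100, 5))     # 5 correlated items
        print(round(cronbach_alpha(items), 2))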

  19. Parallel tagged next-generation sequencing on pooled samples - a new approach for population genetics in ecology and conservation.

    PubMed

    Zavodna, Monika; Grueber, Catherine E; Gemmell, Neil J

    2013-01-01

    Next-generation sequencing (NGS) on pooled samples has already been broadly applied in human medical diagnostics and plant and animal breeding. However, thus far it has been only sparingly employed in ecology and conservation, where it may serve as a useful diagnostic tool for rapid assessment of species genetic diversity and structure at the population level. Here we undertake a comprehensive evaluation of the accuracy, practicality and limitations of parallel tagged amplicon NGS on pooled population samples for estimating species population diversity and structure. We obtained 16S and Cyt b data from 20 populations of Leiopelma hochstetteri, a frog species of conservation concern in New Zealand, using two approaches - parallel tagged NGS on pooled population samples and individual Sanger sequenced samples. Data from each approach were then used to estimate two standard population genetic parameters, nucleotide diversity (π) and population differentiation (FST), that enable population genetic inference in a species conservation context. We found a positive correlation between our two approaches for population genetic estimates, showing that the pooled population NGS approach is a reliable, rapid and appropriate method for population genetic inference in an ecological and conservation context. Our experimental design also allowed us to identify both the strengths and weaknesses of the pooled population NGS approach and outline some guidelines and suggestions that might be considered when planning future projects.
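
    A hedged sketch of how per-site nucleotide diversity (pi) might be estimated from pooled allele frequencies; the estimator, frequencies, and pool size are illustrative assumptions, since the paper's actual pipeline is not spelled out in this abstract.

        # Bias-corrected per-site heterozygosity, averaged over sites.
        import numpy as np

        def nucleotide_diversity(alt_freqs, n_chromosomes):
            """alt_freqs: alternate-allele frequency per site;
            n_chromosomes: chromosomes in the pool (2 x individuals)."""
            p = np.asarray(alt_freqs, float)
            het = 2 * p * (1 - p) * n_chromosomes / (n_chromosomes - 1)
            return het.mean()

        print(nucleotide_diversity([0.1, 0.5, 0.0, 0.25], n_chromosomes=40))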

  20. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally, but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.

  1. Honey Bee Colonies Remote Monitoring System.

    PubMed

    Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús

    2016-12-29

    Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, the honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' work conditions and to obtain more reliable data. We designed a wireless sensor network to meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data in each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time.

  2. WEBnm@ v2.0: Web server and services for comparing protein flexibility.

    PubMed

    Tiwari, Sandhya P; Fuglebakk, Edvin; Hollup, Siv M; Skjærven, Lars; Cragnolini, Tristan; Grindhaug, Svenn H; Tekle, Kidane M; Reuter, Nathalie

    2014-12-30

    Normal mode analysis (NMA) using elastic network models is a reliable and cost-effective computational method to characterise protein flexibility and by extension, their dynamics. Further insight into the dynamics-function relationship can be gained by comparing protein motions between protein homologs and functional classifications. This can be achieved by comparing normal modes obtained from sets of evolutionary related proteins. We have developed an automated tool for comparative NMA of a set of pre-aligned protein structures. The user can submit a sequence alignment in the FASTA format and the corresponding coordinate files in the Protein Data Bank (PDB) format. The computed normalised squared atomic fluctuations and atomic deformation energies of the submitted structures can be easily compared on graphs provided by the web user interface. The web server provides pairwise comparison of the dynamics of all proteins included in the submitted set using two measures: the Root Mean Squared Inner Product and the Bhattacharyya Coefficient. The Comparative Analysis has been implemented on our web server for NMA, WEBnm@, which also provides recently upgraded functionality for NMA of single protein structures. This includes new visualisations of protein motion, visualisation of inter-residue correlations and the analysis of conformational change using the overlap analysis. In addition, programmatic access to WEBnm@ is now available through a SOAP-based web service. Webnm@ is available at http://apps.cbu.uib.no/webnma . WEBnm@ v2.0 is an online tool offering unique capability for comparative NMA on multiple protein structures. Along with a convenient web interface, powerful computing resources, and several methods for mode analyses, WEBnm@ facilitates the assessment of protein flexibility within protein families and superfamilies. These analyses can give a good view of how the structures move and how the flexibility is conserved over the different structures.
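
    As a hedged illustration of one of the pairwise comparison measures named above, here is a sketch of the Root Mean Squared Inner Product (RMSIP) between two sets of normal modes; random orthonormal vectors stand in for real eigenvectors, and this is not WEBnm@'s own code.

        # RMSIP = sqrt( (1/k) * sum over (i, j) of (v_i . w_j)^2 ) for two sets
        # of k modes, each a row vector of length 3N.
        import numpy as np

        def rmsip(modes_a, modes_b):
            """modes_*: arrays of shape (k, 3N), one orthonormal mode per row."""
            overlaps = modes_a @ modes_b.T          # k x k matrix of inner products
            return np.sqrt((overlaps ** 2).sum() / modes_a.shape[0])

        rng = np.random.default_rng(1)
        q, _ = np.linalg.qr(rng.normal(size=(30, 10)))   # orthonormal columns
        a, b = q[:, :5].T, q[:, :5].T                    # identical mode sets
        print(rmsip(a, b))                               # -> 1.0 for identical sets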

  3. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-12-12

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
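
    A hedged 1-D sketch of two ingredients named above, exponential moving-average smoothing and finite-difference stochastic gradient ascent; the toy rss() field is invented, and the real algorithm operates on multi-sensor spatial samples rather than this scalar stand-in.

        # Smooth noisy RSS probes with an EMA, then ascend the smoothed surface.
        import random

        def rss(x):                      # toy signal field peaking at x = 5
            return -(x - 5.0) ** 2 + random.gauss(0, 0.5)

        def ema(prev, sample, alpha=0.3):
            return alpha * sample + (1 - alpha) * prev

        def position_relay(x=0.0, step=0.2, probe=0.5, iters=200):
            smoothed_plus = smoothed_minus = rss(x)
            for _ in range(iters):
                # The EMAs deliberately carry history, trading bias for noise
                # reduction in the finite-difference gradient estimate.
                smoothed_plus = ema(smoothed_plus, rss(x + probe))
                smoothed_minus = ema(smoothed_minus, rss(x - probe))
                grad = (smoothed_plus - smoothed_minus) / (2 * probe)
                x += step * grad          # ascend the smoothed RSS surface
            return x

        print(position_relay())          # converges near the RSS peak at x = 5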

  4. Prospective study of clinician-entered research data in the Emergency Department using an Internet-based system after the HIPAA Privacy Rule

    PubMed Central

    Kline, Jeffrey A; Johnson, Charles L; Webb, William B; Runyon, Michael S

    2004-01-01

    Background Design and test the reliability of a web-based system for multicenter, real-time collection of data in the emergency department (ED), under waiver of authorization, in compliance with HIPAA. Methods This was a phase I, two-hospital study of patients undergoing evaluation for possible pulmonary embolism. Data were collected by on-duty clinicians on an HTML data collection form (prospective e-form), populated using either a personal digital assistant (PDA) or personal computer (PC). Data forms were uploaded to a central, offsite server using secure socket protocol transfer. Each form was assigned a unique identifier, and all PHI data were encrypted, but were password-accessible by authorized research personnel to complete a follow-up e-form. Results From April 15, 2003-April 15 2004, 1022 prospective e-forms and 605 follow-up e-forms were uploaded. Complexities of PDA use compelled clinicians to use PCs in the ED for data entry for most forms. No data were lost and server log query revealed no unauthorized entry. Prospectively obtained PHI data, encrypted upon server upload, were successfully decrypted using password-protected access to allow follow-up without difficulty in 605 cases. Non-PHI data from prospective and follow-up forms were available to the study investigators via standard file transfer protocol. Conclusions Data can be accurately collected from on-duty clinicians in the ED using real-time, PC-Internet data entry in compliance with the Privacy Rule. Deidentification-reidentification of PHI was successfully accomplished by a password-protected encryption-deencryption mechanism to permit follow-up by approved research personnel. PMID:15479471

  5. Honey Bee Colonies Remote Monitoring System

    PubMed Central

    Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús

    2016-01-01

    Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, the honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees’ work conditions and to obtain more reliable data. We designed a wireless sensor network to meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data in each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time. PMID:28036061

  6. High Availability Applications for NOMADS at the NOAA Web Operations Center Aimed at Providing Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.

    2009-05-01

    The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high availability services as part of NOAA's official real time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to extract area subsets, either by repackaging GRIB2 files or by selecting values by ensemble component, (forecast) time, vertical level, horizontal location, and variable: virtually a six-dimensional analysis service across the Internet.
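
    The "transmit only the data necessary" point rests on OPeNDAP's server-side subsetting: the client appends a constraint expression to the dataset URL and receives only the requested hyperslab. A minimal sketch follows; the host name, dataset path, variable name, and index ranges are all hypothetical.

```python
# Hypothetical OPeNDAP (DAP2) subset request: the constraint after '?' selects
# one variable over [start:stride:stop] index ranges, so only the requested
# slab crosses the network. The host and dataset below are made up.
import urllib.request

base = "https://nomads.example.gov/dods/gfs"                # hypothetical GDS endpoint
constraint = "tmpprs[0:1:0][0:1:0][200:1:260][440:1:520]"   # time, level, lat, lon
url = f"{base}.ascii?{constraint}"                          # .ascii = readable response

with urllib.request.urlopen(url) as resp:   # fails unless such a host exists
    print(resp.read(500).decode())
```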

  7. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    PubMed Central

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
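
    A toy illustration of two ingredients named in the abstract, exponential-moving-average smoothing of RSS samples and a finite-difference stochastic gradient-ascent step for relay positioning, is sketched below on an invented one-dimensional RSS field. All constants are illustrative; the real algorithm operates on multi-sensor spatial samples in 2-D.

```python
# EMA smoothing of noisy RSS samples plus a stochastic gradient-ascent step
# that moves the relay uphill on the smoothed RSS field. Toy 1-D example.
import random

def ema(samples, alpha=0.3):
    """Exponential moving average used to smooth raw RSS samples (dBm)."""
    smoothed = samples[0]
    for s in samples[1:]:
        smoothed = alpha * s + (1 - alpha) * smoothed
    return smoothed

def rss_field(x):
    """Toy RSS field peaking at x = 10, with Gaussian measurement noise."""
    return -60 - 0.5 * (x - 10) ** 2 + random.gauss(0, 0.5)

def smoothed_rss(x, n=8):
    return ema([rss_field(x) for _ in range(n)])

def gradient_ascent_step(x, step=0.5, probe=0.2):
    """One stochastic gradient-ascent move, gradient by finite differences."""
    grad = (smoothed_rss(x + probe) - smoothed_rss(x - probe)) / (2 * probe)
    return x + step * grad

x = 4.0                      # initial relay position
for _ in range(50):
    x = gradient_ascent_step(x)
print(f"relay settled near x = {x:.1f}")   # should approach the peak at 10
```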

  8. Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D.; Baltzer, T.

    2005-12-01

    The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP-enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term, on the order of hours to perhaps a week or two. In order to save interesting data in longer-term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence generally requires a series of low-level, operating-system-specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back-end storage media and interchangeable software components. The TDR interface will provide high-level abstractions for long-term storage; controlled, fast, and reliable access; and data movement capabilities via a variety of technologies such as OPeNDAP and GridFTP. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf

  9. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the Tagged Image File Format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) formats. Image control is provided by use of the OGC Styled Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements the tiled WMS protocol and super-overlay KML for high-performance client application programs.
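
    For readers unfamiliar with the WMS protocol the server implements, the sketch below assembles a typical WMS 1.1.1 GetMap request. The host, layer name, and bounding box are hypothetical, and a real OnMoon request would use a Moon-specific SRS code rather than the Earth code shown here.

```python
# Illustrative WMS 1.1.1 GetMap request of the kind such a server accepts.
# Host and layer are invented; SRS would be Moon-specific in practice.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "lunar_base_mosaic",      # hypothetical layer name
    "STYLES": "",                       # default styling
    "SRS": "EPSG:4326",                 # placeholder; OnMoon supports Moon CRSs
    "BBOX": "-10,-10,10,10",            # minx,miny,maxx,maxy in degrees
    "WIDTH": "512", "HEIGHT": "512",
    "FORMAT": "image/jpeg",
}
print("https://onmoon.example.gov/wms?" + urlencode(params))
```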

  10. Academic and Recreational Reading Motivation of Teacher Candidates

    ERIC Educational Resources Information Center

    Lancellot, Michael

    2017-01-01

    The purpose of this mixed methods study was to determine relationships among teacher candidates' academic and recreational reading motivation. This study utilized a previously designed, reliable, and valid instrument called the Adult Reading Motivation Scale with permission from Schutte and Malouff (2007). The instrument included a pool of 50…

  11. Student Engagement Scale: Development, Reliability and Validity

    ERIC Educational Resources Information Center

    Gunuc, Selim; Kuzu, Abdullah

    2015-01-01

    In this study, the purpose was to develop a student engagement scale for higher education. The participants were 805 students. In the process of developing the item pool regarding the scale, related literature was examined in detail and interviews were held. Six factors--valuing, sense of belonging, cognitive engagement, peer relationships…

  12. Wide Area Information Servers: An Executive Information System for Unstructured Files.

    ERIC Educational Resources Information Center

    Kahle, Brewster; And Others

    1992-01-01

    Describes the Wide Area Information Servers (WAIS) system, an integrated information retrieval system for corporate end users. Discussion covers general characteristics of the system, search techniques, protocol development, user interfaces, servers, selective dissemination of information, nontextual data, access to other servers, and description…

  13. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  14. Asynchronous data change notification between database server and accelerator controls system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-10-10

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
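
    A minimal sketch of the trigger-based ADCN pattern: a DBMS-side trigger appends each change to a queue table, and a reflection-server process drains the queue and pushes updates to its clients. SQLite stands in for the production DBMS here, and all table, column, and parameter names are invented.

```python
# Trigger-based change capture plus a polling reflection-server loop.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
CREATE TABLE change_queue (seq INTEGER PRIMARY KEY AUTOINCREMENT,
                           name TEXT, value REAL);
-- The trigger is the DBMS-side half of ADCN.
CREATE TRIGGER notify_change AFTER UPDATE ON settings
BEGIN
    INSERT INTO change_queue (name, value) VALUES (NEW.name, NEW.value);
END;
""")
conn.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
conn.execute("UPDATE settings SET value = 1.5 WHERE name = 'magnet_current'")

# Reflection-server side: drain the queue and notify clients asynchronously.
last_seen = 0
for seq, name, value in conn.execute(
        "SELECT seq, name, value FROM change_queue WHERE seq > ?", (last_seen,)):
    print(f"SET {name} = {value}")   # stand-in for a CDEV/EPICS/ADO SET call
    last_seen = seq
```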

  15. Employing machine learning for reliable miRNA target identification in plants

    PubMed Central

    2011-01-01

    Background miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While a lot of tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most existing tools have been centered around exact complementarity matching, and very few consider other factors like multiple target sites and the role of flanking regions. Result In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison of p-TAREF with other prediction tools for plants was carried out with utmost rigor, and p-TAREF was found to perform better in several respects. Further, p-TAREF was run over the experimentally validated miRNA targets from species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting the gross usability of p-TAREF across plant species. Using p-TAREF, target identification was done for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version. This also allows it to run even on a simple desktop computer in concurrent mode. The web-server version also provides a facility to gather experimental support for the predictions made, through on-the-spot expression data analysis. Conclusion A machine learning multivariate feature tool has been implemented in parallel and locally installable form for plant miRNA target identification. The performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and gross usability for transcriptome-wide plant miRNA target identification. PMID:22206472
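
    As a rough illustration of the underlying idea (not the actual p-TAREF feature set or training data), the sketch below scores candidate target regions with a scikit-learn SVR over dinucleotide densities computed from a window around the site. Sequences, labels, and the window are toy stand-ins.

```python
# SVR over dinucleotide-density features of flanking-region windows.
from itertools import product
from sklearn.svm import SVR

DINUCS = ["".join(p) for p in product("ACGU", repeat=2)]

def dinucleotide_density(seq):
    """Fraction of each of the 16 dinucleotides in a window around the site."""
    total = max(len(seq) - 1, 1)
    counts = {d: 0 for d in DINUCS}
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    return [counts[d] / total for d in DINUCS]

# Toy training set: windows around true (1.0) and false (0.0) target sites.
X = [dinucleotide_density(s) for s in
     ["AUGCUGCAUGGAGCUCAG", "GCGCGCGCGCAUAUAUAU",
      "UUUUUUUAAAAAACCCCG", "ACGUACGUACGUACGUAC"]]
y = [1.0, 0.0, 0.0, 1.0]

model = SVR(kernel="rbf", C=1.0).fit(X, y)
print(model.predict([dinucleotide_density("AUGCUGCAUGCAGCUCAG")]))
```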

  16. Employing machine learning for reliable miRNA target identification in plants.

    PubMed

    Jha, Ashwani; Shankar, Ravi

    2011-12-29

    miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While a lot of tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most existing tools have been centered around exact complementarity matching, and very few consider other factors like multiple target sites and the role of flanking regions. In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison of p-TAREF with other prediction tools for plants was carried out with utmost rigor, and p-TAREF was found to perform better in several respects. Further, p-TAREF was run over the experimentally validated miRNA targets from species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting the gross usability of p-TAREF across plant species. Using p-TAREF, target identification was done for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version. This also allows it to run even on a simple desktop computer in concurrent mode. The web-server version also provides a facility to gather experimental support for the predictions made, through on-the-spot expression data analysis. A machine learning multivariate feature tool has been implemented in parallel and locally installable form for plant miRNA target identification. The performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and gross usability for transcriptome-wide plant miRNA target identification.

  17. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder bed additive manufacturing of IN 718. The goals are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase reliability of builds, and decrease time to adoption of the process for critical hardware, with the potential to decrease post-build heat treatments. The plan: conduct single-track and coupon builds at various build parameters; record build parameter information and QM Meltpool data; refine the Applied Optimization powder bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt pool intensity is highly correlated with melt pool size; and melt pool size and intensity increase with power. Applied Optimization will use the data to develop a powder bed additive manufacturing process model.

  18. Anatomy of an anesthesia information management system.

    PubMed

    Shah, Nirav J; Tremper, Kevin K; Kheterpal, Sachin

    2011-09-01

    Anesthesia information management systems (AIMS) have become more prevalent as more sophisticated hardware and software have increased usability and reliability. National mandates and incentives have driven adoption as well. AIMS can be developed in one of several software models (Web based, client/server, or incorporated into a medical device). Irrespective of the development model, the best AIMS have a feature set that allows for comprehensive management of workflow for an anesthesiologist. Key features include preoperative, intraoperative, and postoperative documentation; quality assurance; billing; compliance and operational reporting; patient and operating room tracking; and integration with hospital electronic medical records.

  19. Cyber Security and Reliability in a Digital Cloud

    DTIC Science & Technology

    2013-01-01

    a higher utilization of servers, lower professional support staff needs, economies of scale for the physical facility, and the flexibility to locate... as a system, the DoD can achieve the economies of scale typically associated with large data centers. Recommendation 3: The DoD CIO and DISA... providers will help set standards for secure cloud computing across the economy. Recommendation 7: The DoD CIO and DISA should participate in the

  20. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
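
    The transparent task-splitting can be pictured as generating one Condor job per chunk of a larger scan. The sketch below writes a standard HTCondor submit description with multiple queue statements and submits it; the executable name and chunking scheme are hypothetical, and Condor-COPASI's actual mechanism may differ.

```python
# Split a parameter scan into chunks and queue one Condor job per chunk.
# run_chunk.py and the chunk bounds are invented for illustration.
import subprocess

N_RUNS, N_CHUNKS = 1000, 20
chunk = N_RUNS // N_CHUNKS

submit_description = "\n".join(
    ["executable = run_chunk.py", "universe = vanilla",
     "output = chunk_$(Process).out", "error = chunk_$(Process).err",
     "log = scan.log"] +
    [f"arguments = {i * chunk} {(i + 1) * chunk}\nqueue" for i in range(N_CHUNKS)]
)

with open("scan.sub", "w") as f:
    f.write(submit_description + "\n")
subprocess.run(["condor_submit", "scan.sub"], check=True)  # needs a Condor pool
```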

  1. GRAMM-X public web server for protein–protein docking

    PubMed Central

    Tovchigrechko, Andrey; Vakser, Ilya A.

    2006-01-01

    Protein docking software GRAMM-X and its web interface extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016

  2. Improving STEM Education and Workforce Development by the Inclusion of Research Experiences in the Curriculum at SWC

    DTIC Science & Technology

    2016-06-08

    server environment. While the college’s two Cisco blade-servers are located in separate buildings, these units now work as one unit. Critical databases and software packages are...

  3. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server [8] running Ubuntu Linux 14.04... To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server... MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  4. Blade runner. Blade server and virtualization technology can help hospitals save money--but they are far from silver bullets.

    PubMed

    Lawrence, Daphne

    2009-03-01

    Blade servers and virtualization can reduce infrastructure, maintenance, heating, electric, cooling and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization--especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.

  5. A Scalability Model for ECS's Data Server

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to the Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.

  6. On the optimal use of a slow server in two-stage queueing systems

    NASA Astrophysics Data System (ADS)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs assuming that two servers cannot collaborate to work on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the one downstream should not idle, and the same is true for the one upstream when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
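
    As a hedged illustration of the Markov-decision-process formulation, the sketch below runs discounted value iteration for a heavily simplified variant: queues are truncated at N, and the flexible server may be reassigned at every event, so the paper's non-preemption constraint is relaxed for brevity. The occupancy checks reflect the no-collaboration rule (the slow server can only take a job the dedicated server is not already working on). All rates and costs are illustrative.

```python
# Uniformized, discounted value iteration for a simplified two-stage tandem
# queue with dedicated servers (mu1, mu2) and a slow flexible server (muf).
import numpy as np

lam, mu1, mu2, muf = 1.0, 1.2, 1.1, 0.4
c1, c2, alpha, N = 1.0, 2.0, 0.1, 30
Lam = lam + mu1 + mu2 + muf          # uniformization constant
beta = Lam / (alpha + Lam)           # effective discount per uniformized event

V = np.zeros((N + 1, N + 1))
for _ in range(2000):
    Vn = np.empty_like(V)
    for n1 in range(N + 1):
        for n2 in range(N + 1):
            arrive = V[min(n1 + 1, N), n2]
            d1 = V[n1 - 1, min(n2 + 1, N)] if n1 else V[n1, n2]  # stage-1 done
            d2 = V[n1, n2 - 1] if n2 else V[n1, n2]              # stage-2 done
            base = lam * arrive + mu1 * d1 + mu2 * d2
            # Flexible-server choices: idle, help stage 1, help stage 2.
            # The >= 2 checks enforce "no two servers on the same job".
            opts = [base + muf * V[n1, n2]]
            if n1 >= 2:
                opts.append(base + muf * V[n1 - 1, min(n2 + 1, N)])
            if n2 >= 2:
                opts.append(base + muf * V[n1, n2 - 1])
            Vn[n1, n2] = (c1 * n1 + c2 * n2) / (alpha + Lam) + beta * min(opts) / Lam
    if np.max(np.abs(Vn - V)) < 1e-6:
        break
    V = Vn
print("V(0,0) =", V[0, 0])
```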

  7. Process evaluation distributed system

    NASA Technical Reports Server (NTRS)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server and provides observation criteria information to it. The process evaluation module, which utilizes a personal digital assistant (PDA), is in communication with the database server, obtains the observation criteria information from it, and collects process data based on that information. The data display module, also in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides for editing and modification of the collected process data. The connectivity established by the database server to the administration, process evaluation, and data display modules minimizes the requirement for manual input of the collected process data.

  8. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
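
    A toy rendition of the biomimetic idea, with invented numbers: servers act as foragers that occasionally follow a "dance" advertising each service in proportion to its revenue rate, so the ensemble drifts toward profitable services without central control. This illustrates the flavor of the approach, not the paper's exact update rule.

```python
# Honeybee-inspired probabilistic server reallocation among hosted services.
import random
from collections import Counter

services = ["search", "video", "mail"]
revenue_rate = {"search": 5.0, "video": 9.0, "mail": 2.0}   # fee per server-hour
allocation = Counter({s: 10 for s in services})             # 30-server ensemble

def reallocate(allocation, switch_prob=0.2):
    total = sum(revenue_rate.values())
    dance = {s: revenue_rate[s] / total for s in services}  # advertisement strength
    new_alloc = Counter()
    for service, count in allocation.items():
        for _ in range(count):
            if random.random() < switch_prob:   # forager follows a dance
                choice = random.choices(services,
                                        weights=[dance[s] for s in services])[0]
            else:                               # forager stays loyal to its patch
                choice = service
            new_alloc[choice] += 1
    return new_alloc                            # total server count is conserved

for _ in range(20):
    allocation = reallocate(allocation)
print(dict(allocation))   # drifts toward the higher-revenue services
```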

  9. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    PubMed

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic-style presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of each object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms, and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and the results are returned to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  10. 78 FR 48821 - Energy Conservation Program for Consumer Products and Certain Commercial and Industrial Equipment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-12

    ... Commercial and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product... comments on the proposed determination that computer servers (servers) qualify as a covered product. DATES: The comment period for the proposed determination relating to servers published on July 12, 2013 (78...

  11. ASPEN--A Web-Based Application for Managing Student Server Accounts

    ERIC Educational Resources Information Center

    Sandvig, J. Christopher

    2004-01-01

    The growth of the Internet has greatly increased the demand for server-side programming courses at colleges and universities. Students enrolled in such courses must be provided with server-based accounts that support the technologies that they are learning. The process of creating, managing and removing large numbers of student server accounts is…

  12. How to securely replicate services

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth

    1992-01-01

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
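
    The acceptance rule can be sketched as a client-side voting loop: query all n replicas, count identical responses, and accept a value once k matching answers arrive, which is safe when at least k replicas are correct and fewer than k are corrupt. The transport function below is a stand-in; the real scheme additionally involves service-level keys, and this sketch shows only the k-of-n counting.

```python
# k-of-n response voting at the client.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed

def accept_response(replicas, request, k, query_replica):
    votes = Counter()
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(query_replica, r, request) for r in replicas]
        for fut in as_completed(futures):
            try:
                votes[fut.result()] += 1
            except Exception:
                continue                       # benign failure: ignore
            value, count = votes.most_common(1)[0]
            if count >= k:
                return value                   # k matching answers: accept
    raise RuntimeError("fewer than k matching responses")

# Demo: 4 replicas, one corrupt, k = 2.
answers = {"r1": "42", "r2": "42", "r3": "evil", "r4": "42"}
print(accept_response(list(answers), "GET x", 2, lambda r, _: answers[r]))
```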

  13. Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.

    PubMed

    Lee, Chengming; Chen, Rongshun

    2015-05-20

    Recently, saving the cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization of the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed, and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumption under fan speed control. The PIDNN, with a time-domain criterion, is used to tune all PID gains online and optimally. The proposed controller was validated through step-response experiments as the server operated from the low to the high power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight temperature-response overshoot in the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed during low fan power consumption.
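
    A generic discrete PID loop of the kind being tuned is sketched below against a one-pole toy thermal model; the PIDNN layer that adapts the gains online is omitted, and all gains, rates, and temperatures are illustrative only.

```python
# Discrete PID fan-speed loop against a toy thermal plant.
def pid_step(error, state, kp=8.0, ki=0.6, kd=2.0, dt=1.0):
    state["integral"] += error * dt
    derivative = (error - state["prev"]) / dt
    state["prev"] = error
    u = kp * error + ki * state["integral"] + kd * derivative
    return min(max(u, 0.0), 100.0)            # clamp to 0-100 % fan duty

setpoint, temp = 55.0, 75.0                   # degrees C
state = {"integral": 0.0, "prev": 0.0}
for t in range(60):
    duty = pid_step(temp - setpoint, state)
    # Toy plant: constant heat input minus cooling proportional to duty cycle.
    temp += (2.0 - 0.04 * duty) * 1.0
print(f"temperature after 60 s: {temp:.1f} C")
```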

  14. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high-speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, allowing for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding Ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher-bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck; UniTree sites requiring a high ratio of file creations and deletions to reads and writes would run into it. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less-than-ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of what performance they can expect in this type of setup.

  15. Modeling of the Weld Shape Development During the Autogenous Welding Process by Coupling Welding Arc with Weld Pool

    NASA Astrophysics Data System (ADS)

    Dong, Wenchao; Lu, Shanping; Li, Dianzhong; Li, Yiyi

    2010-10-01

    A numerical model of the welding arc is coupled to a model for the heat transfer and fluid flow in the weld pool of SUS304 stainless steel during a moving GTA welding process. The described model avoids the assumption of empirical Gaussian boundary conditions and, at the same time, provides reliable boundary conditions for analyzing the weld pool. Based on the two-dimensional axisymmetric numerical modeling of the argon arc, the heat flux to the workpiece, the input current density, and the plasma drag stress are obtained. The arc temperature contours and the distributions of heat flux and current density at the anode are in fair agreement with reported experimental results. Numerical simulation and experimental studies of the weld pool development are carried out for moving GTA welding on SUS304 stainless steel with oxygen contents from 30 to 220 ppm. The calculated results show that oxygen can change the Marangoni convection from the outward to the inward direction on the liquid pool surface and turn a wide, shallow weld shape into a narrow, deep one. The calculated weld shape and weld D/W ratio agree well with the experimental ones.

  16. Installation of the National Transport Code Collaboration Data Server at the ITPA International Multi-tokamak Confinement Profile Database

    NASA Astrophysics Data System (ADS)

    Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.

    2002-11-01

    The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets him/her focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is done utilizing built-in Java security features, which demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.

  17. LiveBench-1: continuous benchmarking of protein structure prediction servers.

    PubMed

    Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L

    2001-02-01

    We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.

  18. The HydroServer Platform for Sharing Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an Internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and the HydroServer CodePlex site at http://hydroserver.codeplex.com.

  19. PolarHub: A Global Hub for Polar Data Discovery

    NASA Astrophysics Data System (ADS)

    Li, W.

    2014-12-01

    This paper reports the outcome of an NSF project developing PolarHub, a large-scale web crawler that automatically discovers distributed polar datasets exposed as OGC web services (OWS) in cyberspace. PolarHub is a machine robot; its goal is to visit as many webpages as possible to find those containing information about polar OWS, extract this information, and store it in the backend data repository. This is a very challenging task given the huge volume of webpages on the Web. Four unique features were introduced in PolarHub to make it distinctive from earlier crawler solutions: (1) multi-task, multi-user, multi-thread support for the crawling tasks; (2) extensive use of the thread pool and Data Access Object (DAO) design patterns to separate persistent data storage from business logic, achieving high extendibility of the crawler tool; (3) a pattern-matching-based customizable crawling algorithm to support discovery of multiple types of geospatial web services; and (4) a universal and portable client-server communication mechanism combining server-push and client-pull strategies for enhanced asynchronous processing. A series of experiments were conducted to identify the impact of crawling parameters on overall system performance. The geographical distribution pattern of all PolarHub-identified services is also demonstrated. We expect this work to make a major contribution to the field of geospatial information retrieval and geospatial interoperability, to bridge the gap between data providers and data consumers, and to accelerate polar science by enhancing the accessibility and reusability of adequate polar data.

  20. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
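
    The decompose-dispatch-combine control flow can be sketched as a scatter-gather helper: replicate or split the client request across servers, issue the subtasks concurrently, and merge the partial results into a single response. The server names, query stand-in, and merge rule below are invented.

```python
# Scatter-gather coordination of a multi-server client request.
from concurrent.futures import ThreadPoolExecutor

def scatter_gather(request, servers, query, combine):
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        partials = list(pool.map(lambda s: query(s, request), servers))
    return combine(partials)

# Demo: a distributed count across three database shards.
shards = {"db-east": 120, "db-west": 85, "db-central": 42}
total = scatter_gather("SELECT COUNT(*) ...", list(shards),
                       query=lambda s, _: shards[s],   # stand-in for a real call
                       combine=sum)
print(total)   # 247
```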

  1. National Medical Terminology Server in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) systems necessitate at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary to tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  2. CIVET: Continuous Integration, Verification, Enhancement, and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alger, Brian; Gaston, Derek R.; Permann, Cody J

    A Git server (GitHub, GitLab, BitBucket) sends event notifications to the Civet server. These are either "Pull Request" or "Push" notifications. Civet then checks the database to determine what tests need to be run and marks them as ready to run. Civet clients, running on dedicated machines, query the server for available jobs that are ready to run. When a client gets a job, it executes the scripts attached to the job and reports the output and exit status back to the server. When the client updates the server, the server will also update the Git server with the result of the job, as well as updating the main web page.
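
    The client's pull loop might look like the following sketch: poll the server for ready jobs, execute each job's script, and post the output and exit status back. The URL paths and JSON fields are invented for illustration; Civet's real API may differ.

```python
# Pull-based CI client loop: poll, run, report.
import json, subprocess, time
import urllib.request

SERVER = "https://civet.example.org"   # hypothetical Civet server

def poll_once():
    with urllib.request.urlopen(f"{SERVER}/client/ready_jobs") as resp:
        jobs = json.load(resp)
    for job in jobs:
        result = subprocess.run(["bash", "-c", job["script"]],
                                capture_output=True, text=True)
        report = json.dumps({"job_id": job["id"],
                             "output": result.stdout + result.stderr,
                             "exit_status": result.returncode}).encode()
        req = urllib.request.Request(f"{SERVER}/client/job_result", data=report,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

while True:
    poll_once()
    time.sleep(30)   # poll interval
```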

  3. Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System

    PubMed Central

    Fox, Mark; Papadopoulos, Amy; Crump, Cindy

    2013-01-01

    Objective: The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems. PMID:23611640

  4. Development of the Self-Directed Learning Skills Scale

    ERIC Educational Resources Information Center

    Ayyildiz, Yildizay; Tarhan, Leman

    2015-01-01

    The purpose of this study was to develop a valid and reliable scale for assessing high school students' self-directed learning skills. Based on a literature review and data obtained from similar instruments, all skills related to self-directed learning were identified. Next, an item pool was prepared and administered to 255 students from various…

  5. [Environmental Education Units].

    ERIC Educational Resources Information Center

    Minneapolis Independent School District 275, Minn.

    Two of these three pamphlets describe methods of teaching young elementary school children the principles of sampling. Tiles of five colors are added to a tub and children sample these randomly; using the tiles as units for a graph, they draw a representation of the population. Pooling results leads to a more reliable sample. Practice is given in…

  6. Livers provide a reliable matrix for real-time PCR confirmation of avian botulism.

    PubMed

    Le Maréchal, Caroline; Ballan, Valentine; Rouxel, Sandra; Bayon-Auboyer, Marie-Hélène; Baudouard, Marie-Agnès; Morvan, Hervé; Houard, Emmanuelle; Poëzevara, Typhaine; Souillard, Rozenn; Woudstra, Cédric; Le Bouquin, Sophie; Fach, Patrick; Chemaly, Marianne

    2016-04-01

    Diagnosis of avian botulism is based on clinical symptoms, which are indicative but not specific. Laboratory investigations are therefore required to confirm clinical suspicions and establish a definitive diagnosis. Real-time PCR methods have recently been developed for the detection of Clostridium botulinum group III producing type C, D, C/D or D/C toxins. However, no study has been conducted to determine which types of matrices should be analyzed for laboratory confirmation using this approach. This study reports on the comparison of different matrices (pooled intestinal contents, livers, spleens and cloacal swabs) for PCR detection of C. botulinum. Between 2013 and 2015, 63 avian botulism suspicions were tested and 37 were confirmed as botulism. Analysis of livers using real-time PCR after enrichment led to the confirmation of 97% of the botulism outbreaks. Using the same method, spleens led to the confirmation of 90% of botulism outbreaks, cloacal swabs of 93% and pooled intestinal contents of 46%. Liver appears to be the most reliable type of matrix for laboratory confirmation using real-time PCR analysis.

  7. The NTID speech recognition test: NSRT(®).

    PubMed

    Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A

    2015-07-01

    The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.

  8. Research and design of smart grid monitoring control via terminal based on iOS system

    NASA Astrophysics Data System (ADS)

    Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji

    2017-06-01

    To address a series of problems with current smart grid monitoring control terminals, such as high cost, poor portability, simplistic monitoring, poor software extensibility, low reliability of information transmission, limited man-machine interfaces, and poor security, a smart grid remote monitoring system based on the iOS system has been designed. The system interacts with the smart grid server to acquire grid data through WiFi/3G/4G networks and monitors the running status of each grid line as well as the operating conditions of power plant equipment. When an exception occurs in the power plant, incident information is sent to the user's iOS terminal in a timely manner, providing troubleshooting information that helps grid staff make the right decisions quickly and avoid further accidents. Field tests have shown that the system realizes integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it possesses certain applicable value.

  9. Using Item Response Theory to Develop a 60-Item Representation of the NEO PI-R Using the International Personality Item Pool: Development of the IPIP-NEO-60.

    PubMed

    Maples-Keller, Jessica L; Williamson, Rachel L; Sleep, Chelsea E; Carter, Nathan T; Campbell, W Keith; Miller, Joshua D

    2017-10-31

    Given advantages of freely available and modifiable measures, an increase in the use of measures developed from the International Personality Item Pool (IPIP), including the 300-item representation of the Revised NEO Personality Inventory (NEO PI-R; Costa & McCrae, 1992a), has occurred. The focus of this study was to use item response theory to develop a 60-item, IPIP-based measure of the Five-Factor Model (FFM) that provides equal representation of the FFM facets and to test the reliability and convergent and criterion validity of this measure compared to the NEO Five Factor Inventory (NEO-FFI). In an undergraduate sample (n = 359), scores from the NEO-FFI and IPIP-NEO-60 demonstrated good reliability and convergent validity with the NEO PI-R and IPIP-NEO-300. Additionally, across criterion variables in the undergraduate sample as well as a community-based sample (n = 757), the NEO-FFI and IPIP-NEO-60 demonstrated similar nomological networks across a wide range of external variables (rICC = .96). Finally, as expected, in an MTurk sample the IPIP-NEO-60 demonstrated advantages over the Big Five Inventory-2 (Soto & John, 2017; n = 342) with regard to the Agreeableness domain content. The results suggest strong reliability and validity of the IPIP-NEO-60 scores.

  10. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1 as a fastCGI client and using Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back server. The server has explicit support for a colocated tiled WMS, including rapid response of black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility on the data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use, and depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.

  11. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  12. How to securely replicate services (preliminary version)

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth

    1992-01-01

    A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by 'n' servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires that fewer than k servers are corrupt and, to ensure liveness, that k is less than or equal to n - 2t, where t is the assumed maximum total number of both corruptions and benign failures suffered by servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service.
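
    The acceptance rule described above is easy to state in code; the sketch below is a minimal illustration of k-of-n response voting, not the paper's full protocol (which also covers causality and key management).

      from collections import Counter

      def accept_response(replies, k):
          """Return the value reported by at least k servers, else None."""
          if not replies:
              return None
          value, count = Counter(replies).most_common(1)[0]
          return value if count >= k else None

      # n = 4 servers, k = 3: fewer than k corrupt servers cannot forge a result.
      print(accept_response(["42", "42", "42", "bogus"], k=3))   # -> "42"
      print(accept_response(["42", "42", "bogus", "bad"], k=3))  # -> None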

  13. Autoplot and the HAPI Server

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2016-12-01

    Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and from a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients or publish a specification document in hopes that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, to remove these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use, and how data servers might implement the HAPI specification to provide access to their data. This also includes instructions on how Autoplot is installed on desktop computers and used to view data from the RBSP, Juno, and other missions.
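
    As a sketch of how simple a HAPI client can be, the snippet below requests a time range of data using the endpoints and parameters defined in early versions of the HAPI specification (/hapi/catalog, /hapi/info, /hapi/data); the server URL and dataset id are placeholders.

      import urllib.request

      SERVER = "http://hapi.example.org/hapi"  # placeholder HAPI server
      query = ("data?id=sample_dataset"
               "&time.min=2016-01-01T00:00:00Z&time.max=2016-01-02T00:00:00Z")

      with urllib.request.urlopen(SERVER + "/" + query) as resp:
          for line in resp.read().decode().splitlines()[:5]:
              print(line)  # CSV records: a time stamp followed by data columns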

  14. Providing Internet Access to High-Resolution Mars Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.

  15. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
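
    The sketch below is an illustrative toy (not the patented implementation) of the idea: several metadata servers share one key-value store through a minimal put/get interface, so any server can process a given metadata request.

      class KeyValueMetadataStore:
          """Stand-in for a shared low-latency persistent key-value store."""
          def __init__(self):
              self._kv = {}

          def put(self, key, value):
              self._kv[key] = value

          def get(self, key):
              return self._kv.get(key)

      class MetadataServer:
          """One of several metadata servers sharing the same store."""
          def __init__(self, shared_store):
              self.store = shared_store

          def set_attr(self, path, attrs):
              self.store.put(("attr", path), attrs)

          def get_attr(self, path):
              return self.store.get(("attr", path))

      shared = KeyValueMetadataStore()
      mds1, mds2 = MetadataServer(shared), MetadataServer(shared)
      mds1.set_attr("/file1", {"size": 4096})
      print(mds2.get_attr("/file1"))  # either server can answer the request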

  16. An assessment of burn prevention knowledge in a high burn-risk environment: restaurants.

    PubMed

    Piazza-Waggoner, Carrie; Adams, C D; Goldfarb, I W; Slater, H

    2002-01-01

    Our facility has seen an increase in the number of cases of children burned in restaurants. Fieldwork has revealed many unsafe serving practices in restaurants in our tristate area. The current research targets what appears to be an underexamined burn-risk environment, restaurants, to examine server knowledge about burn prevention and burn care with customers. Participants included 71 local restaurant servers and 53 servers from various restaurants who were recruited from undergraduate courses. All participants completed a brief demographic form as well as a Burn Knowledge Questionnaire. It was found that server knowledge was low (ie, less than 50% accuracy). Yet, most servers reported that they felt customer burn safety was important enough to change the way that they serve. Additionally, it was found that length of time employed as a server was a significant predictor of servers' burn knowledge (ie, more years serving associated with higher knowledge). Finally, individual items were examined to identify potential targets for developing prevention programs.

  17. xDSL connection monitor

    DOEpatents

    Horton, John J.

    2006-04-11

    A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining if a response has been received. If no response has been received, displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines if the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
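
    A hedged sketch of the failover logic described in the patent follows: probe the server over the default (xDSL) link and fall back to dial-up when the probe fails. The probe URL and the dial-up hook are placeholders, not part of the patent text.

      import urllib.request

      def xdsl_alive(probe_url="http://server.example.com/ping", timeout=5):
          """Send a request the server should answer; False on timeout/error."""
          try:
              urllib.request.urlopen(probe_url, timeout=timeout)
              return True
          except OSError:
              return False

      def ensure_connectivity():
          if xdsl_alive():
              return "xdsl"
          print("xDSL service has failed; offering dial-up connection.")
          # dial_up_connect() would be invoked here in a real system (placeholder)
          return "dialup"  # becomes the new default mode until xDSL recovers

      print(ensure_connectivity())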

  18. Rclick: a web server for comparison of RNA 3D structures.

    PubMed

    Nguyen, Minh N; Verma, Chandra

    2015-03-15

    RNA molecules play important roles in key biological processes in the cell and are becoming attractive for developing therapeutic applications. Since the function of RNA depends on its structure and dynamics, comparing and classifying the RNA 3D structures is of crucial importance to molecular biology. In this study, we have developed Rclick, a web server that is capable of superimposing RNA 3D structures by using clique matching and 3D least-squares fitting. Our server Rclick has been benchmarked and compared with other popular servers and methods for RNA structural alignments. In most cases, Rclick alignments were better in terms of structure overlap. Our server also recognizes conformational changes between structures. For this purpose, the server produces complementary alignments to maximize the extent of detectable similarity. Various examples showcase the utility of our web server for comparison of RNA, RNA-protein complexes and RNA-ligand structures. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
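
    The 3D least-squares fitting step that underlies superposition services of this kind is conventionally done with the Kabsch algorithm; the sketch below shows that step for two matched coordinate sets and is not Rclick's actual code.

      import numpy as np

      def kabsch_rmsd(P, Q):
          """RMSD after optimally rotating P onto Q (both N x 3, pre-matched)."""
          P = P - P.mean(axis=0)  # center both coordinate sets
          Q = Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(P.T @ Q)        # SVD of the covariance matrix
          d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
          diff = (R @ P.T).T - Q
          return float(np.sqrt((diff ** 2).sum() / len(P)))

      coords = np.random.default_rng(0).standard_normal((10, 3))
      print(kabsch_rmsd(coords, coords))  # ~0.0 for identical structures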

  19. Over two hundred million injuries to anterior teeth attributable to large overjet: a meta-analysis.

    PubMed

    Petti, Stefano

    2015-02-01

    The association between large overjet and traumatic dental injuries (TDIs) to anterior teeth is documented. However, observational studies are discrepant and the generalizability (i.e. external validity) of meta-analyses is limited. Therefore, this meta-analysis sought to reconcile such discrepancies, seeking to provide reliable risk estimates that could be generalizable at the global level. A literature search (years 1990-2014) was performed (Scopus, GOOGLE Scholar, Medline). Selected primary studies were divided into subsets: 'primary teeth, overjet threshold 3-4 mm' (Primary3); 'permanent teeth, overjet threshold 3-4 mm' (Permanent3); 'permanent teeth, overjet threshold 6 ± 1 mm' (Permanent6). The adjusted odds ratios (ORs) were extracted. To obtain the highest level of reliability (i.e. internal validity), the pooled OR estimates were assessed accounting for between-study heterogeneity, publication bias and confounding. Result robustness was investigated with sensitivity and subgroup analyses. Fifty-four primary studies from Africa, America, Asia and Europe were included. The sampled individuals were children, adolescents and adults. Overall, there were >10 000 patients with TDI. The pooled OR estimates were 2.31 (95% confidence interval - 95CI, 1.01-5.27), 2.01 (95CI, 1.39-2.91) and 2.24 (95CI, 1.56-3.21) for Primary3, Permanent3 and Permanent6, respectively. Sensitivity and subgroup analyses corroborated these estimates. The reliability and generalizability of the pooled ORs were high enough to assess that the fraction of global TDIs attributable to large overjet is 21.8% (95CI, 9.7-34.5%) and that large overjet is co-responsible for 235 008 000 global TDI cases (95CI, 104 760 000-372 168 000). This high global burden of TDI suggests that preventive measures must be implemented in patients with large overjet. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
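
    For context, the attributable fraction reported above is conventionally derived from the exposure prevalence and the pooled odds ratio; a standard form (assumed here, not quoted from the paper) is

      \mathrm{PAF} = \frac{p\,(\mathrm{OR}-1)}{1 + p\,(\mathrm{OR}-1)},

    where p is the prevalence of large overjet and the OR approximates the relative risk; multiplying the PAF by the total number of TDI cases yields the attributable case count.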

  20. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    NASA Astrophysics Data System (ADS)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose, connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of outside-line telephone calls flowing between extension IP telephones and a VoIP gateway connected to outside-line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function that can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones, or smart-phones in mobile environments.

  1. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  2. Oligonucleotide Based Magnetic Bead Capture of Onchocerca volvulus DNA for PCR Pool Screening of Vector Black Flies

    PubMed Central

    Gopal, Hemavathi; Hassan, Hassan K.; Rodríguez-Pérez, Mario A.; Toé, Laurent D.; Lustigman, Sara; Unnasch, Thomas R.

    2012-01-01

    Background: Entomological surveys of Simulium vectors are an important component in the criteria used to determine if Onchocerca volvulus transmission has been interrupted and if focal elimination of the parasite has been achieved. However, because infection in the vector population is quite rare in areas where control has succeeded, large numbers of flies need to be examined to certify transmission interruption. Currently, this is accomplished through PCR pool screening of large numbers of flies. The efficiency of this process is limited by the size of the pools that may be screened, which is in turn determined by the constraints imposed by the biochemistry of the assay. The current method of DNA purification from pools of vector black flies relies upon silica adsorption. This method can be applied to screen pools containing a maximum of 50 individuals (from the Latin American vectors) or 100 individuals (from the African vectors). Methodology/Principal Findings: We have evaluated an alternative method of DNA purification for pool screening of black flies which relies upon oligonucleotide capture of Onchocerca volvulus genomic DNA from homogenates prepared from pools of Latin American and African vectors. The oligonucleotide capture assay was shown to reliably detect one O. volvulus infective larva in pools containing 200 African or Latin American flies, representing a two- to four-fold improvement over the conventional assay. The capture assay requires an amount of technical time equivalent to the conventional assay, resulting in a two- to four-fold reduction in labor costs per insect assayed, and reduces reagent costs to $3.81 per pool of 200 flies, or less than $0.02 per insect assayed. Conclusions/Significance: The oligonucleotide capture assay represents a substantial improvement in the procedure used to detect parasite prevalence in the vector population, a major metric employed in the process of certifying the elimination of onchocerciasis. PMID:22724041

  3. Implementing TCP/IP and a socket interface as a server in a message-passing operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hipp, E.; Wiltzius, D.

    1990-03-01

    The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.

  4. Single-server blind quantum computation with quantum circuit model

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has few quantum technologies, to delegate her quantum computation to a server, who has strong quantum computational power but learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol in the quantum circuit model, obtained by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, so that the server learns nothing about the quantum algorithms. The client only needs to perform the operations X and Z, while the server honestly performs the rotation operators.
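
    A textbook identity (not taken from the article) that makes such gate replacement possible is the Euler decomposition of an arbitrary single-qubit gate into rotation operators:

      U = e^{i\alpha} R_z(\beta)\, R_y(\gamma)\, R_z(\delta),
      \qquad R_y(\theta) = e^{-i\theta Y/2}, \quad R_z(\theta) = e^{-i\theta Z/2},

    so a circuit written with arbitrary gates can be re-expressed as a sequence of rotations whose angles conceal the structure of the client's algorithm.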

  5. An Evaluation of Alternative Designs for a Grid Information Service

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Waheed, Abdul; Meyers, David; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2001-01-01

    The Globus information service wasn't working well. There were many updates of data from Globus daemons which saturated the single server and users couldn't retrieve information. We created a second server for NASA and Alliance. Things were great on that server, but a bit slow on the other server. We needed to know exactly how the information service was being used. What were the best servers and configurations? This viewgraph presentation gives an overview of the evaluation of alternative designs for a Grid Information Service. Details are given on the workload characterization, methodology used, and the performance evaluation.

  6. Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7798, SEP 2016, US Army Research Laboratory: Setup Instructions for the Applied Anomaly Detection Tool (AADT) Web Server, by Christian D Schlesiger, Computational and Information Sciences Directorate, ARL.

  7. PREDICT: Privacy and Security Enhancing Dynamic Information Monitoring

    DTIC Science & Technology

    2015-08-03

    ...consisting of global server-side probabilistic assignment by an untrusted server using cloaked locations, followed by feedback-loop guided local refinement [12]. These methods achieve high sensing coverage with low cost using cloaked locations [3]. In follow-on work, the issue of mobility is addressed. Task...

  8. Performance Modeling of the ADA Rendezvous

    DTIC Science & Technology

    1991-10-01

    In the queueing network of Figure 2, SERVERTASK can complete only one rendezvous at a time; thus, the rate at which rendezvous requests are processed at the... In Network 1, SERVERTASK competes with the traffic tasks of the Server Processor. Each time SERVERTASK gains access to the processor, SERVERTASK completes... Figure 10: a conceptualization of the algorithm (Client Processor, Server Processor, software server; the SERVERTASK software server of Network 2).

  9. Remote Adaptive Communication System

    DTIC Science & Technology

    2001-10-25

    ...manage several different devices using the software tool. A. Client/Server Architecture: the architecture we are proposing is based on the client/server model (see Figure 3). We want both the client and the server to be accessible from anywhere via the Internet. The computer, acting as a server, is in... On the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device.

  10. Predicting bacteriophage proteins located in host cell with feature selection technique.

    PubMed

    Ding, Hui; Liang, Zhi-Yong; Guo, Feng-Biao; Huang, Jian; Chen, Wei; Lin, Hao

    2016-04-01

    A bacteriophage is a virus that can infect a bacterium. The fate of an infected bacterium is determined by the bacteriophage proteins located in the host cell. Reliably identifying bacteriophage proteins located in the host cell is therefore extremely important for understanding their functions and discovering potential anti-bacterial drugs. In this paper, a computational method was developed to recognize bacteriophage proteins located in host cells based only on their amino acid sequences. The analysis of variance (ANOVA) combined with incremental feature selection (IFS) was proposed to optimize the feature set. Using jackknife cross-validation, our method can discriminate between bacteriophage proteins located in a host cell and bacteriophage proteins not located in a host cell with a maximum overall accuracy of 84.2%, and can further classify bacteriophage proteins located in host cell cytoplasm and in host cell membranes with a maximum overall accuracy of 92.4%. To enhance the value of the practical applications of the method, we built a web server called PHPred (http://lin.uestc.edu.cn/server/PHPred). We believe that PHPred will become a powerful tool to study bacteriophage proteins located in host cells and to guide related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
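
    The sketch below illustrates the spirit of ANOVA ranking combined with incremental feature selection (IFS) using scikit-learn; the synthetic data and the SVM classifier are placeholders, not the authors' pipeline.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=50, random_state=0)
      order = np.argsort(f_classif(X, y)[0])[::-1]  # rank features by ANOVA F-score

      best_k, best_acc = 0, 0.0
      for k in range(1, len(order) + 1):            # grow the feature set one by one
          acc = cross_val_score(SVC(), X[:, order[:k]], y, cv=5).mean()
          if acc > best_acc:
              best_k, best_acc = k, acc
      print("best subset size %d, cross-validated accuracy %.3f" % (best_k, best_acc))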

  11. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety of the network and provide convenience and efficiency for operators and managers, a system for remote monitoring and managing of switches has been designed and implemented using advanced network technology and existing network resources. A fast-speed Internet Protocol camera (FS IP Camera), which has a 32-bit RISC embedded processor and can support a number of protocols, is selected. An optimal image-compression algorithm, Motion-JPEG, is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administrative software is designed accordingly. The dynamic-webpage Java Server Pages (JSP) platform is used for development, and a SQL (Structured Query Language) Server database is applied to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software is made cross-platform so that multiple operating systems (UNIX, Linux, and Windows) are supported. Application of the system can greatly reduce manpower costs and allows problems to be found and solved quickly.

  12. miRanalyzer: an update on the detection and analysis of microRNAs in high-throughput sequencing experiments

    PubMed Central

    Hackenberg, Michael; Rodríguez-Ezpeleta, Naiara; Aransay, Ana M.

    2011-01-01

    We present a new version of miRanalyzer, a web server and stand-alone tool for the detection of known and prediction of new microRNAs in high-throughput sequencing experiments. The new version has been notably improved regarding speed, scope and available features. Alignments are now based on the ultrafast short-read aligner Bowtie (granting also colour space support, allowing mismatches and improving speed) and 31 genomes, including 6 plant genomes, can now be analysed (previous version contained only 7). Differences between plant and animal microRNAs have been taken into account for the prediction models and differential expression of both, known and predicted microRNAs, between two conditions can be calculated. Additionally, consensus sequences of predicted mature and precursor microRNAs can be obtained from multiple samples, which increases the reliability of the predicted microRNAs. Finally, a stand-alone version of the miRanalyzer that is based on a local and easily customized database is also available; this allows the user to have more control on certain parameters as well as to use specific data such as unpublished assemblies or other libraries that are not available in the web server. miRanalyzer is available at http://bioinfo2.ugr.es/miRanalyzer/miRanalyzer.php. PMID:21515631

  13. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
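
    For flavor, the sketch below creates a block device image with Ceph's documented Python bindings (librados/librbd); the pool and image names are placeholders, and a reachable cluster with a valid ceph.conf is assumed.

      import rados
      import rbd

      cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
      cluster.connect()
      ioctx = cluster.open_ioctx("rbd")  # pool name (placeholder)
      try:
          rbd.RBD().create(ioctx, "vm-disk-0", 4 * 1024 ** 3)  # 4 GiB RBD image
      finally:
          ioctx.close()
          cluster.shutdown()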

  14. Automated generation of a World Wide Web-based data entry and check program for medical applications.

    PubMed

    Kiuchi, T; Kaihara, S

    1997-02-01

    The World Wide Web-based form is a promising method for the construction of an on-line data collection system for clinical and epidemiological research. It is, however, laborious to prepare a common gateway interface (CGI) program for each project, which the World Wide Web server needs to handle the submitted data. In medicine, it is even more laborious because the CGI program must check deficits (missing values), types, ranges, and logical errors (bad combinations of data) of the entered data for quality assurance, as well as the data length and meta-characters of the entered data to enhance the security of the server. We have extended the specification of the hypertext markup language (HTML) form to accommodate the information necessary for such data checking, and we have developed software named AUTOFORM for this purpose. The software automatically analyzes the extended HTML form and generates the corresponding ordinary HTML form, 'Makefile', and C source of the CGI programs. The resultant CGI program checks the data entered through the HTML form, records them in a computer, and returns them to the end-user. AUTOFORM drastically reduces the burden of developing World Wide Web-based data entry systems and allows CGI programs to be prepared more securely and reliably than had they been written from scratch.
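
    The sketch below illustrates, in Python rather than the generated C, the kinds of checks such a program performs on submitted data: deficits (missing values), type, range, length, and meta-characters. The field rules are invented for the example.

      import re

      RULES = {
          "age": {"type": int, "min": 0, "max": 120},
          "name": {"type": str, "maxlen": 64},
      }

      def check_record(record):
          """Return a list of validation errors for one submitted form record."""
          errors = []
          for field, rule in RULES.items():
              if not record.get(field):
                  errors.append("%s: missing value" % field)
                  continue
              try:
                  value = rule["type"](record[field])
              except ValueError:
                  errors.append("%s: wrong type" % field)
                  continue
              if rule["type"] is int and not rule["min"] <= value <= rule["max"]:
                  errors.append("%s: out of range" % field)
              if rule["type"] is str:
                  if len(value) > rule["maxlen"]:
                      errors.append("%s: too long" % field)
                  if re.search(r"[<>;&|]", value):
                      errors.append("%s: meta-characters rejected" % field)
          return errors

      print(check_record({"age": "250", "name": "Kiuchi"}))  # ['age: out of range']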

  15. GPU.proton.DOCK: Genuine Protein Ultrafast proton equilibria consistent DOCKing.

    PubMed

    Kantardjiev, Alexander A

    2011-07-01

    GPU.proton.DOCK (Genuine Protein Ultrafast proton equilibria consistent DOCKing) is a state-of-the-art service for in silico prediction of protein-protein interactions via rigorous and ultrafast docking code. It is unique in providing a stringent account of electrostatic self-consistency and of the mutual effects of the docking partners' proton equilibria. GPU.proton.DOCK is the first server offering such a crucial supplement to protein docking algorithms--a step toward more reliable and high-accuracy docking results. The code (especially the Fast Fourier Transform bottleneck and the electrostatic field computation) is parallelized to run on a GPU supercomputer. The high performance will be of use for large-scale structural bioinformatics and systems biology projects, thus bridging the physics of the interactions with the analysis of molecular networks. We propose workflows for exploring in silico charge mutagenesis effects. Special emphasis is given to the interface, which is intuitive and user-friendly. The input is comprised of the atomic coordinate files in PDB format. The advanced user is provided with a special input section for the addition of non-polypeptide charges, extra ionogenic groups with intrinsic pK(a) values, or fixed ions. The output is comprised of docked complexes in PDB format as well as interactive visualization in a molecular viewer. The GPU.proton.DOCK server can be accessed at http://gpudock.orgchm.bas.bg/.

  16. Database architectures for Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).

  17. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.

  18. Training and Maintaining System-Wide Reliability in Outcome Management.

    PubMed

    Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E

    2014-01-01

    The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real-time client- and program-level data, and for the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches to CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by the external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for practitioners trained by both the external and internal technical assistance approaches (.909-.995 and .915-.997, respectively). One- and three-year estimates showed some drift on several scales. High and consistent reliabilities over time and across training methods have implications for CAFAS training of behavioral health care practitioners and for the maintenance of the CAFAS as a global outcome management tool in systems of care.

  19. Assessing Server Fault Tolerance and Disaster Recovery Implementation in Thin Client Architectures

    DTIC Science & Technology

    2007-09-01

    Supported servers include Windows 2003 Server and Windows 2000 Advanced Server. Thin-client hardware excerpts: AMD Geode GX processor, 512MB Flash/256MB DDR RAM, VGA-type video output (DB-15); AMD Geode NX 1500 processor, 256MB, 512MB or 1GB DDR SDRAM, 1GB or 512MB Flash, SiS741GX I/O/peripheral support.

  20. Accountable Information Flow for Java-Based Web Applications

    DTIC Science & Technology

    2010-01-01

    Figure 2: the Swift architecture (Web browser, HTTP Web server, Java servlet framework, Swift server runtime, client runtime library). On the server, the Java application code links against Swift's server-side run-time library, which in turn sits on top of the standard Java servlet framework. AFRL-RI-RS-TR-2010-9, Final Technical Report, January 2010: Accountable Information Flow for Java-Based Web Applications.

  1. Performance characteristics of a batch service queueing system with functioning server failure and multiple vacations

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.

    2018-04-01

    This paper examines a bulk-arrival, batch-service queueing system with server failure during functioning and multiple vacations. Customers arrive at the system in bulk according to a Poisson process with rate λ. Arriving customers are served in batches of minimum size 'a' and maximum size 'b' according to the general bulk service rule. At a service completion epoch, if the queue length is less than 'a', the server leaves for a vacation (secondary job) of random length. After a vacation completes, if the queue length is still less than 'a', the server leaves for another vacation; it keeps taking vacations until the queue length reaches 'a'. The server is not reliable at all times and may fail while serving customers. If the server fails, the service process is not interrupted: it continues for the current batch of customers at a service rate lower than the regular rate, and the server is repaired after completing that service. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system using the supplementary variable technique. Various performance characteristics are also derived, with suitable numerical illustrations.
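
    The general bulk service rule itself is compact; the toy function below (an illustration, not the paper's model) decides the server's next action from the queue length and the thresholds a and b.

      def next_action(queue_length, a, b):
          """Serve a batch of size in [a, b] if possible, else take a vacation."""
          if queue_length < a:
              return ("vacation", 0)              # queue too short: leave on vacation
          return ("serve", min(queue_length, b))  # batch of at most b customers

      for q in (2, 5, 14):
          print(q, next_action(q, a=3, b=10))
          # 2 -> vacation; 5 -> serve a batch of 5; 14 -> serve a batch of 10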

  2. RNAiFold: a web server for RNA inverse folding and molecular design.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring the GC-content to lie within a certain range or requiring the numbers of strong (GC), weak (AU) and wobble (GU) base pairs to lie in certain ranges, the RNAiFold web server determines one or more RNA sequences whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic and hence is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem: given a representation of the desired hybridization structure, RNAiFold returns two sequences whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold. Source code for the underlying algorithms, implemented in COMET and supported on Linux, can be downloaded at the server website.
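
    As a rough illustration of the inverse folding problem the server solves, the sketch below verifies a candidate sequence against a toy target structure using the ViennaRNA Python bindings (assumed installed); this shows only the verification step, not RNAiFold's constraint-programming search.

      import RNA  # ViennaRNA Python bindings (assumed available)

      target = "((((....))))"        # toy target secondary structure
      candidate = "GGGGAAAACCCC"     # candidate sequence to check

      structure, mfe = RNA.fold(candidate)  # minimum free-energy structure
      print(structure, "%.2f kcal/mol" % mfe)
      print("solves inverse folding:", structure == target)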

  3. Serving Satellite Remote Sensing Data to User Community through the OGC Interoperability Protocols

    NASA Astrophysics Data System (ADS)

    di, L.; Yang, W.; Bai, Y.

    2005-12-01

    Remote sensing is one of the major methods for collecting geospatial data, and a huge amount of remote sensing data has been collected by space agencies and private companies around the world. For example, NASA's Earth Observing System (EOS) is generating more than 3 TB of remote sensing data per day. The data collected by EOS are processed, distributed, archived, and managed by the EOS Data and Information System (EOSDIS). Currently, EOSDIS is managing several petabytes of data. All of those data are not only valuable for global change research, but also useful for local and regional applications and decision making. How to make the data easily accessible to and usable by the user community is one of the key issues for realizing the full potential of these valuable datasets. In the past several years, the Open Geospatial Consortium (OGC) has developed several interoperability protocols aimed at making geospatial data easily accessible to and usable by the user community through the Internet. The protocols particularly relevant to the discovery, access, and integration of multi-source satellite remote sensing data are the Catalog Service for Web (CS/W) and Web Coverage Service (WCS) specifications. The OGC CS/W specifies the interfaces, HTTP protocol bindings, and a framework for defining application profiles required to publish and access digital catalogues of metadata for geographic data, services, and related resource information. The OGC WCS specification defines the interfaces between web-based clients and servers for accessing on-line multi-dimensional, multi-temporal geospatial coverages in an interoperable way. Based on the definitions of OGC and ISO 19123, coverage data include all remote sensing images as well as gridded model outputs. The Laboratory for Advanced Information Technology and Standards (LAITS), George Mason University, has been working for many years on developing and implementing OGC specifications to better serve NASA Earth science data to the user community. We have developed the NWGISS software package, which implements multiple OGC specifications, including OGC WMS, WCS, CS/W, and WFS. As a part of the NASA REASON GeoBrain project, the NWGISS WCS and CS/W servers have been extended to provide operational access to NASA EOS data at the data pools through OGC protocols and to make both services chainable in web-service chains. The extensions to the WCS server include the implementation of WCS 1.0.0 and WCS 1.0.2 and the development of a WSDL description of the WCS services. In order to find the on-line EOS data resources, the CS/W server has been extended at the back end to search metadata in NASA ECHO. This presentation reports these extensions and discusses lessons learned from the implementation. It also discusses the advantages, disadvantages, and future improvements of the OGC specifications, particularly the WCS.
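
    As an illustration of the coverage-access protocol discussed above, the sketch below forms a WCS 1.0.0 GetCoverage request; the parameter names follow the OGC WCS 1.0.0 specification, while the host and coverage identifier are placeholders.

      from urllib.parse import urlencode

      params = {
          "SERVICE": "WCS",
          "VERSION": "1.0.0",
          "REQUEST": "GetCoverage",
          "COVERAGE": "sample_coverage",  # placeholder coverage identifier
          "CRS": "EPSG:4326",
          "BBOX": "-125,25,-66,50",
          "WIDTH": "512",
          "HEIGHT": "256",
          "FORMAT": "GeoTIFF",
      }
      print("http://wcs.example.org/wcs?" + urlencode(params))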

  4. Identifying Psychometric Properties of the Social-Emotional Learning Skills Scale

    ERIC Educational Resources Information Center

    Esen-Aygun, Hanife; Sahin-Taskin, Cigdem

    2017-01-01

    This study aims to develop a comprehensive scale of social-emotional learning. After constructing a wide range of item pool and expertise evaluation, validity and reliability studies were carried out through using the data-set of 439 primary school students at 3rd and 4th grade levels. Exploratory and confirmatory factor analysis results revealed…

  5. 76 FR 21889 - Nebraska Public Power District; Southwest Power Pool Regional Entity; Notice of Extension of Time

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-19

    ... Time On April 8, 2011, the Midwest Reliability Organization (MRO) filed a motion for an extension of time to file comments in connection with the March 18, 2011 Petition of Nebraska Public Power District... may oppose both petitions. Upon consideration, notice is hereby given that an extension of time for...

  6. Development of Teachers' Attitude Scale towards Science Fair

    ERIC Educational Resources Information Center

    Tortop, Hasan Said

    2013-01-01

    This study was conducted to develop a new scale for measuring teachers' attitude towards science fair. Teacher Attitude Scale towards Science Fair (TASSF) is an inventory made up of 19 items and five dimensions. The study included such stages as literature review, the preparation of the item pool and the reliability and validity analysis. First of…

  7. Diversity Leadership Skills of School Administrators: A Scale Development Study

    ERIC Educational Resources Information Center

    Polat, Soner; Arslan, Yaser; Ölçüm, Dinçer

    2017-01-01

    The aim of this study is to develop a valid and reliable instrument to determine the level of school administrators' diversity leadership based on teachers' perceptions. For this purpose, an item pool was created which includes 68 questions based on the literature, and data were obtained from 343 teachers. Exploratory factor analysis (EFA) was…

  8. Development and Validation of Perceptions of Online Interaction Scale

    ERIC Educational Resources Information Center

    Bagriacik Yilmaz, Ayse; Karatas, Serçin

    2018-01-01

    The aim of this study was to develop a measurement instrument which is compatible with literature, of which validity and reliability are proved with the aim of determining interaction perceived by learners in online learning environments. Accordingly, literature review was made, and outline form of the scale was formed with item pool by taking 14…

  9. A reliable method of analyzing dietaries of mycophagous small mammals

    Treesearch

    W. Colgan; A.B. Carey; James M. Trappe

    1997-01-01

    Two methods of analyzing the dietaries of mycophagous small mammals were compared. Fecal pellets were collected from 11 northern flying squirrels and 12 Townsend's chipmunks, all caught live. In 1 method, pellets from each individual were examined microscopically; in the other, samples from 3 or 4 individuals from each species were pooled and the number of slides...

  10. Red Hat Enterprise Virtualization - KVM-based infrastructure services at BNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortijo, D.

    2011-06-14

    Over the past 18 months, BNL has moved a large percentage of its Linux-based servers and services into a Red Hat Enterprise Virtualization (RHEV) environment. This presentation will address our approach to virtualization, critical decision points, and our implementation. Specific topics include an overview of hardware and software requirements, networking, and storage; the decision for the Red Hat solution over competing products (VMWare, Xen, etc.); details on some of the features of RHEV, both current and on the roadmap; a review of performance and reliability gains since deployment completion; the path forward for RHEV at BNL; and caveats and potential problems.

  11. Development of Constellation's Launch Control System

    NASA Technical Reports Server (NTRS)

    Lougheed, Kirk D.; Peaden, Cary J.

    2010-01-01

    The paper focuses on the National Aeronautics and Space Administration (NASA) Constellation Program's Launch Control System (LCS) development effort at Kennedy Space Center (KSC). It provides a brief history of some preceding efforts to provide launch control and ground processing systems for other NASA programs, and some lessons learned from those experiences. It then provides high level descriptions of the LCS mission, objectives, organization, architecture, and progress. It discusses some of our development tenets, including our use of standards based design and use of off-the-shelf products whenever possible, incremental development cycles, and highly reliable, available, and supportable enterprise class system servers. It concludes with some new lessons learned and our plans for the future.

  12. Recent developments for the Large Binocular Telescope Guiding Control Subsystem

    NASA Astrophysics Data System (ADS)

    Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.

    2014-07-01

    The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for PEPSI and MODS instruments doubled the number of focal stations in use including respective motion, camera controller server computers, and software infrastructure communicating with Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. This paper also discusses the current GCS status and reviews potential upgrades to further improve its performance.

  13. geoknife: Reproducible web-processing of large gridded datasets

    USGS Publications Warehouse

    Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.

    2016-01-01

    Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.

  14. Interrater and Intrarater Reliability of the Balance Computerized Adaptive Test in Patients With Stroke.

    PubMed

    Chiang, Hsin-Yu; Lu, Wen-Shian; Yu, Wan-Hui; Hsueh, I-Ping; Hsieh, Ching-Lin

    2018-04-11

    Objective: To examine the interrater and intrarater reliability of the Balance Computerized Adaptive Test (Balance CAT) in patients with chronic stroke having a wide range of balance functions. Design: Repeated assessments design (1wk apart). Setting: Seven teaching hospitals. Participants: A pooled sample (N=102) including 2 independent groups of outpatients (n=50 for the interrater reliability study; n=52 for the intrarater reliability study) with chronic stroke. Interventions: Not applicable. Main Outcome Measures: Balance CAT. Results: For the interrater reliability study, the values of the intraclass correlation coefficient, minimal detectable change (MDC), and percentage of MDC (MDC%) for the Balance CAT were .84, 1.90, and 31.0%, respectively. For the intrarater reliability study, the values of the intraclass correlation coefficient, MDC, and MDC% ranged from .89 to .91, from 1.14 to 1.26, and from 17.1% to 18.6%, respectively. Conclusions: The Balance CAT showed sufficient intrarater reliability in patients with chronic stroke having balance functions ranging from sitting with support to independent walking. Although the Balance CAT may have good interrater reliability, we found substantial random measurement error between different raters. Accordingly, if the Balance CAT is used as an outcome measure in clinical or research settings, the same raters are suggested over different time points to ensure reliable assessments. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
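
    For reference, the minimal detectable change statistics quoted above are conventionally computed from the reliability coefficient (standard formulas, assumed rather than taken from the paper):

      \mathrm{SEM} = s\sqrt{1 - \mathrm{ICC}}, \qquad
      \mathrm{MDC} = 1.96 \times \sqrt{2} \times \mathrm{SEM}, \qquad
      \mathrm{MDC\%} = 100 \times \mathrm{MDC} / \bar{x},

    where s is the sample standard deviation of the scores and \bar{x} their mean.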

  15. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins.

    PubMed

    Konc, Janez; Janezic, Dusanka

    2012-07-01

    The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services, and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si.

  16. CDC WONDER: a cooperative processing architecture for public health.

    PubMed Central

    Friede, A; Rosen, D H; Reid, J A

    1994-01-01

    CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813

  17. An Application Server for Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Luetkemeyer, Kelly G.

    1998-11-01

    Tech-X Corporation has developed SciChat, an application server for scientific collaboration. Connections are made to the server through a Java client, which can be either an application or an applet served in a web page. Once connected, the client may choose to start or join a session. A session includes not only other clients, but also an application. Any client can send a command to the application. This command is executed on the server and echoed to all clients. The results of the command, whether numerical or graphical, are then distributed to all of the clients; thus, multiple clients can interact collaboratively with a single application. The client is developed in Java, the server in C++, and the middleware is the Common Object Request Broker Architecture (CORBA). In this system, the graphical user interface processing is on the client machine, so one avoids the bandwidth limitations that occur when running X over the Internet. Because the server, client, and middleware are object oriented, new types of servers and clients specialized to particular scientific applications are more easily developed.

  18. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has recently established itself as a technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
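
    A toy sketch of the self-optimization loop described above follows: add or remove (virtual) server instances to keep average utilization inside a target band. The thresholds and the provisioning decision are illustrative assumptions, not part of the WSDM standard.

      def rebalance(utilizations, low=0.3, high=0.7, min_servers=1):
          """Return the new desired cluster size given per-server utilizations."""
          n = len(utilizations)
          avg = sum(utilizations) / n
          if avg > high:
              return n + 1  # provision one more instance
          if avg < low and n > min_servers:
              return n - 1  # deprovision an idle instance
          return n

      print(rebalance([0.9, 0.85, 0.8]))  # -> 4: scale out
      print(rebalance([0.1, 0.15]))       # -> 1: scale in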

  19. Hybrid Rendering with Scheduling under Uncertainty

    PubMed Central

    Tamm, Georg; Krüger, Jens

    2014-01-01

    As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is put at the whim of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system. PMID:25309115
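
    The scheduling idea lends itself to a small sketch. The function below, an illustration rather than the authors' scheduler, splits a batch of frames between server and client in proportion to their effective throughputs, where the server's per-frame cost includes the transfer time measured at runtime.

        def split_workload(frames: int, server_ms: float, client_ms: float,
                           transfer_ms: float) -> tuple[int, int]:
            """server_ms/client_ms: mean per-frame render times observed at
            runtime; transfer_ms: mean per-frame network cost for
            server-rendered results. Returns (server frames, client frames)."""
            eff_server = server_ms + transfer_ms   # server work pays the network
            rate_s, rate_c = 1.0 / eff_server, 1.0 / client_ms
            n_server = round(frames * rate_s / (rate_s + rate_c))
            return n_server, frames - n_server

        # A capable client behind a slow network keeps most frames local.
        print(split_workload(100, server_ms=5.0, client_ms=12.0, transfer_ms=40.0))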

  20. Web GIS in practice IV: publishing your health maps and connecting to remote WMS sources using the Open Source UMN MapServer and DM Solutions MapLab

    PubMed Central

    Boulos, Maged N Kamel; Honda, Kiyoshi

    2006-01-01

    Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
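
    For readers who prefer to script such maps, a WMS layer like those published in the tutorial can be fetched with a standard GetMap request. The query parameters below are plain OGC WMS 1.1.1; the server URL, map file path and layer name are placeholders, not addresses from the article.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        # Hypothetical UMN MapServer endpoint and layer.
        base = "http://example.org/cgi-bin/mapserv?map=/maps/health.map"
        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "disease_rates",       # hypothetical layer name
            "SRS": "EPSG:4326",
            "BBOX": "-180,-90,180,90",       # lon/lat extent
            "WIDTH": "800", "HEIGHT": "400",
            "FORMAT": "image/png",
        }
        url = base + "&" + urlencode(params)
        with urlopen(url) as resp:           # returns the rendered map image
            png_bytes = resp.read()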

  1. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins

    PubMed Central

    Konc, Janez; Janežič, Dušanka

    2012-01-01

    The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si. PMID:22600737

  2. R3D Align web server for global nucleotide to nucleotide alignments of RNA 3D structures.

    PubMed

    Rahrig, Ryan R; Petrov, Anton I; Leontis, Neocles B; Zirbel, Craig L

    2013-07-01

    The R3D Align web server provides online access to 'RNA 3D Align' (R3D Align), a method for producing accurate nucleotide-level structural alignments of RNA 3D structures. The web server provides a streamlined and intuitive interface, input data validation and output that is more extensive and easier to read and interpret than related servers. The R3D Align web server offers a unique Gallery of Featured Alignments, providing immediate access to pre-computed alignments of large RNA 3D structures, including all ribosomal RNAs, as well as guidance on effective use of the server and interpretation of the output. By accessing the non-redundant lists of RNA 3D structures provided by the Bowling Green State University RNA group, R3D Align connects users to structure files in the same equivalence class and the best-modeled representative structure from each group. The R3D Align web server is freely accessible at http://rna.bgsu.edu/r3dalign/.

  3. MicroSEQ® Salmonella spp. Detection Kit Using the Pathatrix® 10-Pooling Salmonella spp. Kit Linked Protocol Method Modification.

    PubMed

    Wall, Jason; Conrad, Rick; Latham, Kathy; Liu, Eric

    2014-03-01

    Real-time PCR methods for detecting foodborne pathogens offer the advantages of simplicity and quick time to results compared to traditional culture methods. The addition of a recirculating pooled immunomagnetic separation method prior to real-time PCR analysis increases processing output while reducing both cost and labor. This AOAC Research Institute method modification study validates the MicroSEQ® Salmonella spp. Detection Kit [AOAC Performance Tested Method (PTM) 031001] linked with the Pathatrix® 10-Pooling Salmonella spp. Kit (AOAC PTM 090203C) in diced tomatoes, chocolate, and deli ham. The Pathatrix 10-Pooling protocol represents a method modification of the enrichment portion of the MicroSEQ Salmonella spp. method. The results of the method modification were compared to standard cultural reference methods for diced tomatoes, chocolate, and deli ham. All three matrixes were analyzed in a paired study design. An additional set of chocolate test portions was analyzed using an alternative enrichment medium in an unpaired study design. For all matrixes tested, there were no statistically significant differences in the number of positive test portions detected by the modified candidate method compared to the appropriate reference method. The MicroSEQ Salmonella spp. protocol linked with the Pathatrix individual or 10-Pooling procedure demonstrated reliability as a rapid, simplified method for the preparation of samples and subsequent detection of Salmonella in diced tomatoes, chocolate, and deli ham.

  4. Mitigating Security Issues: The University of Memphis Case.

    ERIC Educational Resources Information Center

    Jackson, Robert; Frolick, Mark N.

    2003-01-01

    Studied a server security breach at the University of Memphis, Tennessee, to highlight personnel roles, detection of the compromised server, policy enforcement, forensics, and the proactive search for other servers threatened in the same way. (SLD)

  5. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components. • MS SQL server 2008 backend database. • Microsoft IIS running on Windows server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access

  6. Geospatial Authentication

    NASA Technical Reports Server (NTRS)

    Lyle, Stacey D.

    2009-01-01

    A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, granting access to critical geospatial information only when the rover(s) is/are within a set of boundaries or a specific area. The advantage lies in that the system admits into the server only those within the designated geospatial boundaries or areas. The Geospatial Authentication software has two parts: Server and Client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is GUI Windows CE software, or Mobile Graphical Software, that allows users to authenticate into the network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
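
    The server-side admission decision reduces to a geospatial containment test. The sketch below shows one common way to do it, a ray-casting point-in-polygon check on the client's GPS fix; the actual algorithm used by the Geospatial Authentication software is not stated in the record, and the boundary coordinates are invented.

        def inside(lon: float, lat: float,
                   poly: list[tuple[float, float]]) -> bool:
            """Ray-casting test: is (lon, lat) within the boundary polygon?"""
            n, hit = len(poly), False
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > lat) != (y2 > lat):           # edge straddles the ray
                    if lon < (x2 - x1) * (lat - y1) / (y2 - y1) + x1:
                        hit = not hit
            return hit

        area = [(-97.4, 27.7), (-97.3, 27.7), (-97.3, 27.8), (-97.4, 27.8)]
        print(inside(-97.35, 27.75, area))   # True: grant access to the VPN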

  7. The Development of a Remote Patient Monitoring System using Java-enabled Mobile Phones.

    PubMed

    Kogure, Y; Matsuoka, H; Kinouchi, Y; Akutagawa, M

    2005-01-01

    A remote patient monitoring system is described. The system monitors information on multiple patients in the ICU/CCU via 3G mobile phones. Conventionally, patient information such as vital signs is collected and stored on patient information systems. In the proposed system, the patient information is re-collected by a remote information server and transported to mobile phones. The server works as a gateway between the hospital intranet and public networks. Information provided by the server consists of graphs and text data. Doctors can browse a patient's information on their mobile phones via the server, using a custom Java application to display the data. In this study, the information server and Java application are developed, and communication between the server and mobile phone is confirmed in a model environment. Applying this system to practical patient information systems is future work.

  8. Precise Estimation of Allele Frequencies of Single-Nucleotide Polymorphisms by a Quantitative SSCP Analysis of Pooled DNA

    PubMed Central

    Sasaki, Tomonari; Tahira, Tomoko; Suzuki, Akari; Higasa, Koichiro; Kukita, Yoji; Baba, Shingo; Hayashi, Kenshi

    2001-01-01

    We show that single-nucleotide polymorphisms (SNPs) of moderate to high heterozygosity (minor allele frequencies >10%) can be efficiently detected, and their allele frequencies accurately estimated, by pooling the DNA samples and applying a capillary-based SSCP analysis. In this method, alleles are separated into peaks, and their frequencies can be reliably and accurately quantified from their peak heights (SD <1.8%). We found that as many as 40% of publicly available SNPs that were analyzed by this method have widely differing allele frequency distributions among groups of different ethnicity (parents of Centre d'Etude Polymorphisme Humaine families vs. Japanese individuals). These results demonstrate the effectiveness of the present pooling method in the reevaluation of candidate SNPs that have been collected by examination of limited numbers of individuals. The method should also serve as a robust quantitative technique for studies in which a precise estimate of SNP allele frequencies is essential—for example, in linkage disequilibrium analysis. PMID:11083945
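
    The quantification step is simple arithmetic: each allele's frequency in the pool is its peak height divided by the summed peak heights. A worked example, with invented peak values:

        def allele_frequencies(peak_heights: dict[str, float]) -> dict[str, float]:
            """Estimate pooled allele frequencies from SSCP peak heights."""
            total = sum(peak_heights.values())
            return {allele: h / total for allele, h in peak_heights.items()}

        print(allele_frequencies({"A": 820.0, "G": 180.0}))
        # {'A': 0.82, 'G': 0.18} -> minor allele frequency of 18%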

  9. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain/store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked with this software because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library included with the ELROS distribution also has to be linked in, since the server uses the DAP library functions to communicate with directory servers.

  10. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is performing the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and thus these extra delays, the remote-server approach outperforms the in-phone approach overall in terms of the selected speed and correct-recognition metrics whenever the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of the speed and acceptable correct-recognition metrics.
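
    The trade-off can be stated as a one-line comparison: the remote-server approach wins only when its OCR speed-up outweighs the added transfer delays. The timings in the sketch are illustrative placeholders, not the study's measurements.

        def choose_approach(ocr_phone_s: float, ocr_server_s: float,
                            upload_s: float, download_s: float) -> str:
            """Pick the faster pipeline for one document image."""
            in_phone = ocr_phone_s
            remote = upload_s + ocr_server_s + download_s
            return "remote-server" if remote < in_phone else "in-phone"

        # A compressed, downscaled image uploads quickly, so server OCR pays off.
        print(choose_approach(ocr_phone_s=9.0, ocr_server_s=1.5,
                              upload_s=2.0, download_s=0.5))   # remote-server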

  11. MODEL FOR INSTANTANEOUS RESIDENTIAL WATER DEMANDS

    EPA Science Inventory

    Residential water use is visualized as a customer-server interaction often encountered in queueing theory. Individual customers are assumed to arrive according to a nonhomogeneous Poisson process, then engage water servers for random lengths of time. Busy servers are assumed t...
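
    A nonhomogeneous Poisson arrival stream of this kind is easy to simulate by thinning (Lewis-Shedler): draw candidate arrivals at a constant majorizing rate and accept each with probability rate(t)/rate_max. The diurnal rate function below is an invented example, not the model's fitted demand curve.

        import math
        import random

        def rate(t_hours: float) -> float:
            """Water-use events per hour; illustrative diurnal shape."""
            return 30 + 25 * math.sin(math.pi * t_hours / 12) ** 2

        def arrivals(horizon: float, lam_max: float = 60.0) -> list[float]:
            """Arrival times on [0, horizon] via thinning; lam_max >= max rate."""
            t, times = 0.0, []
            while True:
                t += random.expovariate(lam_max)         # candidate arrival
                if t > horizon:
                    return times
                if random.random() < rate(t) / lam_max:  # accept w.p. rate/lam_max
                    times.append(t)

        print(len(arrivals(24.0)), "simulated water-use events in one day")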

  12. Report: Results of Technical Vulnerability Assessment: EPA’s Directory Service System Authentication and Authorization Servers

    EPA Pesticide Factsheets

    Report #11-P-0597, September 9, 2011. Vulnerability testing of EPA’s directory service system authentication and authorization servers conducted in March 2011 identified authentication and authorization servers with numerous vulnerabilities.

  13. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify server efficiency for three different brands. Efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry-standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a group is similar to that of all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
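
    The report's efficiency metric is a simple ratio, which makes brand comparisons at a fixed benchmark load easy to reproduce. The numbers below are illustrative, not the demonstration's measurements.

        def efficiency(ops_per_s: float, watts: float) -> float:
            """Average compute rate over average power: computations per joule."""
            return ops_per_s / watts

        brand_a = efficiency(ops_per_s=2.0e9, watts=250.0)   # 8.00e6 ops/J
        brand_b = efficiency(ops_per_s=2.0e9, watts=320.0)   # 6.25e6 ops/J
        print(brand_a / brand_b)   # brand A delivers ~28% more work per joule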

  14. Jenner-predict server: prediction of protein vaccine candidates (PVCs) in bacteria based on host-pathogen interactions

    PubMed Central

    2013-01-01

    Background Subunit vaccines based on recombinant proteins have been effective in preventing infectious diseases and are expected to meet the demands of future vaccine development. Computational approaches, especially the reverse vaccinology (RV) method, have enormous potential for identification of protein vaccine candidates (PVCs) from a proteome. The existing protective antigen prediction software and web servers have low prediction accuracy, leading to limited applications for vaccine development. Besides machine learning techniques, those software and web servers have considered only a protein's adhesin-likeliness as the criterion for identification of PVCs. Several non-adhesin functional classes of proteins involved in host-pathogen interactions and pathogenesis are known to provide protection against bacterial infections. Therefore, knowledge of bacterial pathogenesis has potential to identify PVCs. Results A web server, Jenner-Predict, has been developed for prediction of PVCs from proteomes of bacterial pathogens. The web server targets host-pathogen interactions and pathogenesis by considering known functional domains from protein classes such as adhesin, virulence, invasin, porin, flagellin, colonization, toxin, choline-binding, penicillin-binding, transferrin-binding, fibronectin-binding and solute-binding. It predicts non-cytosolic proteins containing the above domains as PVCs. It also provides the vaccine potential of PVCs in terms of their possible immunogenicity by comparison with experimentally known IEDB epitopes, absence of autoimmunity and conservation in different strains. Predicted PVCs are prioritized so that only a few prospective PVCs need be validated experimentally. The performance of the web server was evaluated against known protective antigens from diverse classes of bacteria reported in the Protegen database and against datasets used for VaxiJen server development. The web server efficiently predicted known vaccine candidates reported from the Streptococcus pneumoniae and Escherichia coli proteomes. The Jenner-Predict server outperformed the NERVE, Vaxign and VaxiJen methods. It has a sensitivity of 0.774 and 0.711 for the Protegen and VaxiJen datasets, respectively, while a specificity of 0.940 was obtained for the latter dataset. Conclusions The better prediction accuracy of the Jenner-Predict web server signifies that domains involved in host-pathogen interactions and pathogenesis are better criteria for prediction of PVCs. The web server has successfully predicted the maximum number of known PVCs belonging to different functional classes. Jenner-Predict server is freely accessible at http://117.211.115.67/vaccine/home.html PMID:23815072

  15. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    PubMed

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  16. Client - server programs analysis in the EPOCA environment

    NASA Astrophysics Data System (ADS)

    Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano

    1996-09-01

    Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer has first to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kinds of qualitative and quantitative analysis supported by its tools.

  17. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    PubMed

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
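
    The generate-and-grade flow ports naturally to any server-side language. The sketch below restates it in Python rather than classic ASP (which scripted the server in VBScript or JScript): the student ID seeds a deterministic random generator, so the server can regenerate the same individualized assignment later to grade the submitted answers.

        import random

        def assignment(student_id: str, n_items: int = 5) -> list[tuple[int, int]]:
            """Deterministically generate per-student problems (here: x + y)."""
            rng = random.Random(student_id)    # same ID -> same assignment
            return [(rng.randint(10, 99), rng.randint(10, 99))
                    for _ in range(n_items)]

        def grade(student_id: str, answers: list[int]) -> float:
            """Regenerate the key and score the submission as a percentage."""
            key = [x + y for x, y in assignment(student_id)]
            correct = sum(a == k for a, k in zip(answers, key))
            return 100.0 * correct / len(key)

        problems = assignment("s123456")
        print(grade("s123456", [x + y for x, y in problems]))   # 100.0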

  18. An analytical approach to reduce between-plate variation in multiplex assays that measure antibodies to Plasmodium falciparum antigens.

    PubMed

    Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J

    2017-07-17

    Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as, the overall breadth of the response are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 malaria-negative individuals), together with five pooled positive-controls and two pooled negative-controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and reduced the mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measure antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
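
    A minimal sketch of the idea, assuming the correction is a per-plate shift on the natural-log scale that aligns each plate's pooled positive controls with their across-plate mean (the published procedure may differ in detail):

        import math

        def normalize_plate(samples: list[float], plate_controls: list[float],
                            grand_control_mean_log: float) -> list[float]:
            """Return log-scale sample values shifted by the plate's offset."""
            plate_mean_log = (sum(math.log(c) for c in plate_controls)
                              / len(plate_controls))
            offset = grand_control_mean_log - plate_mean_log  # per-plate shift
            return [math.log(x) + offset for x in samples]

        # Example with invented MFI values; 7.2 stands in for the across-plate
        # mean of the pooled controls on the log scale.
        print(normalize_plate([900.0, 4500.0], [1100.0, 1300.0], 7.2))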

  19. Designing a Relational Database for the Basic School; Schools Command Web Enabled Officer and Enlisted Database (Sword)

    DTIC Science & Technology

    2002-06-01

    Student memo for personnel MCLLS ... Migrate data to SQL Server... The Web Server is on the same server as the SWORD database in the current version. (Figure labels: SQL query, results set, dynamic HTML page.) ...still be supported by Access. SQL Server would be a more viable tool for a fully developed application based on the number of potential users and

  20. Understanding Customer Dissatisfaction with Underutilized Distributed File Servers

    NASA Technical Reports Server (NTRS)

    Riedel, Erik; Gibson, Garth

    1996-01-01

    An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.

  1. Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh

    NASA Astrophysics Data System (ADS)

    Mortuza, M. R.; Demissie, Y.; Li, H. Y.

    2014-12-01

    The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and the associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimation of the frequency of occurrence of such extreme precipitation events is thus important for developing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to the 1-day, 2-day and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools the information from climatologically similar sites to make reliable estimates of quantiles, provided that the pooling group is homogeneous and of reasonable size. We have used the region of influence (ROI) approach, along with a homogeneity measure based on L-moments, to identify the homogeneous pooling groups for each site. Five 3-parameter distributions (i.e., Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type Three, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using the Bayesian Model Averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as in exploring spatio-temporal variations of extreme precipitation and associated risk.
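
    The pooled analysis rests on sample L-moments. As a worked illustration, the first two can be computed from probability-weighted moments b0 and b1 of the ordered sample, with the L-mean l1 = b0 and the L-scale l2 = 2*b1 - b0; the annual-maximum values below are invented.

        def l_moments(sample: list[float]) -> tuple[float, float]:
            """First two sample L-moments (l1, l2) via b0 and b1."""
            x = sorted(sample)                   # ascending order statistics
            n = len(x)
            b0 = sum(x) / n
            b1 = sum((i - 1) / (n - 1) * x[i - 1] for i in range(2, n + 1)) / n
            return b0, 2 * b1 - b0               # l1 (L-mean), l2 (L-scale)

        annual_max_mm = [92.0, 110.0, 85.0, 140.0, 123.0, 99.0]  # illustrative
        print(l_moments(annual_max_mm))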

  2. The Most Popular Astronomical Web Server in China

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Zhao, Yongheng

    Affected by the continuing depression of the IT economy, free homepage space is becoming scarcer and scarcer, and it is more and more difficult to construct websites for amateur astronomers who cannot pay for commercial space. Last May, with the support of the Chinese National Astronomical Observatory and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope project, we set up a special web server (amateur.lamost.org) to provide free, large, stable and advertisement-free homepage space to Chinese amateur astronomers and non-professional organizations. After only one year, more than 80 websites are hosted on the server. More than 10,000 visitors from nearly 40 countries visit the server, and the amount of data they download exceeds 4 gigabytes per day. The server has become the most popular amateur astronomical web server in China, and it stores the most abundant Chinese amateur astronomical resources. Because of this success, our service has been drawing tremendous attention from related institutions; recently, the Chinese National Natural Science Foundation has shown great interest in supporting the service. This paper introduces the motivation for and construction of the server, its present utilization, and our future plans.

  3. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H.

    2000-12-01

    As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing, provided by the spatial-reuse ring network between servers and disks, not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial condition, placing videos on the disks in the system successfully. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
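
    An illustrative greedy placement in the spirit of the algorithm described above (the paper's exact procedure is not reproduced here): sort movies by expected demand and always place the next copy on the least-loaded disk with free space, which a heap keeps at O(M log M) overall.

        import heapq

        def place(movies: list[tuple[str, float]], disks: int,
                  capacity: int) -> dict[int, list[str]]:
            """movies: (title, demand) pairs; capacity: titles per disk."""
            layout: dict[int, list[str]] = {d: [] for d in range(disks)}
            heap = [(0.0, d) for d in range(disks)]    # (load, disk id)
            heapq.heapify(heap)
            for title, demand in sorted(movies, key=lambda m: -m[1]):
                while heap and len(layout[heap[0][1]]) >= capacity:
                    heapq.heappop(heap)                # disk full: retire it
                if not heap:
                    raise RuntimeError("add more servers/disks")  # per the paper
                load, d = heapq.heappop(heap)
                layout[d].append(title)
                heapq.heappush(heap, (load + demand, d))
            return layout

        print(place([("A", 0.5), ("B", 0.3), ("C", 0.2)], disks=2, capacity=2))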

  4. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing, provided by the spatial-reuse ring network between servers and disks, not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial condition, placing videos on the disks in the system successfully. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.

  5. SPEER-SERVER: a web server for prediction of protein specificity determining sites

    PubMed Central

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J.; Panchenko, Anna R.; Chakrabarti, Saikat

    2012-01-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. PMID:22689646

  6. Study on an agricultural environment monitoring server system using Wireless Sensor Networks.

    PubMed

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. This collected information is converted into a database by the agricultural environment monitoring server, which consists of a sensor manager, which manages information collected from the WSN sensors; an image information manager, which manages image information collected from CCTVs; and a GPS manager, which processes location information of the monitoring system; the information is then provided to producers. In addition, a solar cell-based power supply is implemented for the server system so that it can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting crop producers' decision making through analysis of the collected information.
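
    A structural sketch of the three managers feeding one store, as described above; the class and field names are assumptions for illustration, not the system's actual schema.

        from dataclasses import dataclass, field

        @dataclass
        class Reading:
            source: str          # "wsn", "cctv" or "gps"
            payload: dict

        @dataclass
        class MonitoringServer:
            db: list = field(default_factory=list)

            def sensor_manager(self, node: str, temp: float, moisture: float):
                self.db.append(Reading("wsn", {"node": node, "temp": temp,
                                               "moisture": moisture}))

            def image_manager(self, camera: str, frame_ref: str):
                self.db.append(Reading("cctv", {"camera": camera,
                                                "frame": frame_ref}))

            def gps_manager(self, lat: float, lon: float):
                self.db.append(Reading("gps", {"lat": lat, "lon": lon}))

        srv = MonitoringServer()
        srv.sensor_manager("node-07", temp=21.4, moisture=0.33)
        srv.gps_manager(34.97, 127.48)
        print(len(srv.db), "records stored")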

  7. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.

  8. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
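
    Zebra's parity works like RAID's: the parity fragment of a stripe is the XOR of its data fragments, so a failed server's fragment can be rebuilt from the survivors. A toy example with invented fragment contents:

        def xor_fragments(fragments: list[bytes]) -> bytes:
            """XOR equal-length fragments byte by byte."""
            out = bytearray(len(fragments[0]))
            for frag in fragments:
                for i, b in enumerate(frag):
                    out[i] ^= b
            return bytes(out)

        stripe = [b"clientA!", b"writes__", b"recently"]   # stripe fragments
        parity = xor_fragments(stripe)
        # Server 1 fails: rebuild its fragment from the others plus parity.
        rebuilt = xor_fragments([stripe[0], stripe[2], parity])
        assert rebuilt == stripe[1]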

  9. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and always supports all special playback functions for all available programs, with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment, and is based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  10. HSYMDOCK: a docking web server for predicting the structure of protein homo-oligomers with Cn or Dn symmetry.

    PubMed

    Yan, Yumeng; Tao, Huanyu; Huang, Sheng-You

    2018-05-26

    A major subclass of protein-protein interactions is formed by homo-oligomers with certain symmetry. Therefore, computational modeling of the symmetric protein complexes is important for understanding the molecular mechanism of related biological processes. Although several symmetric docking algorithms have been developed for Cn symmetry, few docking servers have been proposed for Dn symmetry. Here, we present HSYMDOCK, a web server of our hierarchical symmetric docking algorithm that supports both Cn and Dn symmetry. The HSYMDOCK server was extensively evaluated on three benchmarks of symmetric protein complexes, including the 20 CASP11-CAPRI30 homo-oligomer targets, the symmetric docking benchmark of 213 Cn targets and 35 Dn targets, and a nonredundant test set of 55 transmembrane proteins. It was shown that HSYMDOCK obtained a significantly better performance than other similar docking algorithms. The server supports both sequence and structure inputs for the monomer/subunit. Users have an option to provide the symmetry type of the complex, or the server can predict the symmetry type automatically. The docking process is fast and on average consumes 10∼20 min for a docking job. The HSYMDOCK web server is available at http://huanglab.phys.hust.edu.cn/hsymdock/.

  11. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    PubMed

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.

  12. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    PubMed

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements the DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by charges of biomolecules. Topology parameters are extended to include atomic information of nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at a particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are given back to the end-user via the HTTP protocol. The web server takes advantage of the MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver.

  13. MCTBI: a web server for predicting metal ion effects in RNA structures.

    PubMed

    Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie

    2017-08-01

    Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects for RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects for nucleic acid structures and can provide improved predictions for the effects of metal ions, especially for multivalent ions such as Mg2+, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures.

  14. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients; this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g., CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and, finally, the future evolution of these systems for WLCG and beyond.

  15. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    NASA Astrophysics Data System (ADS)

    Kar, Leow Soo

    2014-07-01

    Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of the day (morning, afternoon and evening) and with the day of the week (weekdays or weekends), the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact) or choice of checkout counters (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block, where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
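
    The same structure can be expressed in a few lines of a process-oriented simulation library. The sketch below uses SimPy (a Python library, standing in here for SAS Simulation Studio); all rates, times and capacities are invented placeholders. A car that finds the parking pool exhausted balks, mirroring the Resource Pool behaviour described above.

        import random
        import simpy

        PARKING_BAYS, COUNTERS = 40, 5
        lost = 0

        def shopper(env, parking, checkout):
            global lost
            if parking.count >= parking.capacity:   # no free bay: leave
                lost += 1
                return
            with parking.request() as bay:          # seize one parking bay
                yield bay
                yield env.timeout(random.uniform(15, 45))        # shopping (min)
                with checkout.request() as till:    # queue at a counter
                    yield till
                    yield env.timeout(random.expovariate(1 / 3)) # service time
                # bay released here, when the customer exits the system

        def arrivals(env, parking, checkout):
            while True:
                yield env.timeout(random.expovariate(1.0))       # ~1 car/min
                env.process(shopper(env, parking, checkout))

        env = simpy.Environment()
        parking = simpy.Resource(env, capacity=PARKING_BAYS)
        checkout = simpy.Resource(env, capacity=COUNTERS)
        env.process(arrivals(env, parking, checkout))
        env.run(until=12 * 60)                      # one 12-hour trading day
        print("customers lost to a full car park:", lost)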

  16. Open Scenario Study, Phase II Report: Assessment and Development of Approaches for Satisfying Unclassified Scenario Needs

    DTIC Science & Technology

    2010-01-01

    interface, another providing the application logic (a program used to manipulate the data), and a server running Microsoft SQL Server or Oracle RDBMS... Oracle ) • Mysql (Open Source) • Other What application server software will be needed? • Application Server • CGI PHP/Perl (Open Source...are used throughout DoD and serve a variety of functions. While DoD has a codified and institutionalized process for the development of a common set

  17. Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution

    DTIC Science & Technology

    2010-10-01

    implementation using two of the most popular open-source servers: the Apache web server and the MySQL database server. For Apache, we measure the effect that...utility ab. (Fig. 3: the MySQL test-insert benchmark measures...various SQL operations; the figure plots total execution time, as reported by the benchmark utility, for the Native, Null, ISR and ISR-MP configurations.) Finally, we benchmarked a MySQL database server using

  18. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  19. Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…

  20. Development and Standardization of an Alienation Scale for Visually Impaired Students

    ERIC Educational Resources Information Center

    Punia, Poonam; Berwal, Sandeep

    2017-01-01

    Introduction: The present study was undertaken to develop a valid and reliable scale for measuring a feeling of alienation in students with visual impairments (that is, those who are blind or have low vision). Methods: In this study, a pool of 60 items was generated to develop an Alienation Scale for Visually Impaired Students (AL-VI) based on a…

  1. Teachers' Attitude towards ICT Use in Secondary Schools: A Scale Development Study

    ERIC Educational Resources Information Center

    Aydin, Mehmet Kemal; Semerci, Ali; Gürol, Mehmet

    2016-01-01

    The current study aims to develop a valid and reliable instrument that measures secondary school teachers' attitudes towards ICT use in teaching and learning process. A cross-sectional survey design was employed with a group of 173 teachers. Based on the literature review, a pool of 21 items was proposed and reviewed by a board of experts. As to…

  2. Thin client (web browser)-based collaboration for medical imaging and web-enabled data.

    PubMed

    Le, Tuong Huu; Malhi, Nadeem

    2002-01-01

    Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.

  3. Openlobby: an open game server for lobby and matchmaking

    NASA Astrophysics Data System (ADS)

    Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.

    2018-03-01

    Online multiplayer is one of the most essential features in modern games. However, while a basic multiplayer feature can be implemented with simple network programming, creating a balanced multiplayer session requires additional player management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server that is available to be used by other developers to support their multiplayer applications. The proposed system acts as a lobby and matchmaker, where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that can be used by developers to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
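
    A hedged sketch of the matchmaking half of such a server: queued players are paired when their ratings are close enough. The pairing criterion and data shapes are invented for illustration and are not OpenLobby's actual API.

        from dataclasses import dataclass

        @dataclass
        class Player:
            name: str
            rating: int

        def matchmake(queue: list, max_gap: int = 100) -> list:
            """Pair queued players whose ratings differ by at most max_gap."""
            pairs, waiting = [], sorted(queue, key=lambda p: p.rating)
            i = 0
            while i + 1 < len(waiting):
                a, b = waiting[i], waiting[i + 1]
                if b.rating - a.rating <= max_gap:
                    pairs.append((a, b))
                    i += 2
                else:
                    i += 1        # keep waiting for a closer opponent
            return pairs

        print(matchmake([Player("ana", 1500), Player("bo", 1540),
                         Player("cy", 2100)]))   # ana vs bo; cy keeps waiting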

  4. Worldwide telemedicine services based on distributed multimedia electronic patient records by using the second generation Web server hyperwave.

    PubMed

    Quade, G; Novotny, J; Burde, B; May, F; Beck, L E; Goldschmidt, A

    1999-01-01

    A distributed multimedia electronic patient record (EPR) is a central component of a medicine-telematics application that supports physicians working in rural areas of South America, and offers medical services to scientists in Antarctica. A Hyperwave server is used to maintain the patient record. As opposed to common web servers--and as a second-generation web server--Hyperwave provides the capability of holding documents in a distributed web space without the problem of broken links. This enables physicians to browse through a patient's record using a standard browser even if the record is distributed over several servers. The patient record implementation is based on the "Good European Health Record" (GEHR) architecture.

  5. Improvements to the National Transport Code Collaboration Data Server

    NASA Astrophysics Data System (ADS)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDSplus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper will review the status and discuss the recent improvements made to the data server, such as the modularization of the data server and the addition of HDF5 and MDSplus data file writing capability.
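
    The "stateful data server instance" design is easy to illustrate without the CORBA plumbing: each client holds an object that remembers its chosen data set and interpolation, so repeated requests stay terse. A minimal Python sketch (class and method names are hypothetical, not from the NTCC code):

        class DataServerInstance:
            """Sketch of a stateful server object: it remembers the client's
            chosen data set and interpolation between calls."""

            def __init__(self, source):
                self.source = source          # e.g. a profile-database handle
                self.dataset = None
                self.interpolation = "linear"

            def select(self, dataset, interpolation=None):
                self.dataset = dataset
                if interpolation is not None:
                    self.interpolation = interpolation

            def get(self, name, grid):
                # A real server would interpolate the named signal onto `grid`
                # using the remembered settings; here we just echo the request.
                return (self.dataset, name, self.interpolation, grid)

        inst = DataServerInstance(source="local-profile-db")
        inst.select("shot_1234", interpolation="cubic")
        print(inst.get("Te", grid=[0.0, 0.5, 1.0]))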

  6. SPACER: server for predicting allosteric communication and effects of regulation

    PubMed Central

    Goncearenco, Alexander; Mitternacht, Simon; Yong, Taipang; Eisenhaber, Birgit; Eisenhaber, Frank; Berezovsky, Igor N.

    2013-01-01

    The SPACER server provides an interactive framework for exploring allosteric communication in proteins with different sizes, degrees of oligomerization and function. SPACER uses recently developed theoretical concepts based on the thermodynamic view of allostery. It proposes easily tractable and meaningful measures that allow users to analyze the effect of ligand binding on the intrinsic protein dynamics. The server shows potential allosteric sites and allows users to explore communication between the regulatory and functional sites. It is possible to explore, for instance, potential effector binding sites in a given structure as targets for allosteric drugs. As input, the server only requires a single structure. The server is freely available at http://allostery.bii.a-star.edu.sg/. PMID:23737445

  7. Implementation of Sensor Twitter Feed Web Service Server and Client

    DTIC Science & Technology

    2016-12-01

    ARL-TN-0807, December 2016, US Army Research Laboratory. Implementation of Sensor Twitter Feed Web Service Server and Client, by Bhagyashree V. Kulkarni (University of Maryland) and Michael H. Lee (Computational...)

  8. Sandia Text ANaLysis Extensible librarY Server

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-05-11

    This is a server wrapper for STANLEY (Sandia Text ANaLysis Extensible librarY). STANLEY provides capabilities for analyzing, indexing and searching through text. STANLEY Server exposes this capability through a TCP/IP interface allowing third party applications and remote clients to access it.
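
    A server of this shape, a thin TCP/IP wrapper around a text-analysis library, can be sketched in a few lines of Python. The analyze() function below is a stand-in for the actual STANLEY calls, which this record does not describe, and the port is arbitrary:

        import socketserver

        def analyze(text):
            # Stand-in for the text-analysis library call: report token count.
            return f"tokens={len(text.split())}\n"

        class AnalysisHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # One request per line, one result per line, over plain TCP.
                for line in self.rfile:
                    result = analyze(line.decode("utf-8"))
                    self.wfile.write(result.encode("utf-8"))

        if __name__ == "__main__":
            with socketserver.TCPServer(("localhost", 9099), AnalysisHandler) as srv:
                srv.serve_forever()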

  9. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches emerging high-performance disk technologies such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications through a sockets-based application program interface (e.g., FTP, rcp, rdump); access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  10. Improvements to Autoplot's HAPI Support

    NASA Astrophysics Data System (ADS)

    Faden, J.; Vandegriff, J. D.; Weigel, R. S.

    2017-12-01

    Autoplot handles data from a variety of data servers. These servers communicate data in different forms, each somewhat different in capabilities and each requiring new interface software. The Heliophysics Application Programmer's Interface (HAPI) attempts to ease this by providing a standard target for clients and servers to meet. Autoplot fully supports reading data from HAPI servers, and support continues to improve as the HAPI server specification matures. This collaboration has already produced robust clients and documentation that would be expensive for groups creating their own protocol. For example, client-side data caching has been introduced, where Autoplot maintains a cache of data for performance and off-line use. This is a feature we considered for previous data systems, but we could never afford the time to study and implement it carefully. Also, Autoplot itself can be used as a server, making the data it can read and the results of its processing available to other data systems. Autoplot use with other data transmission systems is reviewed as well, outlining the features of each system.
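
    For readers unfamiliar with HAPI, a data request is just an HTTP GET against the server's data endpoint. A minimal sketch, assuming a placeholder server URL and dataset id, with parameter names following version 2 of the HAPI specification:

        from urllib.request import urlopen
        from urllib.parse import urlencode

        # Placeholder server and dataset id; substitute a real HAPI server.
        SERVER = "https://example.org/hapi"
        query = urlencode({
            "id": "some_dataset",
            "time.min": "2017-01-01T00:00:00Z",
            "time.max": "2017-01-02T00:00:00Z",
        })

        with urlopen(f"{SERVER}/data?{query}") as resp:
            for line in resp:
                print(line.decode("utf-8").rstrip())   # CSV records by default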

  11. The EBI SRS server-new features.

    PubMed

    Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure

    2002-08-01

    Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at EBI, as well as the European public access point to the MEDLINE database provided by the US National Library of Medicine (NLM). It is a reference server for the latest developments in data and application integration. The new additions include: the concept of virtual databases; integration of XML databases such as the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, and metabolic pathways; user-friendly data representation in "Nice views"; and SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG, freely available to academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.

  12. Bio-inspired diversity for increasing attacker workload

    NASA Astrophysics Data System (ADS)

    Kuhn, Stephen

    2014-05-01

    Much of the traffic in modern computer networks is conducted between clients and servers, rather than client-to-client. As a result, servers represent a high-value target for collection and analysis of network traffic. Because they reside at a single network location (i.e., IP/MAC address) for long periods of time, servers present a static target for surveillance and a unique opportunity to observe the network traffic. Although servers present a heightened value for attackers, the security community as a whole has shifted more towards protecting clients in recent years, leaving a gap in coverage. In addition, servers typically remain active on networks for years, potentially decades. This paper builds on previous work that demonstrated a proof of concept leveraging existing technology for increasing attacker workload. Here we present our clean-slate approach to increasing attacker workload through a novel hypervisor and micro-kernel, utilizing next-generation virtualization technology to create synthetic diversity of the server's presence, including the hardware components.

  13. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    PubMed

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. The server also gives the possibility of using transmembrane protein (TMP) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological predictions for TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Research on Optical Transmitter and Receiver Module Used for High-Speed Interconnection between CPU and Memory

    NASA Astrophysics Data System (ADS)

    He, Huimin; Liu, Fengman; Li, Baoxia; Xue, Haiyun; Wang, Haidong; Qiu, Delong; Zhou, Yunyan; Cao, Liqiang

    2016-11-01

    With the development of the multicore processor, the bandwidth and capacity of the memory, rather than the memory area, are the key factors in server performance. At present, however, the new architectures, such as fully buffered DIMM (FBDIMM), hybrid memory cube (HMC), and high bandwidth memory (HBM), cannot be commercially applied in the server. Therefore, a new architecture for the server is proposed. CPU and memory are separated onto different boards, and optical interconnection is used for the communication between them. Each optical module corresponds to each dual inline memory module (DIMM) with 64 channels. Compared to the previous technology, not only can the architecture realize high-capacity and wide-bandwidth memory, it can also reduce power consumption and cost, and be compatible with existing dynamic random access memory (DRAM). In this article, the proposed module with system-in-package (SiP) integration is demonstrated. The optical module includes a silicon photonic chip, a promising technology for next-generation data exchange centers. Due to the bandwidth-distance performance of the optical interconnection, SerDes chips are introduced to convert the 64-bit data at 800 Mbps from/to 4-channel data at 12.8 Gbps after/before it is transmitted through optical fiber. All the devices are packaged on cheap organic substrates. To ensure the performance of the whole system, several optimization efforts have been performed on the two modules. High-speed interconnection traces have been designed and simulated with electromagnetic simulation software. Steady-state thermal characteristics of the transceiver module have been evaluated by ANSYS APDL based on finite-element methodology (FEM). Heat sinks are placed at the hotspot area to ensure the reliability of all working chips. Finally, this transceiver system based on silicon photonics is measured, and the eye diagrams of data and clock signals are verified.

  15. Web catalog of oceanographic data using GeoNetwork

    NASA Astrophysics Data System (ADS)

    Marinova, Veselka; Stefanov, Asen

    2017-04-01

    Most of the data collected, analyzed and used by the Bulgarian oceanographic data center (BgODC) from scientific cruises, Argo floats, ferry boxes and real-time operating systems are spatially oriented and need to be displayed on a map. The challenge is to make spatial information more accessible to users, decision makers and scientists. In order to meet this challenge, BgODC concentrates its efforts on improving dynamic and standardized access to its geospatial data as well as data from various related organizations and institutions. BgODC is currently implementing a project to create a geospatial portal for distributing metadata and for searching, exchanging and harvesting spatial data. There are many open source software solutions able to create such a spatial data infrastructure (SDI). Finally, GeoNetwork open source was chosen, as it is already widespread. This software is a free, effective and "cheap" solution for implementing an SDI at the organization level. It is platform independent and runs under many operating systems. Filling the catalog goes through these practical steps: • managing and storing data reliably within an MS SQL spatial database; • registering maps and data of various formats and sources in GeoServer (the most popular open source geospatial server, embedded with GeoNetwork); • adding metadata and publishing geospatial data from the GeoNetwork desktop. GeoServer and GeoNetwork are based on Java, so they require a servlet engine such as Tomcat to be installed. The experience gained from the use of GeoNetwork open source confirms that the catalog meets the requirements for data management and is flexible enough to customize. Building the catalog facilitates sustainable data exchange between end users. The catalog is a big step towards implementation of the INSPIRE directive, due to the availability of many features necessary for producing "INSPIRE compliant" metadata records. The catalog now contains all available GIS data provided by BgODC for Internet access. Searching for data within the catalog is based upon geographic extent, theme type and free-text search.

  16. The Live Access Server - A Web-Services Framework for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.

    2005-12-01

    The Live Access Server (LAS) is a general-purpose Web server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data it can serve (in-situ observations and curvilinear grids) and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is important to understand what we hope to gain. Specifically, we would like to make it even easier to add new output products into our core system based on the Ferret analysis and visualization package. By carefully factoring the tasks needed to create a product, we will be able to create new products simply by adding a description of the product into the configuration and by writing the Ferret script needed to create the product. No code will need to be added to the Product Server to bring the new product on-line. The new architecture should be faster at extracting and processing configuration information needed to address each request. Finally, the new Product Server architecture should make it even easier to pass specialized configuration information to the Product Server to deal with unanticipated special data structures or processing requirements.

  17. The DICOM-based radiation therapy information system

    NASA Astrophysics Data System (ADS)

    Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

    Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which are more than just images. This presentation describes how a DICOM-based RT Information System Server can be built based on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distribution, images and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding location of the RT data model for display in the specially designed graphical user interface. The non-DICOM objects were first rendered to DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases have been collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.

  18. Interfaces for Distributed Systems of Information Servers.

    ERIC Educational Resources Information Center

    Kahle, Brewster; And Others

    1992-01-01

    Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…

  19. Passive Detection of Misbehaving Name Servers

    DTIC Science & Technology

    2013-10-01

    Passive Detection of Misbehaving Name Servers, by Leigh B. Metcalf and Jonathan M. Spring, October 2013. Technical report CMU/SEI-2013-TR-010 (ESC-TR...), funded under contract FA8721-05-C-0003.

  20. Reliable protocol for shear wave elastography of lower limb muscles at rest and during passive stretching.

    PubMed

    Dubois, Guillaume; Kheireddine, Walid; Vergari, Claudio; Bonneau, Dominique; Thoreux, Patricia; Rouch, Philippe; Tanter, Mickael; Gennisson, Jean-Luc; Skalli, Wafa

    2015-09-01

    The development of shear wave elastography has given access to non-invasive muscle stiffness assessment in vivo. The aim of the present study was to define a measurement protocol to be used in clinical routine for quantifying the shear modulus of lower limb muscles. Four positions were defined to evaluate the shear modulus in 10 healthy subjects: parallel to the fibers, in the anterior and posterior aspects of the lower limb, at rest and during passive stretching. Reliability was first evaluated on two muscles by three operators; these measurements were repeated six times. Then, measurement reliability was compared in 11 muscles by two operators; these measurements were repeated three times. Reproducibility of the shear modulus was 0.48 kPa and repeatability was 0.41 kPa, with all muscles pooled. Position did not significantly influence reliability. Shear wave elastography appeared to be an appropriate and reliable tool to evaluate the shear modulus of lower limb muscles with the proposed protocol. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  1. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
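
    The abstract does not spell out the calculation methods it compares, but the widely used "mean method" illustrates the idea: SSRT is estimated as the mean go-trial reaction time minus the mean stop-signal delay. A minimal Python sketch with invented numbers:

        import statistics

        def ssrt_mean_method(go_rts, stop_signal_delays):
            """Classic 'mean' estimate: SSRT = mean go RT - mean SSD.
            Assumes the delay staircase converged to ~50% stopping."""
            return statistics.mean(go_rts) - statistics.mean(stop_signal_delays)

        go_rts = [412, 388, 455, 430, 401]      # ms, go-trial reaction times
        ssds   = [180, 210, 230, 210, 190]      # ms, stop-signal delays
        print(round(ssrt_mean_method(go_rts, ssds), 1))   # 213.2 ms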

  2. Development and validation of a paediatric long-bone fracture classification. A prospective multicentre study in 13 European paediatric trauma centres

    PubMed Central

    2011-01-01

    Background The aim of this study was to develop a child-specific classification system for long bone fractures and to examine its reliability and validity on the basis of a prospective multicentre study. Methods Using the sequentially developed classification system, three samples of between 30 and 185 paediatric limb fractures from a pool of 2308 fractures documented in two multicenter studies were analysed in a blinded fashion by eight orthopaedic surgeons, on a total of 5 occasions. Intra- and interobserver reliability and accuracy were calculated. Results The reliability improved with successive simplification of the classification. The final version resulted in an overall interobserver agreement of κ = 0.71 with no significant difference between experienced and less experienced raters. Conclusions In conclusion, the evaluation of the newly proposed classification system resulted in a reliable and routinely applicable system, for which training in its proper use may further improve the reliability. It can be recommended as a useful tool for clinical practice and offers the option for developing treatment recommendations and outcome predictions in the future. PMID:21548939

  3. Combining of different data pools for calculating a reliable POD for real defects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanzler, Daniel, E-mail: daniel.kanzler@bam.de, E-mail: christina.mueller@bam.de; Müller, Christina, E-mail: daniel.kanzler@bam.de, E-mail: christina.mueller@bam.de; Pitkänen, Jorma, E-mail: jorma.pitkanen@posiva.fi

    2015-03-31

    Real defects are essential for the evaluation of the reliability of non-destructive testing (NDT) methods, especially in relation to the integrity of components. But in most cases the amount of available real defects is not sufficient to evaluate the system. Model-assisted and transfer functions are one way to handle that challenge. This study is focused on a combination of different data pools to create a sufficient amount of data for the reliability estimation. A widespread approach for calculating the Probability of Detection (POD) was used on a radiographic testing (RT) method. The highest contrast-to-noise ratio (CNR) of each indication is usually selected as the signal in the "â vs. a" (signal-response) approach for RT. By combining real and artificial defects (flat bottom holes, side-drilled holes, flat bottom squares, notches, etc.) in RT, the highest signals are close to each other, but the process of creating and evaluating real defects is much more complex. The solution is seen in the combination of real and artificial data using a weighted least squares approach. The weights for real or artificial data were based on the importance, the value and the different detection behavior of the different data. For comparison, the alternative combination through Bayesian updating was also applied. As verification, a data pool with a large amount of real data was available. In an advanced approach for evaluating the digital RT data, the size of the indication (perpendicular to the X-ray beam) was introduced as additional information. The signal now consists of the CNR and the area of the indication. The detectability changes depending on the area of the indication, a fact that was ignored in previous POD calculations for RT. This points out that a weighted least squares approach to pool the data might no longer be adequate. Bayesian updating of the estimated parameters of the relationship between the signal field (the area of the indication) and the geometry of the defects is seen as the appropriate model to combine the different defect types in a useful and meaningful way. This work was carried out together with the Finnish company for spent nuclear fuel and waste management, Posiva Oy. Digital RT is one of the NDT methods that might be used for the inspection of the weld of the copper canister to be used for the spent nuclear fuel in the Scandinavian concept of final disposal.
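
    The weighted least squares idea can be made concrete: fit the Berens-style "â vs. a" model on log scales, with weights that down-weight one data pool relative to the other. The data and weights below are invented for illustration; the study's actual weighting scheme is not given in this record:

        import numpy as np

        # Hypothetical pooled data: defect size a, signal response a_hat (CNR),
        # and a weight that down-weights artificial reflectors vs. real defects.
        a     = np.array([0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0])
        a_hat = np.array([1.1, 1.6, 2.1, 3.2, 4.0, 5.3, 6.1])
        w     = np.array([1.0, 1.0, 0.5, 0.5, 1.0, 0.5, 1.0])

        # Berens-style linear model on log scales: log(a_hat) = b0 + b1*log(a)
        X = np.column_stack([np.ones_like(a), np.log(a)])
        y = np.log(a_hat)
        W = np.diag(w)
        b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares
        print("intercept, slope:", b)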

  4. Data Driven Device Failure Prediction

    DTIC Science & Technology

    2016-09-15

    Microsoft enterprise authentication service and Apache web server in an effort to increase up-time and improve mission effectiveness. These new fault loads... Finally, the implementation is validated by running the same experiment on a web server. 1.1 Problem Statement: According to the operational

  5. Remote Patron Validation: Posting a Proxy Server at the Digital Doorway.

    ERIC Educational Resources Information Center

    Webster, Peter

    2002-01-01

    Discussion of remote access to library services focuses on proxy servers as a method for remote access, based on experiences at Saint Mary's University (Halifax). Topics include Internet protocol user validation; browser-directed proxies; server software proxies; vendor alternatives for validating remote users; and Internet security issues. (LRW)

  6. 75 FR 8400 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ... Communications System Server Software, Wireless Handheld Devices and Battery Packs; Notice of Investigation..., wireless handheld devices and battery packs by reason of infringement of certain claims of U.S. Patent Nos... certain wireless communications system server software, wireless handheld devices or battery packs that...

  7. The World-Wide Web and Mosaic: An Overview for Librarians.

    ERIC Educational Resources Information Center

    Morgan, Eric Lease

    1994-01-01

    Provides an overview of the Internet's World-Wide Web (Web), a hypertext system. Highlights include the client/server model; Uniform Resource Locator; examples of software; Web servers versus Gopher servers; HyperText Markup Language (HTML); converting files; Common Gateway Interface; organizing Web information; and the role of librarians in…

  8. Server Level Analysis of Network Operation Utilizing System Call Data

    DTIC Science & Technology

    2010-09-25

    [Excerpt of a numbered payload table from the report: Server DLL Inject; Executable Download and Execute; Execute Command; Execute net user /ADD; PassiveX ActiveX Inject Meterpreter Payload; PassiveX ActiveX Inject VNC Server Payload; PassiveX ActiveX Injection Payload; Recv Tag Findsock Meterpreter; Recv Tag Findsock]

  9. Dynamic Web Pages: Performance Impact on Web Servers.

    ERIC Educational Resources Information Center

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  10. Creating affordable Internet map server applications for regional scale applications.

    PubMed

    Lembo, Arthur J; Wagenet, Linda P; Schusler, Tania; DeGloria, Stephen D

    2007-12-01

    This paper presents an overview of, and a process for, developing an Internet Map Server (IMS) application for a local volunteer watershed group using an Internal Internet Map Server (IIMS) strategy. The paper illustrates that modern GIS architectures utilizing an internal Internet map server coupled with a spatial SQL command language allow for rapid development of IMS applications. The implication of this approach is that powerful IMS applications can be rapidly and affordably developed for volunteer organizations that lack significant funds or a full-time information technology staff.

  11. Online Job Allocation with Hard Allocation Ratio Requirement (Author’s Manuscript)

    DTIC Science & Technology

    2016-04-14

    where each job can only be served by a subset of servers. Such a problem exists in many emerging Internet services, such as YouTube, Netflix, etc. For example, in the case of YouTube, each video is replicated only in a small number of servers, and each server can only serve a limited number of streams simultaneously. When a user accesses YouTube and makes a request to watch a video, this request needs to be allocated to one of the servers that

  12. Secure Server Login by Using Third Party and Chaotic System

    NASA Astrophysics Data System (ADS)

    Abdulatif, Firas A.; zuhiar, Maan

    2018-05-01

    Servers are popular among companies and used by most of them, but security threats make companies wary when deploying them. In this paper we design a secure system based on a one-time password and third-party authentication via a smartphone. The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, with the third-party device (the smartphone) acting as an additional level of security.
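
    The record does not specify the one-time-password algorithm; a standard realization is the RFC 6238 time-based OTP, in which the server and the smartphone share a secret and derive a short code from the current time window. A minimal sketch:

        import base64, hashlib, hmac, struct, time

        def totp(secret_b32, interval=30, digits=6):
            """RFC 6238 time-based one-time password (HMAC-SHA1)."""
            key = base64.b32decode(secret_b32)
            counter = int(time.time() // interval)
            mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # The same shared secret lives on the server and in the phone app;
        # both sides compute the code and the server compares on login.
        print(totp("JBSWY3DPEHPK3PXP"))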

  13. Data warehousing with Oracle

    NASA Astrophysics Data System (ADS)

    Shahzad, Muhammad A.

    1999-02-01

    With the emergence of data warehousing, decision support systems have evolved substantially. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing and integration with other servers. Oracle, an early mover among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper is designed to review the features of data warehousing, first conceptualizing data warehousing itself and then surveying the features of Oracle servers for implementing a data warehouse.

  14. SQLGEN: a framework for rapid client-server database application development.

    PubMed

    Nadkarni, P M; Cheung, K H

    1995-12-01

    SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
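
    The core of such a framework, generating SQL from dictionary metadata rather than hand-writing it, can be sketched briefly. The dictionary layout and helper below are hypothetical, not SQLGEN's actual schema:

        # Hypothetical data-dictionary entry: table name, columns, key column.
        DATA_DICTIONARY = {
            "patients": {"columns": ["id", "name", "dob"], "key": "id"},
        }

        def gen_select(table, by_key=False):
            """Build a SELECT statement from dictionary metadata."""
            meta = DATA_DICTIONARY[table]
            cols = ", ".join(meta["columns"])
            sql = f"SELECT {cols} FROM {table}"
            if by_key:
                sql += f" WHERE {meta['key']} = ?"   # parameter placeholder
            return sql

        print(gen_select("patients"))           # SELECT id, name, dob FROM patients
        print(gen_select("patients", by_key=True))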

  15. CommServer: A Communications Manager For Remote Data Sites

    NASA Astrophysics Data System (ADS)

    Irving, K.; Kane, D. L.

    2012-12-01

    CommServer is a software system that manages making connections to remote data-gathering stations, providing a simple network interface to client applications. The client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash shell, Python, Perl, AWK, under a set of conventions we refer to as ThinObject.
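
    A client's view of such a connection manager can be sketched as follows; the line-oriented wire format is an assumption, since the record only states that clients request sites by name and receive a bidirectional channel on success:

        import socket

        def request_site(server_host, server_port, site_name):
            """Ask the connection manager for a channel to a named site."""
            sock = socket.create_connection((server_host, server_port), timeout=30)
            sock.sendall(site_name.encode("ascii") + b"\n")
            status = sock.makefile().readline().strip()
            if status != "OK":
                sock.close()
                raise ConnectionError(f"no channel to {site_name}: {status}")
            return sock   # now a bidirectional channel to the remote station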

  16. ProBiS-ligands: a web server for prediction of ligands by examination of protein binding sites.

    PubMed

    Konc, Janez; Janežič, Dušanka

    2014-07-01

    The ProBiS-ligands web server predicts binding of ligands to a protein structure. Starting with a protein structure or binding site, ProBiS-ligands first identifies template proteins in the Protein Data Bank that share similar binding sites. Based on the superimpositions of the query protein and the similar binding sites found, the server then transposes the ligand structures from those sites to the query protein. Such ligand prediction supports many activities, e.g. drug repurposing. The ProBiS-ligands web server, an extension of the ProBiS web server, is open and free to all users at http://probis.cmm.ki.si/ligands. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Assessment of Risk Communication about Undercooked Hamburgers by Restaurant Servers.

    PubMed

    Thomas, Ellen M; Binder, Andrew R; McLAUGHLIN, Anne; Jaykus, Lee-Ann; Hanson, Dana; Powell, Douglas; Chapman, Benjamin

    2016-12-01

    According to the U.S. Food and Drug Administration 2013 Model Food Code, it is the duty of a food establishment to disclose and remind consumers of risk when ordering undercooked food such as ground beef. The purpose of this study was to explore actual risk communication behaviors of food establishment servers. Secret shoppers visited 265 restaurants in seven geographic locations across the United States, ordered medium rare burgers, and collected and coded risk information from chain and independent restaurant menus and from server responses. The majority of servers reported an unreliable method of doneness (77%) or other incorrect information (66%) related to burger doneness and safety. These results indicate major gaps in server knowledge and risk communication, and the current risk communication language in the Model Food Code does not sufficiently fill these gaps. The question is "should servers even be acting as risk communicators?" There are numerous challenges associated with this practice, including high turnover rates, limited education, and the high stress environment based on pleasing a customer. If servers are designated as risk communicators, food establishment staff should be adequately trained and provided with consumer advisory messages that are accurate, audience appropriate, and delivered in a professional manner so that customers can make informed food safety decisions.

  18. EnviroAtlas - Metrics for Austin, TX

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web service depict ecosystem services at the census block group level for the community of Austin, Texas. These layers illustrate the ecosystems and natural resources that are associated with clean air (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanAir/MapServer); clean and plentiful water (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanPlentifulWater/MapServer); natural hazard mitigation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_NaturalHazardMitigation/MapServer); climate stabilization (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_ClimateStabilization/MapServer); food, fuel, and materials (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_FoodFuelMaterials/MapServer); recreation, culture, and aesthetics (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_RecreationCultureAesthetics/MapServer); and biodiversity conservation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_BiodiversityConservation/MapServer), and factors that place stress on those resources. EnviroAtlas allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the conterminous United States as well as de
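
    These ArcGIS REST endpoints return machine-readable metadata when queried with an output-format parameter, which is a convenient way to enumerate a service's layers. A minimal sketch (requires network access; service availability may change):

        import json
        from urllib.request import urlopen

        BASE = ("https://enviroatlas.epa.gov/arcgis/rest/services/"
                "Communities/ESC_ATX_CleanAir/MapServer")

        # ArcGIS REST services answer with JSON metadata when f=json is given.
        with urlopen(BASE + "?f=json") as resp:
            meta = json.load(resp)

        for layer in meta.get("layers", []):
            print(layer["id"], layer["name"])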

  19. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    PubMed

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  20. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he or she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
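
    The matching step can be illustrated with the DC component alone: take the block DCT of an image and keep each block's DC term as a compact signature. This sketch shows only that fingerprinting idea, not the paper's full protection and matching scheme:

        import numpy as np
        from scipy.fft import dctn

        def dc_signature(image):
            """Block DCT; keep each 8x8 block's DC term as a crude fingerprint
            that survives visual protection applied to the AC terms."""
            h, w = (s - s % 8 for s in image.shape)
            blocks = image[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
            return np.array([[dctn(b, norm="ortho")[0, 0] for b in row]
                             for row in blocks])

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        print(dc_signature(img).shape)   # (8, 8) grid of DC coefficients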

  1. The CAD-score web server: contact area-based comparison of structures and interfaces of proteins, nucleic acids and their complexes.

    PubMed

    Olechnovič, Kliment; Venclovas, Ceslovas

    2014-07-01

    The Contact Area Difference score (CAD-score) web server provides a universal framework to compute and analyze discrepancies between different 3D structures of the same biological macromolecule or complex. The server accepts both single-subunit and multi-subunit structures and can handle all the major types of macromolecules (proteins, RNA, DNA and their complexes). It can perform numerical comparison of both structures and interfaces. In addition to entire structures and interfaces, the server can assess user-defined subsets. The CAD-score server performs both global and local numerical evaluations of structural differences between structures or interfaces. The results can be explored interactively using sortable tables of global scores, profiles of local errors, superimposed contact maps and 3D structure visualization. The web server could be used for tasks such as comparison of models with the native (reference) structure, comparison of X-ray structures of the same macromolecule obtained in different states (e.g. with and without a bound ligand), analysis of nuclear magnetic resonance (NMR) structural ensemble or structures obtained in the course of molecular dynamics simulation. The web server is freely accessible at: http://www.ibt.lt/bioinformatics/cad-score. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  3. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies

    PubMed Central

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-01-01

    The joint inference of selection and past demography remain a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain, and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr, but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and that considers both FST and difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main target for selection, such as PaPRR3 and PaGI. PMID:27172202

  4. Identifying Genetic Signatures of Natural Selection Using Pooled Population Sequencing in Picea abies.

    PubMed

    Chen, Jun; Källman, Thomas; Ma, Xiao-Fei; Zaina, Giusi; Morgante, Michele; Lascoux, Martin

    2016-07-07

    The joint inference of selection and past demography remain a costly and demanding task. We used next generation sequencing of two pools of 48 Norway spruce mother trees, one corresponding to the Fennoscandian domain, and the other to the Alpine domain, to assess nucleotide polymorphism at 88 nuclear genes. These genes are candidate genes for phenological traits, and most belong to the photoperiod pathway. Estimates of population genetic summary statistics from the pooled data are similar to previous estimates, suggesting that pooled sequencing is reliable. The nonsynonymous SNPs tended to have both lower frequency differences and lower FST values between the two domains than silent ones. These results suggest the presence of purifying selection. The divergence between the two domains based on synonymous changes was around 5 million yr, a time similar to a recent phylogenetic estimate of 6 million yr, but much larger than earlier estimates based on isozymes. Two approaches, one of them novel and that considers both FST and difference in allele frequencies between the two domains, were used to identify SNPs potentially under diversifying selection. SNPs from around 20 genes were detected, including genes previously identified as main target for selection, such as PaPRR3 and PaGI. Copyright © 2016 Chen et al.

  5. Evaluation of the BD GeneOhm assay using the rotor-gene 6000 platform for rapid detection of methicillin-resistant Staphylococcus aureus from pooled screening swabs.

    PubMed

    Smith, Melvyn Howard; Hodgson, Julian; Eltringham, Ian Joseph

    2010-12-01

    As health services move toward universal methicillin-resistant Staphylococcus aureus (MRSA) screening for hospital admissions, the most cost-effective approach is yet to be defined. In this study, one of the largest to date, we evaluated the performance of the BD GeneOhm MRSA assay on the Rotor-Gene 6000 thermal cycler, using samples taken directly from pooled MRSA screens. Results were compared with the same assay performed on the SmartCycler II platform and with overnight broth culture. Samples yielding discrepant results were subjected to detailed analysis with an in-house PCR and patient note review. A total of 1,428 pooled MRSA screens were tested. Sensitivities and specificities of 85.3% and 95.8% for the Rotor-Gene and 81% and 95.7% for the SmartCycler were obtained, compared with broth enrichment. The sensitivity of the BD GeneOhm assay increased to 100% when the results of in-house PCR and patient note review were taken into account. This study demonstrates that the Rotor-Gene 6000 thermal cycler is a reliable platform for use with the BD GeneOhm assay. It also shows that commercial PCR can be performed directly on pooled samples in selective broth, without the need for overnight incubation.

  6. Evaluation of ZFS as an efficient WLCG storage backend

    NASA Astrophysics Data System (ADS)

    Ebert, M.; Washbrook, A.

    2017-10-01

    A ZFS-based software RAID system was tested for performance against a hardware RAID system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy RAID array as well as for a degraded RAID array and during the rebuild of a RAID array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, like compression and higher RAID levels with triple redundancy information. The long-term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2PB of ZFS-based storage at this site.

  7. Introduction

    NASA Astrophysics Data System (ADS)

    Zhao, Ben; Garbacki, Paweł; Gkantsidis, Christos; Iamnitchi, Adriana; Voulgaris, Spyros

    After a decade of intensive investigation, peer-to-peer computing has established itself as an accepted research field in the general area of distributed systems. Peer-to-peer computing can be seen as the democratization of computing, overthrowing traditional hierarchical designs favored in client-server systems, largely brought about by last-mile network improvements which have made individual PCs first-class citizens in the network community. Much of the early focus in peer-to-peer systems was on best-effort file sharing applications. In recent years, however, research has focused on peer-to-peer systems that provide operational properties and functionality similar to those shown by more traditional distributed systems. These properties include stronger consistency, reliability, and security guarantees suitable for supporting traditional applications such as databases.

  8. MPNACK: an optical switching scheme enabling the buffer-less reliable transmission

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoshan; Gu, Huaxi; Wang, Kun; Xu, Meng; Guo, Yantao

    2016-01-01

    Optical data center networks are becoming an increasingly promising solution to the bottlenecks faced by electrical networks, such as low transmission bandwidth, high wiring complexity, and unaffordable power consumption. However, the optical circuit switching (OCS) network is not flexible enough to carry bursty traffic, while the optical packet switching (OPS) network cannot resolve packet contention in an efficient way. To this end, an improved switching strategy named OPS with multi-hop Negative Acknowledgement (MPNACK) is proposed. This scheme uses a feedback mechanism, rather than a buffering structure, to handle optical packet contention. A collided packet is treated as a NACK packet and sent back to the source server. When the sender receives this NACK packet, it knows a collision has happened in the transmission path, and a retransmission procedure is triggered. Overall, the MPNACK scheme enables reliable transmission in a buffer-less optical network. Furthermore, with this scheme the expensive and energy-hungry elements, optical or electrical buffers, can be removed from the optical interconnects, so a more scalable and cost-efficient network can be constructed for cloud computing data centers.

  9. The research and application of multi-biometric acquisition embedded system

    NASA Astrophysics Data System (ADS)

    Deng, Shichao; Liu, Tiegen; Guo, Jingjing; Li, Xiuyan

    2009-11-01

    Identification technology based on multiple biometrics can greatly improve applicability, reliability and resistance to falsification. This paper presents a multi-biometric acquisition system based on an embedded platform, which includes: three capture daughter boards applied to obtain different biometrics, one each for fingerprint, iris and vein of the back of the hand; an FPGA (field-programmable gate array) designed as coprocessor, which configures the three daughter boards on request and provides the data path between the DSP (digital signal processor) and the daughter boards; and the DSP as master processor, whose functions include controlling the biometric information acquisition, extracting features as required, and comparing the results with the local database or a data server through network communication. The advantages of this system are that it can acquire three different biometrics in real time, extract complex features flexibly from the raw data of different biometrics according to different purposes and algorithms, and handle large data scales through the network interface on the core board. Because this embedded system has high stability, reliability and flexibility and fits different data scales, it can satisfy the demands of multi-biometric recognition.

  10. Managing healthcare information using short message service (SMS) in wireless broadband networks

    NASA Astrophysics Data System (ADS)

    Documet, Jorge; Tsao, Sinchai; Documet, Luis; Liu, Brent J.; Zhou, Zheng; Joseph, Anika O.

    2007-03-01

    Due to the ubiquity of cell phones, SMS (Short Message Service) has become an ideal means to wirelessly manage a Healthcare environment and in particular PACS (Picture Archival and Communications System) data. SMS is a flexible and mobile method for real-time access and control of Healthcare information systems such as HIS (Hospital Information System) or PACS. Unlike conventional wireless access methods, SMS' mobility is not limited by the presence of a WiFi network or any other localized signal. It provides a simple, reliable yet flexible method to communicate with an information system. In addition, SMS services are widely available for low costs from cellular phone service providers and allows for more mobility than other services such as wireless internet. This paper aims to describe a use case of SMS as a means of remotely communicating with a PACS server. Remote access to a PACS server and its Query-Retrieve services allows for a more convenient, flexible and streamlined radiology workflow. Wireless access methods such as SMS will increase dedicated PACS workstation availability for more specialized DICOM (Digital Imaging and Communications in Medicine) workflow management. This implementation will address potential security, performance and cost issues of applying SMS as part of a healthcare information management system. This is in an effort to design a wireless communication system with optimal mobility and flexibility at minimum material and time costs.

  11. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    PubMed

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

    HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced, which prevents destabilizing mutations among pre-selected variants entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  12. Enterprise-scale image distribution with a Web PACS.

    PubMed

    Gropper, A; Doyle, S; Dreyer, K

    1998-08-01

    The integration of images with existing and new health care information systems poses a number of challenges in a multi-facility network: image distribution to clinicians; making DICOM image headers consistent across information systems; and integration of teleradiology into PACS. A novel, Web-based enterprise PACS architecture introduced at Massachusetts General Hospital provides a solution. Four AMICAS Web/Intranet Image Servers were installed as the default DICOM destination of 10 digital modalities. A fifth AMICAS receives teleradiology studies via the Internet. Each AMICAS includes: a Java-based interface to the IDXrad radiology information system (RIS), a DICOM autorouter to tape-library archives and to the Agfa PACS, a wavelet image compressor/decompressor that preserves compatibility with DICOM workstations, a Web server to distribute images throughout the enterprise, and an extensible interface which permits links between AMICAS and other HIS. Using wavelet compression and Internet standards as its native formats, AMICAS creates a bridge to the DICOM networks of remote imaging centers via the Internet. This teleradiology capability is integrated into the DICOM network and the PACS, thereby eliminating the need for special teleradiology workstations. AMICAS has been installed at MGH since March of 1997. During that time, it has been a reliable component of the evolving digital image distribution system. As a result, the recently renovated neurosurgical ICU will be filmless and use only AMICAS workstations for mission-critical patient care.

  13. A site of communication among enterprises for supporting occupational health and safety management system.

    PubMed

    Velonakis, E; Mantas, J; Mavrikakis, I

    2006-01-01

    Occupational health and safety management is a field of increasing interest. Institutions, in cooperation with enterprises, make synchronized efforts to introduce quality management systems in this field. Computer networks can offer such services via TCP/IP, a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network consists of nodes responsible for informing executive personnel on occupational health and safety. A web database has been planned for inserting and searching documents, and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts to make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server. Files are submitted so that users can retrieve those they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The health and safety management system follows ISO 18001, and its implementation through the web site is an aim of this work. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the web with other hospitals.

  14. Development of a Self-Report Physical Function Instrument for Disability Assessment: Item Pool Construction and Factor Analysis

    PubMed Central

    McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M.; Rasch, Elizabeth K.

    2014-01-01

    Objectives: To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Design: Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. Setting: In-person and semi-structured interviews; internet and telephone surveys. Participants: A sample of 1,017 SSA claimants, and a normative sample of 999 adults from the US general population. Interventions: Not applicable. Main Outcome Measure: Model fit statistics. Results: The final item pool consisted of 139 items. Within the claimant sample 58.7% were white; 31.8% were black; 46.6% were female; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution which included more items and allowed separate characterization of: 1) Changing and Maintaining Body Position, 2) Whole Body Mobility, 3) Upper Body Function and 4) Upper Extremity Fine Motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples respectively were: Comparative Fit Index = 0.93 and 0.98; Tucker-Lewis Index = 0.92 and 0.98; Root Mean Square Error of Approximation = 0.05 and 0.04. Conclusions: The factor structure of the Physical Function item pool closely resembled the hypothesized content model. The four scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. PMID:23542402

  15. ALOS2-Indonesia REDD+ Experiment (AIREX): Soil Pool Carbon Application

    NASA Astrophysics Data System (ADS)

    Raimadoya, M.; Kristijono, A.; Sudiana, N.; Sumawinata, B.; Suwardi; Santoso, E.; Mahargo, D.; Sudarman, S.; Mattikainen, M.

    2015-04-01

    The bilateral REDD+ agreement between Indonesia and Norway [1] scheduled the performance-based result phase to start in 2014. Therefore, a transparent and reliable Monitoring, Reporting and Verification (MRV) system for the following carbon pools is required to be ready prior to the performance-based phase: (1) biomass, (2) dead organic matter (DOM), and (3) soil. While the biomass pool can be acquired by spaceborne radar (SAR) applications, i.e. SAR Interferometry (InSAR) and Polarimetric SAR Interferometry (Pol-InSAR), a method for the soil pool still needs to be developed. A study was implemented in a test site located in the pulp plantation concession of Teluk Meranti Estate, Riau Andalan Pulp and Paper (RAPP), Pelalawan District, Riau Province, Indonesia. The study was intended to evaluate the possibility of estimating soil pool carbon with radar technology. For this purpose, a combination of spaceborne SAR (ALOS/PALSAR) and Ground Penetrating Radar (200 MHz IDS GPR) was used in this exercise. The initial result of this study provides a promising outcome for improved soil pool carbon estimation in tropical peat forest conditions. The volume of peat soil can be estimated from the combination of spaceborne SAR and GPR, and based on this volume, total carbon content can be derived. However, the application of this approach has several limitations: (1) the GPR survey can only be implemented during the dry season, (2) a Rugged Terrain Antenna (RTA) type of GPR should be used for a smooth GPR survey on the surface of peat soil covered by DOM, and (3) the map of peat soil extent derived from spaceborne SAR needs to be improved.

  16. Development of a self-report physical function instrument for disability assessment: item pool construction and factor analysis.

    PubMed

    McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M; Rasch, Elizabeth K

    2013-09-01

    To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. In-person and semistructured interviews and Internet and telephone surveys. Sample of SSA claimants (n=1017) and a normative sample of adults from the U.S. general population (n=999). Not applicable. Model fit statistics. The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white; 31.8% were black; 46.6% were women; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution, which included more items and allowed separate characterization of: (1) changing and maintaining body position, (2) whole body mobility, (3) upper body function, and (4) upper extremity fine motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples, respectively, were: Comparative Fit Index=.93 and .98; Tucker-Lewis Index=.92 and .98; and root mean square error of approximation=.05 and .04. The factor structure of the physical function item pool closely resembled the hypothesized content model. The 4 scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
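
    For reference, the fit indices reported in both versions of this abstract have standard definitions (supplied here, not by the authors); the subscripts M and B refer to the fitted and baseline (independence) models, and N is the sample size:

        \[
        \mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\,0)}{\max(\chi^2_M - df_M,\;\chi^2_B - df_B,\;0)},
        \qquad
        \mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1},
        \]
        \[
        \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\,0)}{df_M\,(N-1)}}.
        \]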

  17. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute's virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called "SciServer Courseware," which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.
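
    A minimal sketch of the analysis-next-to-the-data workflow described above, assuming the SciServer Python client and its CasJobs module behave as documented (module and function names are taken on trust here); it would run inside a SciServer Compute notebook:

        from SciServer import Authentication, CasJobs  # SciServer client library

        token = Authentication.login("username", "password")  # placeholder credentials

        # The query executes server-side, next to the SDSS data; the result
        # comes back as a pandas DataFrame for analysis in the same container.
        sql = """
        SELECT TOP 10 objID, ra, dec, r
        FROM PhotoObj
        WHERE type = 6 AND r BETWEEN 15 AND 16
        """
        df = CasJobs.executeQuery(sql, context="DR14")
        print(df.head())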

  18. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.
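
    The notion of functional failure, where boundary conditions drive a physically intact system's capacity below the imposed load, can be illustrated with a toy Monte Carlo estimate; the distributions below are invented for illustration and are unrelated to the Argonne analyses:

        import random

        random.seed(0)
        N = 100_000
        failures = 0
        for _ in range(N):
            # Hypothetical normalized distributions for a passive heat-removal path:
            capacity = random.gauss(1.00, 0.08)  # e.g. RCCS heat-removal capacity
            load = random.gauss(0.80, 0.10)      # e.g. decay-heat load on the system
            if load > capacity:                  # functional failure: load exceeds capacity
                failures += 1

        print(f"estimated functional-failure probability: {failures / N:.4f}")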

  19. Summative assessment of undergraduates' communication competence in challenging doctor-patient encounters. Evaluation of the Düsseldorf CoMeD-OSCE.

    PubMed

    Mortsiefer, Achim; Immecke, Janine; Rotthoff, Thomas; Karger, André; Schmelzer, Regine; Raski, Bianca; Schmitten, Jürgen In der; Altiner, Attila; Pentzek, Michael

    2014-06-01

    To evaluate the summative assessment (OSCE) of a communication training programme for dealing with challenging doctor-patient encounters in the 4th study year. Our OSCE consists of 4 stations (breaking bad news, guilt and shame, aggressive patients, shared decision making), using a 4-item global rating (GR) instrument. We calculated reliability coefficients at different levels, discriminability of single items and interrater reliability. Validity was estimated by gender differences and accordance between the GR and a checklist. In a pooled sample of 456 students in 3 OSCEs over 3 terms, total reliability was α=0.64, reliability coefficients for single stations were >0.80, and discriminability in 3 of 4 stations was within the range of 0.4-0.7. Except for one station, interrater reliability was moderate to strong. Reliability on the item level was poor and pointed to some problems with the use of the GR. The application of the GR in regular undergraduate medical education shows moderate reliability, in need of improvement, and some traits of validity. Ongoing development and evaluation are needed with particular regard to the training of the examiners. Our CoMeD-OSCE proved suitable for the summative assessment of communication skills in challenging doctor-patient encounters. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Potency of blue catfish, Ictalurus furcatus (individual vs pooled) sperm to fertilize stripped channel catfish, I. punctatus eggs on the production and performance of progeny

    USDA-ARS?s Scientific Manuscript database

    Channel x blue hybrid catfish is the desired genotype for the US farm-raised catfish industry. Induced spawning of gravid channel catfish, followed by fertilization of stripped eggs with blue catfish sperm, is the only reliable means to produce hybrid catfish embryos in hatcheries. Hybrid catfish fry p...

  1. Get the Word Out with List Servers

    ERIC Educational Resources Information Center

    Goldberg, Laurence

    2006-01-01

    In this article, the author details the use of electronic mail list servers in his school district. In this school district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…

  2. Using Web Server Logs to Track Users through the Electronic Forest

    ERIC Educational Resources Information Center

    Coombs, Karen A.

    2005-01-01

    This article analyzes server logs, providing helpful information for making decisions about Web-based services. The author indicates that, as a result of analyzing server logs, several interesting things about users' behavior were learned. The resulting findings are discussed in this article. Certain pages of the author's Web site, for instance, are…

  3. Selection of Server-Side Technologies for an E-Business Curriculum

    ERIC Educational Resources Information Center

    Sandvig, J. Christopher

    2007-01-01

    The rapid growth of e-business and e-commerce has made server-side programming an increasingly important topic in information systems (IS) and computer science (CS) curricula. This article presents an overview of the major features of several popular server-side programming technologies and discusses the factors that influence the selection of…

  4. CD-ROM Network Configurations: Good, Better, Best!

    ERIC Educational Resources Information Center

    McClanahan, Gloria

    1996-01-01

    Rates three methods of arranging CD-ROM school networks: (1) peer-to-peer; (2) daisy chain configurations; and (3) dedicated CD-ROM file server. Describes the following network components: the file server, network adapters and wiring, the CD-ROM file server, and CD-ROM drives. Discusses issues involved in assembling these components into a working…

  5. Meteosat: Full Disk - NOAA GOES Geostationary Satellite Server

    Science.gov Websites

    Full Disk imagery from Meteosat, provided by Europe's Meteorological Satellite Organization (EUMETSAT) and served through the NOAA GOES Geostationary Satellite Server (DOC/NOAA/NESDIS/OSPO).

  6. Improving Internet Archive Service through Proxy Cache.

    ERIC Educational Resources Information Center

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext transfer protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  7. Think They're Drunk? Alcohol Servers and the Identification of Intoxication.

    ERIC Educational Resources Information Center

    Burns, Edward D.; Nusbaumer, Michael R.; Reiling, Denise M.

    2003-01-01

    Examines practices used by servers to assess intoxication. The analysis was based upon questionnaires mailed to a random probability sample of licensed servers from one state (N = 822). Indicators found to be most important were examined in relation to a variety of occupational characteristics. Implications for training curricula, policy…

  8. From Server to Desktop: Capital and Institutional Planning for Client/Server Technology.

    ERIC Educational Resources Information Center

    Mullig, Richard M.; Frey, Keith W.

    1994-01-01

    Beginning with a request for an enhanced system for decision/strategic planning support, the University of Chicago's biological sciences division has developed a range of administrative client/server tools, instituted a capital replacement plan for desktop technology, and created a planning and staffing approach enabling rapid introduction of new…

  9. Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?

    ERIC Educational Resources Information Center

    Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard

    1997-01-01

    Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browsers' computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, trawler software and presents a series of correction factors…

  10. Evaluation of a Local Designated Driver and Responsible Server Program to Prevent Drinking and Driving.

    ERIC Educational Resources Information Center

    Simons-Morton, Bruce G.; Cummings, Sharon Snider

    1997-01-01

    Evaluates the impact of beverage servers' interventions at five establishments participating in the Houston Techniques for Effective Alcohol Management (TEAM) program. The intervention included server training, a designated-driver program, and "Safe Ride Home" taxi vouchers. Findings are discussed within the context of scant public and…

  11. Server-Side Includes Made Simple.

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2002-01-01

    Describes server-side include (SSI) codes which allow Webmasters to insert content into Web pages without programming knowledge. Explains how to enable the codes on a Web server, provides a step-by-step process for implementing them, discusses tags and syntax errors, and includes examples of their use on the Web site for Southern Illinois…

  12. Research of GIS-services applicability for solution of spatial analysis tasks.

    NASA Astrophysics Data System (ADS)

    Terekhin, D. A.; Botygin, I. A.; Sherstneva, A. I.; Sherstnev, V. S.

    2017-01-01

    Experiments to delineate the areas of applicability of various GIS services in spatial analysis tasks are discussed in this paper. Google Maps, Yandex Maps and Microsoft SQL Server are used as spatial analysis services. All services showed comparable speed in analyzing spatial data when carrying out elementary spatial requests (building up the buffer zone of a point object), while Microsoft SQL Server was preferable for more complicated spatial requests. For elementary spatial requests, internet services show higher efficiency due to client-side data handling with JavaScript subprograms. A weak point of public internet services is the impossibility of handling data on the server side and their limited variety of spatial analysis functions. Microsoft SQL Server offers a large variety of functions needed for spatial analysis on the server side. The authors conclude that when solving practical problems, the route-building and other capabilities of internet services should be combined with the spatial analysis functions of Microsoft SQL Server.
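
    The kind of server-side request benchmarked in the study, building a buffer zone around a point, can be sketched with SQL Server's documented spatial methods (STBuffer, STContains); the connection string, table name and coordinates below are hypothetical:

        import pyodbc  # assumes an ODBC driver for SQL Server is installed

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=host;DATABASE=gis;"
            "UID=user;PWD=secret")
        cur = conn.cursor()

        # STBuffer/STContains are evaluated inside the database server, not on
        # the client: find places within a 0.01-degree buffer around a point.
        cur.execute("""
            SELECT p.name
            FROM places AS p
            WHERE geometry::Point(?, ?, 4326).STBuffer(?).STContains(p.geom) = 1
        """, (37.62, 55.75, 0.01))
        for (name,) in cur.fetchall():
            print(name)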

  13. Heat transfer, fluid flow and mass transfer in laser welding of stainless steel with small length scale

    NASA Astrophysics Data System (ADS)

    He, Xiuli

    Nd:YAG laser welding with a laser beam diameter of hundreds of micrometers is widely used for assembly and closure of high-reliability electrical and electronic packages for the telecommunications, aerospace and medical industries. However, certain concerns have to be addressed to obtain defect-free and structurally sound welds. During laser welding, because of the high power density used, the pressures at the weld pool surface can be greater than the ambient pressure. This excess pressure provides a driving force for vaporization to take place. As a result of the vaporization of different elements, the composition in the weld pool may differ from that of the base metal, which can result in changes in the microstructure and degradation of the mechanical properties of weldments. When the weld pool temperatures are very high, the escaping vapor exerts a large recoil force on the weld pool surface, and as a consequence, tiny liquid metal particles may be expelled from the weld pool. Vaporization of alloying elements and liquid metal expulsion are the two main mechanisms of material loss. In addition, for laser welds with a small length scale, heat transfer and fluid flow differ from those in arc welds with a much larger length scale. Because of the small weld pool size, rapid temperature changes and the very short duration of the laser welding process, physical measurements of important parameters such as temperature and velocity fields, weld thermal cycles, solidification and cooling rates are very difficult. The objective of the research is to quantitatively understand the influences of various factors on the heat transfer, fluid flow, vaporization of alloying elements and liquid metal expulsion in Nd:YAG laser welding of 304 stainless steel with a small length scale. In this study, a comprehensive three-dimensional heat transfer and fluid flow model based on the mass, momentum and energy conservation equations is relied upon to calculate temperature and velocity fields in the weld pool, weld thermal cycle, weld pool geometry and solidification parameters. Surface tension and buoyancy forces were considered for the calculation of transient weld pool convection. Very fine grids and small time steps were used to achieve accuracy in the calculations. The calculated weld pool dimensions were compared with the corresponding measured values to validate the model. (Abstract shortened by UMI.)
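
    In generic incompressible form, the conservation equations underlying such a model read as follows (the thesis' exact formulation, source terms and boundary conditions may differ):

        \[
        \nabla \cdot \mathbf{v} = 0,
        \qquad
        \rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right)
          = -\nabla p + \mu\nabla^{2}\mathbf{v} + \mathbf{F}_{b},
        \qquad
        \rho c_{p}\left(\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T\right)
          = \nabla\cdot(k\nabla T) + S.
        \]

    Here \(\mathbf{F}_{b}\) collects body forces such as buoyancy and \(S\) is the volumetric heat source supplied by the laser.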

  14. [The database server for the medical bibliography database at Charles University].

    PubMed

    Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M

    1998-01-01

    In the medical community, bibliographic databases are widely accepted as a most important source of information both for theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by Silver Platter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible via the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and web-based access that requires no specialized software (except the WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed to by consortia whose individual members would not subscribe to them by themselves.

  15. Ecoupling server: A tool to compute and analyze electronic couplings.

    PubMed

    Cabeza de Vaca, Israel; Acebes, Sandra; Guallar, Victor

    2016-07-05

    Electron transfer processes are often studied through the evaluation and analysis of the electronic coupling (EC). Since most standard QM codes do not readily provide such a measure, additional user-friendly tools to compute and analyze electronic couplings from external wave functions are of high value. The first server to provide a friendly interface for the evaluation and analysis of electronic couplings under two different approximations (FDC and GMH) is presented in this communication. The Ecoupling server accepts inputs from common QM and QM/MM software and provides useful plots to understand and analyze the results easily. The web server has been implemented in CGI-python using Apache and is accessible at http://ecouplingserver.bsc.es. The Ecoupling server is free and open to all users without login. © 2016 Wiley Periodicals, Inc.

  16. Interception and modification of network authentication packets with the purpose of allowing alternative authentication modes

    DOEpatents

    Kent, Alexander Dale [Los Alamos, NM

    2008-09-02

    Methods and systems are disclosed for authenticating identifying data transmitted from a client to a server in a data/computer network through a gateway interface system, where the components are communicatively coupled to each other. An authentication packet transmitted from a client to a server of the data network is intercepted by the interface, wherein the authentication packet is encrypted with a one-time password for transmission from the client to the server. The one-time password associated with the authentication packet can be verified utilizing a one-time password token system. The authentication packet can then be modified for acceptance by the server, wherein the response packet generated by the server is thereafter intercepted, verified and modified for transmission back to the client in a similar but reverse process.
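
    The claimed flow, intercept, verify the one-time password against a token system, and rewrite the packet for the server, can be sketched conceptually as follows; the dict-based packets and the HMAC-counter token scheme are stand-ins, not the patented wire format:

        import hashlib, hmac

        class TokenSystem:
            # Toy OTP verifier: the n-th password is HMAC(seed, n), truncated.
            def __init__(self, seeds):
                self.seeds = seeds                     # user -> shared seed
                self.counters = {u: 0 for u in seeds}

            def verify(self, user, otp):
                n = self.counters[user]
                expected = hmac.new(self.seeds[user], str(n).encode(),
                                    hashlib.sha256).hexdigest()[:8]
                if hmac.compare_digest(expected, otp):
                    self.counters[user] += 1           # each password is single-use
                    return True
                return False

        def gateway(packet, tokens, static_secret):
            # Intercept: check the OTP, then rewrite the packet so the backend
            # server sees the static credential it expects.
            if not tokens.verify(packet["user"], packet["password"]):
                return {"user": packet["user"], "status": "reject"}
            return dict(packet, password=static_secret)

        tokens = TokenSystem({"alice": b"seed"})
        otp = hmac.new(b"seed", b"0", hashlib.sha256).hexdigest()[:8]
        print(gateway({"user": "alice", "password": otp}, tokens, "server-static-pw"))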

  17. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    NASA Astrophysics Data System (ADS)

    Frenkel, F. E.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server's operation is based upon a new mathematical method for searching for multiple alignments, which is founded on optimization of position weight matrices and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of indistinctly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per single amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server's operation and two examples of studying amino acid and nucleotide sequences, as well as the information that can be obtained using the web server.
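
    As a rough illustration of what a periodicity scan does (this toy self-match score is far cruder than the server's weight-matrix optimization with two-dimensional dynamic programming):

        # Score each candidate period p by the fraction of positions whose
        # symbol repeats p characters later; peaks suggest latent periodicity.
        def period_scores(seq, max_period=20):
            scores = {}
            for p in range(1, max_period + 1):
                matches = sum(seq[i] == seq[i + p] for i in range(len(seq) - p))
                scores[p] = matches / (len(seq) - p)
            return scores

        seq = "ACGTTACGTAACGTTACGTA" * 5   # roughly period-10 test sequence
        scores = period_scores(seq)
        best = max(scores, key=scores.get)
        print("best period:", best, "score:", round(scores[best], 2))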

  18. An Array Library for Microsoft SQL Server with Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: the Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.

  19. A self-configuring control system for storage and computing departments at INFN-CNAF Tier1

    NASA Astrophysics Data System (ADS)

    Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir

    2015-05-01

    The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol, and writing and reading of data on the magnetic tape backend. One of the most important and essential points in providing a reliable service is a control system that can warn when problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations can change during daily operations: for example, the roles of GPFS cluster nodes can be modified, in which case obsolete nodes must be removed from the production control system and new servers added to those already present. Manual management of all these changes can be difficult when there are many of them; it can also take a long time and is easily subject to human error or misconfiguration. For these reasons, we have developed a control system with the ability to configure itself whenever a change occurs. This system has now been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three key elements in this system. The first is a software configuration service (e.g. Quattor or Puppet) for the server machines to be monitored; this service must ensure the presence of appropriate sensors and custom scripts on the monitored nodes and must be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as the type of hardware, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The last key element is the control system software (in our implementation, Nagios), capable of assessing the status of servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements is achieved by appropriate scripts and custom implementations that allow the self-configuration of the system according to a decisional logic; the whole combination of the above-mentioned components is discussed in depth in this paper.
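
    The self-configuration loop, inventory database in and monitoring configuration out, might look like the sketch below; the schema, file paths and use of SQLite are hypothetical stand-ins (the site uses Quattor or Puppet, a production database and Nagios), while the define host block follows standard Nagios object syntax:

        import sqlite3

        HOST_TEMPLATE = """define host {{
            use        generic-host
            host_name  {name}
            address    {address}
            hostgroups {role}
        }}
        """

        def regenerate(db_path, out_path):
            # Rebuild the Nagios host definitions from the machine inventory,
            # so nodes added or retired in the database appear in or disappear
            # from monitoring without manual edits.
            con = sqlite3.connect(db_path)
            rows = con.execute(
                "SELECT name, address, role FROM machines WHERE in_production = 1")
            with open(out_path, "w") as f:
                for name, address, role in rows:
                    f.write(HOST_TEMPLATE.format(name=name, address=address, role=role))

        # regenerate("inventory.db", "/etc/nagios/conf.d/hosts.cfg")  # then reload Nagios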

  20. CISN Display - Reliable Delivery of Real-time Earthquake Information, Including Rapid Notification and ShakeMap to Critical End Users

    NASA Astrophysics Data System (ADS)

    Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Given, D.

    2002-12-01

    The California Integrated Seismic Network (CISN) Display is part of a Web-enabled earthquake notification system alerting users in near real-time of seismicity, and also valuable geophysical information following a large earthquake. It will replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering graphical earthquake information to users at emergency operations centers, and other organizations. Features distinguishing the CISN Display from other GUI tools are a state-full client/server relationship, a scalable message format supporting automated hyperlink creation, and a configurable platform-independent client with a GIS mapping tool; supporting the decision-making activities of critical users. The CISN Display is the front-end of a client/server architecture known as the QuakeWatch system. It comprises the CISN Display (and other potential clients), message queues, a server, server "feeder" modules, and messaging middleware, schema and generators. It is written in Java, making it platform-independent, and offering the latest in Internet technologies. QuakeWatch's object-oriented design allows components to be easily upgraded through a well-defined set of application programming interfaces (APIs). Central to the CISN Display's role as a gateway to other earthquake products is its comprehensive XML-schema. The message model starts with the CUBE message format, but extends it by provisioning additional attributes for currently available products, and those yet to be considered. The supporting metadata in the XML-message provides the data necessary for the client to create a hyperlink and associate it with a unique event ID. Earthquake products deliverable to the CISN Display are ShakeMap, Ground Displacement, Focal Mechanisms, Rapid Notifications, OES Reports, and Earthquake Commentaries. Leveraging the power of the XML-format, the CISN Display provides prompt access to earthquake information on the Web. The links are automatically created when product generators deliver CUBE formatted packets to a Quake Data Distribution System (QDDS) hub (new distribution methods may be used later). The "feeder" modules tap into the QDDS hub and convert the packets into XML-messages. These messages are forwarded to message queues, and then distributed to clients where URLs are dynamically created for these products and linked to events on the CISN Display map. The products may be downloaded out-of-band, and with the inclusion of a GIS mapping tool users can plot organizational assets on the CISN Display map and overlay them against key spectral data, such as ground accelerations. This gives Emergency Response Managers information useful in allocating limited personnel and resources after a major event. At the heart of the system's robustness is a well-established and reliable set of communication protocols for best-effort delivery of data. For critical users a Common Object Request Broker Architecture (CORBA) state-full connection is used via a dedicated signaling channel. The system employs several CORBA methods that alert users of changes in the link status. Loss of connectivity triggers a strategy that attempts to reconnect through various physical and logical paths. Thus, by building on past application successes and proven Internet advances, the CISN Display targets a specific audience by providing enhancements previously not available from other applications.

  1. Studying the co-evolution of protein families with the Mirrortree web server.

    PubMed

    Ochoa, David; Pazos, Florencio

    2010-05-15

    The Mirrortree server allows users to graphically and interactively study the co-evolution of two protein families, and to investigate their possible interactions and functional relationships in a taxonomic context. The server includes the possibility of starting from single sequences and hence it can be used by non-expert users. The web server is freely available at http://csbg.cnb.csic.es/mtserver. It was tested in the main web browsers. Adobe Flash Player is required on the client side to perform the interactive assessment of co-evolution. Contact: pazos@cnb.csic.es. Supplementary data are available at Bioinformatics online.

  2. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.

  3. UNIX based client/server hospital information system.

    PubMed

    Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N

    1995-01-01

    SMILE (St. Luke's Medical Center Information Linkage Environment) is a HIS, a client/server system using UNIX workstations on an open network, a LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g. a TP monitor) as the mainframe. Our system's problems and the steps used to solve them are therefore reviewed. Several points that will be necessary for a client/server system with UNIX workstations in the future are presented.

  4. MODBUS APPLICATION AT JEFFERSON LAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Jianxun; Seaton, Chad; Philip, Sarin

    Modbus is a client/server communication model. In our applications, the embedded Ethernet device XPort is designed as the server and a SoftIOC running EPICS Modbus is the client. The SoftIOC builds a Modbus request from parameters contained in a demand sent by the EPICS application to the Modbus client interface. On reception of the Modbus request, the Modbus server initiates a local action to read, write, or perform some other operation. The main Modbus server functions are therefore to wait for a Modbus request on TCP port 502, process the request, and then build a Modbus response.
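
    A Modbus/TCP exchange of the kind described, a request on port 502 answered by a local read action, can be built directly from the public Modbus specification; the host address and register layout below are placeholders:

        import socket, struct

        def read_holding_registers(host, unit, start, count, port=502):
            # MBAP header: transaction id, protocol id (0), length, unit id;
            # PDU: function 0x03 (read holding registers), start address, count.
            pdu = struct.pack(">BHH", 0x03, start, count)
            mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
            with socket.create_connection((host, port), timeout=2.0) as s:
                s.sendall(mbap + pdu)
                resp = s.recv(260)
            byte_count = resp[8]  # response: 7-byte MBAP, function code, byte count, data
            return struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count])

        # e.g. read 4 registers starting at address 0 from an XPort-attached device:
        # print(read_holding_registers("192.168.0.10", unit=1, start=0, count=4))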

  5. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102

  6. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    NASA Astrophysics Data System (ADS)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014; Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), the use of WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open source NASA WorldWind (e.g. Hogan, 2011) virtual globe as visualisation engine, and the array database Rasdaman Community Edition as core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible at http://planetserver.eu. All its code base is going to be available on GitHub, at https://github.com/planetserver. References: Baumann, P., et al. (2015) Big Data Analytics for Earth Sciences: the EarthServer approach, International Journal of Digital Earth, doi: 10.1080/17538947.2014.1003106. Cantini, F. et al. (2014) Geophys. Res. Abs., Vol. 16, #EGU2014-3784. Gaddis, L., and T. Hare (2015), Status of tools and data for planetary research, Eos, 96, doi: 10.1029/2015EO041125. Hogan, P., 2011. NASA World Wind: Infrastructure for Spatial Data. Technical report. Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, ACM. Oosthoek, J.H.P., et al. (2013) Advances in Space Research. doi: 10.1016/j.asr.2013.07.002. Rossi, A. P., et al. (2014) PlanetServer/EarthServer: Big Data analytics in Planetary Science. Geophysical Research Abstracts, Vol. 16, #EGU2014-5149.
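
    Posing such a query through WCPS can be sketched as follows; the endpoint path, coverage and band names are assumptions, while the for/return encode construct and the ProcessCoverages request follow the OGC WCPS convention used by rasdaman:

        import requests

        endpoint = "http://planetserver.eu/rasdaman/ows"   # hypothetical service URL

        # Band ratio over a hyperspectral coverage, encoded server-side as PNG.
        wcps = """
        for c in (mars_hyperspectral_cube)
        return encode((c.band_233 / c.band_13), "png")
        """

        r = requests.get(endpoint, params={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": wcps})
        open("ratio.png", "wb").write(r.content)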

  7. Implementation of buffy-coat-derived pooled platelet concentrates for internal quality control of light transmission aggregometry: a proof of concept study.

    PubMed

    Prüller, F; Rosskopf, K; Mangge, H; Mahla, E; von Lewinski, D; Weiss, E C; Riegler, A; Enko, D

    2017-12-01

    Essentials: In platelet function testing, standardized internal controls (IQC) are not commercially provided. Platelet function testing was performed daily on aliquoted pooled platelet concentrates. Pooled platelet concentrates showed stability for control purposes from Monday to Friday. Pooled platelet concentrates provide the necessary steadiness to serve as IQC material. Background: Standardized commercially available control material for internal quality control (IQC) of light transmission aggregometry (LTA) is still lacking. Moreover, the availability of normal blood donors to provide fresh platelets is difficult in small laboratories, where 'volunteers' may be in short supply. Objectives: To evaluate the implementation of buffy-coat-derived pooled platelet concentrates (PCs) as IQC material for LTA. Methods: We used buffy-coat-derived pooled PCs from the blood bank as IQC material for LTA. On each weekend one PC was prepared (> 200 mL) and aliquoted from the original storage bag on a daily basis into four baby bags (40-50 mL), which were delivered from Monday to Friday to our laboratory. The IQC measurements of at least 85 work-weeks (from Monday to Friday) were evaluated with this new IQC material. LTA was performed on a four-channel Chronolog 700 Aggregometer (Chronolog Corporation, Havertown, PA, USA) (agonists: collagen, adenosine diphosphate [ADP], arachidonic acid [AA] and thrombin receptor activator peptide-6 [TRAP-6]). Results: The medians of platelet aggregation from IQC measurements with collagen, ADP and AA from Monday to Friday were 68.0-59.5, 3.0-2.0 and 51.0-50.0%, respectively, and the mean platelet aggregation with TRAP-6 ranged from 71.2% to 66.4%. Conclusions: Buffy-coat-derived pooled PCs serve as a reliable and robust IQC material for LTA measurements and would be beneficial for the whole laboratory procedure and employees' safety. © 2017 International Society on Thrombosis and Haemostasis.

  8. Prior Individual Training and Self-Organized Queuing during Group Emergency Escape of Mice from Water Pool

    PubMed Central

    Saloma, Caesar; Perez, Gay Jane; Gavile, Catherine Ann; Ick-Joson, Jacqueline Judith; Palmes-Saloma, Cynthia

    2015-01-01

    We study the impact of prior individual training during group emergency evacuation using mice that escape from an enclosed water pool to a dry platform via any of two possible exits. Experimenting with mice avoids serious ethical and legal issues that arise when dealing with unwitting human participants while minimizing concerns regarding the reliability of results obtained from simulated experiments using 'actors'. First, mice were trained separately and their individual escape times measured over several trials. Mice learned quickly to swim towards an exit; they achieved their fastest escape times within the first four trials. The trained mice were then placed together in the pool and allowed to escape. No two mice were permitted in the pool beforehand and only one could pass through an exit opening at any given time. At first trial, groups of trained mice escaped seven and five times faster than their corresponding control groups of untrained mice at pool occupancy rate ρ of 11.9% and 4%, respectively. Faster evacuation happened because trained mice: (a) had better recognition of the available pool space and took shorter escape routes to an exit, (b) were less likely to form arches that blocked an exit opening, and (c) utilized the two exits efficiently without preference. Trained groups achieved continuous egress without an apparent leader-coordinator (self-organized queuing), a collective behavior not experienced during individual training. Queuing was unobserved in untrained groups where mice were prone to wall seeking, aimless swimming and/or blind copying that produced circuitous escape routes, biased exit use and clogging. The experiments also reveal that faster and less costly group training at ρ = 4% yielded an average individual escape time that is comparable with individualized training. However, group training in a more crowded pool (ρ = 11.9%) produced a longer average individual escape time. PMID:25693170

  9. AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.

    PubMed

    Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A

    2017-07-03

    AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. 3Drefine: an interactive web server for efficient protein structure refinement.

    PubMed

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics

    PubMed Central

    Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza

    2017-01-01

    AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703

  12. CNA web server: rigidity theory-based thermal unfolding simulations of proteins for linking structure, (thermo-)stability, and function.

    PubMed

    Krüger, Dennis M; Rathi, Prakash Chandra; Pfleger, Christopher; Gohlke, Holger

    2013-07-01

    The Constraint Network Analysis (CNA) web server provides a user-friendly interface to the CNA approach developed in our laboratory for linking results from rigidity analyses to biologically relevant characteristics of a biomolecular structure. The CNA web server provides a refined modeling of thermal unfolding simulations that considers the temperature dependence of hydrophobic tethers and computes a set of global and local indices for quantifying biomacromolecular stability. From the global indices, phase transition points are identified where the structure switches from a rigid to a floppy state; these phase transition points can be related to a protein's (thermo-)stability. Structural weak spots (unfolding nuclei) are automatically identified, too; this knowledge can be exploited in data-driven protein engineering. The local indices are useful for linking flexibility and function and for understanding the impact of ligand binding on protein flexibility. The CNA web server robustly handles small-molecule ligands in general. To overcome issues of sensitivity with respect to the input structure, the CNA web server allows performing two ensemble-based variants of thermal unfolding simulations. The web server output is provided as raw data, plots and/or Jmol representations. The CNA web server, accessible at http://cpclab.uni-duesseldorf.de/cna or http://www.cnanalysis.de, is free and open to all users with no login requirement.
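
    The phase transition detection described above can be illustrated with a toy calculation: scan a global rigidity index over temperature and report where the index drops most steeply, i.e. the rigid-to-floppy transition. The curve below is synthetic, not CNA output.

        # Toy illustration of locating a rigid-to-floppy phase transition
        # point: find the temperature step with the steepest drop in a
        # global rigidity index. The index values are synthetic.

        temperatures = list(range(300, 401, 10))               # K
        rigidity_index = [0.95, 0.94, 0.92, 0.90, 0.86,        # mostly rigid
                          0.60, 0.30, 0.22, 0.20, 0.19, 0.18]  # floppy after ~350 K

        drops = [rigidity_index[i] - rigidity_index[i + 1]
                 for i in range(len(rigidity_index) - 1)]
        i_max = max(range(len(drops)), key=drops.__getitem__)

        print(f"phase transition near {temperatures[i_max]}-{temperatures[i_max + 1]} K "
              f"(index falls {rigidity_index[i_max]:.2f} -> {rigidity_index[i_max + 1]:.2f})")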

  13. CNA web server: rigidity theory-based thermal unfolding simulations of proteins for linking structure, (thermo-)stability, and function

    PubMed Central

    Krüger, Dennis M.; Rathi, Prakash Chandra; Pfleger, Christopher; Gohlke, Holger

    2013-01-01

    The Constraint Network Analysis (CNA) web server provides a user-friendly interface to the CNA approach developed in our laboratory for linking results from rigidity analyses to biologically relevant characteristics of a biomolecular structure. The CNA web server provides a refined modeling of thermal unfolding simulations that considers the temperature dependence of hydrophobic tethers and computes a set of global and local indices for quantifying biomacromolecular stability. From the global indices, phase transition points are identified where the structure switches from a rigid to a floppy state; these phase transition points can be related to a protein's (thermo-)stability. Structural weak spots (unfolding nuclei) are automatically identified, too; this knowledge can be exploited in data-driven protein engineering. The local indices are useful for linking flexibility and function and for understanding the impact of ligand binding on protein flexibility. The CNA web server robustly handles small-molecule ligands in general. To overcome issues of sensitivity with respect to the input structure, the CNA web server allows performing two ensemble-based variants of thermal unfolding simulations. The web server output is provided as raw data, plots and/or Jmol representations. The CNA web server, accessible at http://cpclab.uni-duesseldorf.de/cna or http://www.cnanalysis.de, is free and open to all users with no login requirement. PMID:23609541

  14. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

    To address the low efficiency and poor reliability of manual judging in programming training and competitions, an Online Automatic Judging (OAJ) system was designed. The OAJ system, comprising a sandboxed judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system executes submissions in a sandbox, ensuring system safety. The OAJ system uses thread pools to run tests in parallel and adopts database optimization mechanisms, such as horizontal table partitioning, to improve performance and resource utilization. Test results show that the system offers high performance, reliability, and stability, as well as good extensibility.
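
    The thread-pool design maps naturally onto a standard executor. The sketch below is a simplified illustration, not the OAJ implementation: each submission runs in a subprocess under a hard timeout, a crude stand-in for the paper's sandbox, and the file paths and scoring rule are hypothetical.

        # Simplified sketch of parallel judging with a thread pool. Each
        # submission is executed in a subprocess with a hard timeout (a
        # crude stand-in for a real sandbox). Paths and scoring are invented.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor, as_completed

        def judge(submission: str, expected: str) -> dict:
            try:
                proc = subprocess.run(
                    ["python", submission],            # run the tested code
                    capture_output=True, text=True, timeout=5,
                )
                passed = proc.stdout.strip() == expected
                return {"submission": submission, "score": 100 if passed else 0}
            except subprocess.TimeoutExpired:
                return {"submission": submission, "score": 0, "error": "time limit"}

        submissions = [("sub1.py", "42"), ("sub2.py", "42"), ("sub3.py", "42")]

        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(judge, path, expected) for path, expected in submissions]
            for fut in as_completed(futures):
                print(fut.result())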

  15. Using Google Earth to Assess Shade for Sun Protection in Urban Recreation Spaces: Methods and Results.

    PubMed

    Gage, R; Wilson, N; Signal, L; Barr, M; Mackay, C; Reeder, A; Thomson, G

    2018-05-16

    Shade in public spaces can lower the risk of sunburn and skin cancer. However, existing methods of auditing shade require travel between sites and sunny weather conditions. This study aimed to evaluate the feasibility of free computer software (Google Earth) for assessing shade in urban open spaces. A shade projection method was developed that uses Google Earth street view and aerial images to estimate shade at solar noon on the summer solstice, irrespective of the date of image capture. Three researchers used the method to separately estimate shade cover over pre-defined activity areas in a sample of 45 New Zealand urban open spaces, including 24 playgrounds, 12 beaches and 9 outdoor pools. Outcome measures included method accuracy (assessed by comparison with field observations of a subsample of 10 of the settings) and inter-rater reliability. Of the 164 activity areas identified in the 45 settings, most (83%) had no shade cover. The method identified most activity areas in playgrounds (85%) and beaches (93%) and was accurate for assessing shade over these areas (predictive values of 100%). Only 8% of activity areas at outdoor pools were identified, due to a lack of street view images. Reliability for shade cover estimates was excellent (intraclass correlation coefficient of 0.97, 95% CI 0.97-0.98). Google Earth appears to be a reasonably accurate and reliable shade audit tool for playgrounds and beaches. The findings are relevant for programmes focused on supporting the development of healthy urban open spaces.
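
    The reported reliability statistic can be reproduced from first principles. The sketch below computes a two-way random-effects, single-measures intraclass correlation, ICC(2,1), for three raters scoring the same activity areas; the shade-cover values are invented for illustration and are not the study's data.

        # Worked example: two-way random-effects, single-measures intraclass
        # correlation, ICC(2,1), for shade-cover estimates (%) by three
        # raters. The data are invented for illustration.

        ratings = [  # rows = activity areas, columns = raters
            [10, 12, 11],
            [80, 78, 82],
            [ 0,  0,  5],
            [45, 50, 47],
            [30, 28, 31],
        ]
        n, k = len(ratings), len(ratings[0])
        grand = sum(map(sum, ratings)) / (n * k)
        row_means = [sum(r) / k for r in ratings]
        col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

        # Mean squares from the two-way ANOVA decomposition (no replication).
        ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
        ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
        sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
                  for i in range(n) for j in range(k))
        ms_err = sse / ((n - 1) * (k - 1))

        icc21 = (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
        print(f"ICC(2,1) = {icc21:.3f}")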

  16. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

    Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLs. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…
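
    A client-side image map is plain markup, which makes it both easy to generate and easy to make accessible. The sketch below emits a minimal map element with alt text on every area (the Section 508-relevant part); the image name and regions are invented for illustration.

        # Generate a minimal accessible client-side image map: every <area>
        # carries alt text, which is the key Section 508 concern here.
        # The image name and region coordinates are invented.

        regions = [
            {"shape": "rect",   "coords": "0,0,100,100", "href": "north.html", "alt": "North wing"},
            {"shape": "circle", "coords": "200,150,40",  "href": "pool.html",  "alt": "Outdoor pool"},
        ]

        areas = "\n".join(
            f'  <area shape="{r["shape"]}" coords="{r["coords"]}" '
            f'href="{r["href"]}" alt="{r["alt"]}">'
            for r in regions
        )
        html = (
            '<img src="campus.png" usemap="#campus" alt="Campus map">\n'
            '<map name="campus">\n' + areas + "\n</map>"
        )
        print(html)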

  17. Meteosat Indian Ocean Data Coverage (IODC): Full Disk - NOAA GOES

    Science.gov Websites

    NOAA GOES Geostationary Satellite Server (DOC » NOAA » NESDIS » OSPO): Meteosat Indian Ocean Data Coverage (IODC) full-disk imagery. These images are updated every six hours from data provided by Europe's Meteorological Satellite

  18. Realizing the Potential of Information Resources: Information, Technology, and Services. Track 3: Serving Clients with Client/Server.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Eight papers are presented from the 1995 CAUSE conference track on client/server issues faced by managers of information technology at colleges and universities. The papers include: (1) "The Realities of Client/Server Development and Implementation" (Mary Ann Carr and Alan Hartwig), which examines Carnegie Mellon University's transition…

  19. 78 FR 49586 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... Market Maker Standard quote server as a gateway for communicating eQuotes to MIAX. Because of the... connect the Limited Service Ports to independent servers that host their eQuote and purge functionality... same server for all of their Market Maker quoting activity. Currently, Market Makers in the MIAX System...

  20. 78 FR 70615 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-26

    ... rather than forcing them to use their Market Maker Standard quote server as a gateway for communicating e... technical flexibility to connect additional Limited Service Ports to independent servers that host their e... mitigate the risk of using the same server for all of their Market Maker quoting activity. By using the...
