FireProt: web server for automated design of thermostable proteins
Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas
2017-01-01
There is continuing interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for predicting the effect of mutations on protein stability have been developed recently. However, the existing tools typically predict only single-point mutations with a small effect on protein stability, and these have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074
2010-10-01
Requirements (Application Server): BEA WebLogic Express 9.2 or higher; Java v5; Apache Struts v2; Hibernate v2; C3PO; SQL*Net client / JDBC; Database Server...designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile Smartphones; a Java-based framework utilizing Apache...Technology Requirements: the recommended technologies are as follows (Technology / Use / Requirements): Java Application, which provides the backend application
Condie, Brian G; Urbanski, William M
2014-01-01
Effective tools for searching the biomedical literature are essential for identifying reagents or mouse strains as well as for effective experimental design and informed interpretation of experimental results. We have built the Textpresso Site Specific Recombinases (Textpresso SSR) Web server to enable researchers who use mice to perform in-depth searches of a rapidly growing and complex part of the mouse literature. Our Textpresso Web server provides an interface for searching the full text of most of the peer-reviewed publications that report the characterization or use of mouse strains that express Cre or Flp recombinase. The database also contains most of the publications that describe the characterization or analysis of strains carrying conditional alleles or transgenes that can be inactivated or activated by site-specific recombinases such as Cre or Flp. Textpresso SSR complements the existing online databases that catalog Cre and Flp expression patterns by providing a unique online interface for the in-depth text mining of the site specific recombinase literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Chin
This Technical Note describes how the Zettar team came up with a data transfer cluster design that convincingly proved the feasibility of using high-density servers for high-performance Big Data transfers. It then outlines the tests, operations, and observations that address a potential over-heating concern regarding the use of Non-Volatile Memory Host Controller Interface Specification (NVMHCI aka NVM Express or NVMe) Gen 3 PCIe SSD cards in high-density servers. Finally, it points out the possibility of developing a new generation of high-performance Science DMZ data transfer system for the data-intensive research community and commercial enterprises.
SEGEL: A Web Server for Visualization of Smoking Effects on Human Lung Gene Expression.
Xu, Yan; Hu, Brian; Alnajm, Sammy S; Lu, Yin; Huang, Yangxin; Allen-Gipson, Diane; Cheng, Feng
2015-01-01
Cigarette smoking is a major cause of death worldwide resulting in over six million deaths per year. Cigarette smoke contains complex mixtures of chemicals that are harmful to nearly all organs of the human body, especially the lungs. Cigarette smoking is considered the major risk factor for many lung diseases, particularly chronic obstructive pulmonary diseases (COPD) and lung cancer. However, the underlying molecular mechanisms of smoking-induced lung injury associated with these lung diseases still remain largely unknown. Expression microarray techniques have been widely applied to detect the effects of smoking on gene expression in different human cells in the lungs. These projects have provided a lot of useful information for researchers to understand the potential molecular mechanism(s) of smoke-induced pathogenesis. However, a user-friendly web server that would allow scientists to quickly query these data sets and compare the smoking effects on gene expression across different cells had not yet been established. For that reason, we have integrated eight public expression microarray data sets from tracheal epithelial cells, large airway epithelial cells, small airway epithelial cells, and alveolar macrophages into an online web server called SEGEL (Smoking Effects on Gene Expression of Lung). Users can query gene expression patterns across these cells from smokers and nonsmokers by gene symbols, and find the effects of smoking on the gene expression of lungs from this web server. Sex differences in response to smoking are also shown. The relationship between gene expression and cigarette smoking consumption was calculated and is shown on the server. The current version of SEGEL web server contains 42,400 annotated gene probe sets represented on the Affymetrix Human Genome U133 Plus 2.0 platform. SEGEL will be an invaluable resource for researchers interested in the effects of smoking on gene expression in the lungs. The server also provides useful information for drug development against smoking-related diseases. The SEGEL web server is available online at http://www.chengfeng.info/smoking_database.html.
LocExpress: a web server for efficiently estimating expression of novel transcripts.
Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge
2016-12-22
The temporally and spatially specific expression pattern of a transcript in multiple tissues and cell types can provide key clues about its function. While several gene expression atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e. novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper around an RNA-Seq quantification algorithm, LocExpress efficiently reduces the time cost by making abundance estimation calls within the minimum spanning bundle region of the input transcripts. For a given novel gene model, such a local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn .
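To make the local strategy concrete, below is a rough, illustrative sketch (not LocExpress's actual code) of deriving a minimum spanning bundle region for a novel transcript: the locus is grown until it covers every annotated gene it overlaps, and only reads falling inside that window would then be passed on to quantification. The coordinates and gene models are invented for the example.

```python
# Hypothetical sketch of a "minimum spanning bundle" around a novel transcript.
def minimum_spanning_bundle(novel, annotated_genes):
    """novel = (chrom, start, end); annotated_genes = list of the same tuples."""
    chrom, start, end = novel
    changed = True
    while changed:                       # grow until no further gene overlaps the window
        changed = False
        for g_chrom, g_start, g_end in annotated_genes:
            if g_chrom == chrom and g_start < end and g_end > start:
                new_start, new_end = min(start, g_start), max(end, g_end)
                if (new_start, new_end) != (start, end):
                    start, end, changed = new_start, new_end, True
    return chrom, start, end

novel_tx = ("chr1", 120_500, 123_000)
genes = [("chr1", 119_000, 121_000), ("chr1", 122_800, 126_000), ("chr1", 200_000, 210_000)]
print(minimum_spanning_bundle(novel_tx, genes))   # ('chr1', 119000, 126000)
```

Quantifying only the reads inside this window, rather than the whole alignment, is what keeps the per-sample cost low enough for an interactive server.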
CEM-designer: design of custom expression microarrays in the post-ENCODE Era.
Arnold, Christian; Externbrink, Fabian; Hackermüller, Jörg; Reiche, Kristin
2014-11-10
Microarrays are widely used in gene expression studies, and custom expression microarrays are popular for monitoring expression changes of a customer-defined set of genes. However, the complexity of transcriptomes uncovered recently makes custom expression microarray design a non-trivial task. Pervasive transcription and alternative processing of transcripts generate a wealth of interwoven transcripts, which requires well-considered probe design strategies and is largely neglected in existing approaches. We developed the web server CEM-Designer that facilitates microarray-platform-independent design of custom expression microarrays for complex transcriptomes. CEM-Designer covers (i) the collection and generation of a set of unique target sequences from different sources and (ii) the selection of a set of sensitive and specific probes that optimally represents the target sequences. Probe design itself is left to third party software to ensure that probes meet provider-specific constraints. CEM-Designer is available at http://designpipeline.bioinf.uni-leipzig.de. Copyright © 2014 Elsevier B.V. All rights reserved.
Nadkarni, P M
1997-08-01
Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
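As a rough illustration of the kind of translation described, and not CL's actual grammar or Sybase schema, the sketch below converts a flat Boolean query with `*` wildcards into one parameterized SQL statement; the table and column names are assumptions.

```python
# Hypothetical translation of a flat Boolean query expression into SQL.
def query_to_sql(expression: str):
    """Translate e.g. "myocardial AND infarct*" into a parameterized SELECT.

    Only flat AND/OR chains are handled here; the real Concept Locator
    grammar also supports parenthesized (nested) sub-expressions.
    """
    sql_parts, params = [], []
    pending_op = None
    for tok in expression.split():
        if tok.upper() in ("AND", "OR"):
            pending_op = tok.upper()
            continue
        if sql_parts:                             # join to the previous clause
            sql_parts.append(pending_op or "AND")
        sql_parts.append("term LIKE ?")
        params.append(tok.replace("*", "%"))      # '*' wildcard -> SQL '%' pattern
        pending_op = None
    return f"SELECT concept_id, term FROM concept_terms WHERE {' '.join(sql_parts)}", params

sql, params = query_to_sql("myocardial AND infarct*")
print(sql)     # SELECT concept_id, term FROM concept_terms WHERE term LIKE ? AND term LIKE ?
print(params)  # ['myocardial', 'infarct%']
```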
TIPMaP: a web server to establish transcript isoform profiles from reliable microarray probes.
Chitturi, Neelima; Balagannavar, Govindkumar; Chandrashekar, Darshan S; Abinaya, Sadashivam; Srini, Vasan S; Acharya, Kshitish K
2013-12-27
Standard 3' Affymetrix gene expression arrays have contributed a significantly higher volume of existing gene expression data than other microarray platforms. These arrays were designed to identify differentially expressed genes, but not their alternatively spliced transcript forms. No resource can currently identify the expression patterns of specific mRNA forms using these microarray data, even though it is possible to do this. We report a web server for expression profiling of alternatively spliced transcripts using microarray data sets from 31 standard 3' Affymetrix arrays for human, mouse and rat species. The tool has been experimentally validated for mRNAs transcribed or not-detected in a human disease condition (non-obstructive azoospermia, a male infertility condition). About 4000 gene expression datasets were downloaded from a public repository. 'Good probes' with complete coverage and identity to the latest reference transcript sequences were first identified. Using them, 'transcript-specific probe-clusters' were derived for each platform and used to identify the expression status of possible transcripts. The web server can lead the user to datasets corresponding to specific tissues and conditions via identifiers of the microarray studies or hybridizations, keywords, official gene symbols or reference transcript identifiers. It can identify, in the tissues and conditions of interest, about 40% of known transcripts as 'transcribed', 'not-detected' or 'differentially regulated'. Corresponding additional information for probes, genes, transcripts and proteins can be viewed too. We identified the expression of transcripts in a specific clinical condition and validated a few of these transcripts by experiments (using reverse transcription followed by polymerase chain reaction). The experimental observations agreed with the web server results more often than they contradicted them. The tool is accessible at http://resource.ibab.ac.in/TIPMaP. The newly developed online tool forms a reliable means for identification of alternatively spliced transcript isoforms that may be differentially expressed in various tissues, cell types or physiological conditions. Thus, by making better use of existing data, TIPMaP avoids dependence on precious tissue samples in experiments aimed at establishing expression profiles of alternative splice forms--at least in some cases.
Expitope: a web server for epitope expression.
Haase, Kerstin; Raffegerst, Silke; Schendel, Dolores J; Frishman, Dmitrij
2015-06-01
Adoptive T cell therapies based on the introduction of new T cell receptors (TCRs) into patient recipient T cells are a promising new treatment for various kinds of cancers. A major challenge, however, is the choice of target antigens. If an engineered TCR can cross-react with self-antigens in healthy tissue, the side-effects can be devastating. We present the first web server for assessing epitope sharing when designing new potential lead targets. We enable users to find all known proteins containing their peptide of interest. The web server returns not only exact matches, but also approximate ones, allowing a number of mismatches of the user's choice. For the identified candidate proteins the expression values in various healthy tissues, representing all vital human organs, are extracted from RNA Sequencing (RNA-Seq) data as well as from some cancer tissues as a control. All results are returned to the user sorted by a score, which is calculated using well-established methods and tools for immunological predictions. It depends on the probability that the epitope is created by proteasomal cleavage and its affinities to the transporter associated with antigen processing and the major histocompatibility complex class I alleles. With this framework, we hope to provide a helpful tool to exclude potential cross-reactivity in the early stage of TCR selection for use in design of adoptive T cell immunotherapy. The Expitope web server can be accessed via http://webclu.bio.wzw.tum.de/expitope. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Manijak, Mieszko P; Nielsen, Henrik B
2011-06-11
Although systematic analysis of gene annotation is a powerful tool for interpreting gene expression data, it is sometimes blurred by incomplete gene annotation, missing expression responses of key genes and secondary gene expression responses. These shortcomings may be partially circumvented by instead matching gene expression signatures to signatures of other experiments. To facilitate this, we present the Functional Association Response by Overlap (FARO) server, which matches input signatures to a compendium of 242 gene expression signatures extracted from more than 1700 Arabidopsis microarray experiments. We hereby present a publicly available tool for robust characterization of Arabidopsis gene expression experiments, which can point to similar experimental factors in other experiments. The server is available at http://www.cbs.dtu.dk/services/faro/.
NASA Astrophysics Data System (ADS)
Antony, Joby; Mathuria, D. S.; Chaudhary, Anup; Datta, T. S.; Maity, T.
2017-02-01
Cryogenic networks for linear accelerator operations demand a large number of cryogenic sensors, associated instruments and other control instrumentation to measure, monitor and control different cryogenic parameters remotely. Here we describe an alternative approach based on six types of newly designed integrated intelligent cryogenic instruments, called device servers, each of which combines the complete sensor-specific analog front-end instrumentation with a common digital back-end HTTP server, yielding a crateless, PLC-free model of controls and data acquisition. These sensor-specific instruments, viz. the LHe server, LN2 server, control output server, pressure server, vacuum server and temperature server, are deployed entirely over LAN for the cryogenic operations of the IUAC linac (Inter University Accelerator Centre linear accelerator), New Delhi. This indigenous design offers salient features such as global connectivity, low cost due to the crateless model, simple signal processing due to the integrated design, reduced cabling, and device interconnectivity.
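A minimal sketch of the device-server idea, assuming a Python HTTP back-end and a stubbed analog front-end; the endpoint path, port and returned reading are illustrative assumptions, not the IUAC implementation.

```python
# Hypothetical LHe-level device server: HTTP back-end plus a stubbed sensor front-end.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_level() -> float:
    """Stub for the analog front-end; a real device server would read the
    liquid-helium level sensor electronics here."""
    return 72.4  # percent, placeholder value

class LHeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/lhe/level":
            body = json.dumps({"level_percent": read_level()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any client on the LAN can then poll http://<device-ip>:8080/lhe/level
    HTTPServer(("0.0.0.0", 8080), LHeHandler).serve_forever()
```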
Dittmar, W James; McIver, Lauren; Michalak, Pawel; Garner, Harold R; Valdez, Gregorio
2014-07-01
The wealth of publicly available gene expression and genomic data provides unique opportunities for computational inference to discover groups of genes that function to control specific cellular processes. Such genes are likely to have co-evolved and be expressed in the same tissues and cells. Unfortunately, the expertise and computational resources required to compare tens of genomes and gene expression data sets make this type of analysis difficult for the average end-user. Here, we describe the implementation of a web server that predicts genes involved in affecting specific cellular processes together with a gene of interest. We termed the server 'EvoCor', to denote that it detects functional relationships among genes through evolutionary analysis and gene expression correlation. This web server integrates profiles of sequence divergence derived by a Hidden Markov Model (HMM) and tissue-wide gene expression patterns to determine putative functional linkages between pairs of genes. This server is easy to use and freely available at http://pilot-hmm.vbi.vt.edu/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
DiRE: identifying distant regulatory elements of co-expressed genes
Gotea, Valer; Ovcharenko, Ivan
2008-01-01
Regulation of gene expression in eukaryotic genomes is established through a complex cooperative activity of proximal promoters and distant regulatory elements (REs) such as enhancers, repressors and silencers. We have developed a web server named DiRE, based on the Enhancer Identification (EI) method, for predicting distant regulatory elements in higher eukaryotic genomes, namely for determining their chromosomal location and functional characteristics. The server uses gene co-expression data, comparative genomics and profiles of transcription factor binding sites (TFBSs) to determine TFBS-association signatures that can be used for discriminating specific regulatory functions. DiRE's unique feature is its ability to detect REs outside of proximal promoter regions, as it takes advantage of the full gene locus to conduct the search. DiRE can predict common REs for any set of input genes for which the user has prior knowledge of co-expression, co-function or other biologically meaningful grouping. The server predicts function-specific REs consisting of clusters of specifically-associated TFBSs and it also scores the association of individual transcription factors (TFs) with the biological function shared by the group of input genes. Its integration with the Array2BIO server allows users to start their analysis with raw microarray expression data. The DiRE web server is freely available at http://dire.dcode.org. PMID:18487623
Embedded controller for GEM detector readout system
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dominik, Wojciech; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek
2013-10-01
This paper describes the embedded controller used for the multichannel readout system for the GEM detector. The controller is based on an embedded Mini-ITX mainboard running the GNU/Linux operating system. The controller offers two interfaces to communicate with the FPGA-based readout system: FPGA configuration and diagnostics are controlled via a low-speed USB-based interface, while high-speed setup of the readout parameters and reception of the measured data are handled by the PCI Express (PCIe) interface. Hardware access is synchronized by a dedicated server written in C. Multiple clients may connect to this server via a TCP/IP network, and different priorities are assigned to individual clients. Specialized protocols have been implemented both for low-level access at the register level and for high-level access with transfer of structured data using the "msgpack" protocol. High-level functionalities have been split between multiple TCP/IP servers for parallel operation. The status of the system may be checked, and basic maintenance may be performed, via a web interface, while expert access is possible via an SSH server. The system was designed with reliability and flexibility in mind.
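The low-level register-access exchange mentioned above might look roughly like the following sketch: msgpack-encoded request/response messages framed over TCP. The 4-byte length prefix, register addresses and message fields are assumptions for illustration, not the actual GEM readout protocol.

```python
# Hypothetical register-access server: length-prefixed msgpack messages over TCP.
import socketserver
import struct
import msgpack  # pip install msgpack

REGISTERS = {0x0010: 0, 0x0014: 0}  # stand-in for the FPGA register file

class RegisterHandler(socketserver.StreamRequestHandler):
    def handle(self):
        while True:
            header = self.rfile.read(4)
            if len(header) < 4:
                break                                   # client closed the connection
            (length,) = struct.unpack(">I", header)
            request = msgpack.unpackb(self.rfile.read(length))
            if request["op"] == "write":
                REGISTERS[request["addr"]] = request["value"]
                reply = {"status": "ok"}
            else:                                       # "read"
                reply = {"status": "ok", "value": REGISTERS.get(request["addr"], 0)}
            payload = msgpack.packb(reply)
            self.wfile.write(struct.pack(">I", len(payload)) + payload)

if __name__ == "__main__":
    socketserver.ThreadingTCPServer(("0.0.0.0", 5000), RegisterHandler).serve_forever()
```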
Paying for Express Checkout: Competition and Price Discrimination in Multi-Server Queuing Systems
Deck, Cary; Kimbrough, Erik O.; Mongrain, Steeve
2014-01-01
We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting cost-based-price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single queue seller focuses on the patient shoppers thereby driving down prices and profits while increasing consumer surplus. PMID:24667809
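To illustrate the price-versus-waiting trade-off driving the model, here is a toy sketch in which a shopper chooses the seller minimizing posted price plus waiting cost times expected wait, using the textbook M/M/1 expectation W = 1/(mu - lambda) as a stand-in; the prices and rates are invented and this is not the paper's calibrated model.

```python
# Toy shopper-choice rule under waiting costs (illustrative numbers only).
def mm1_wait(arrival_rate: float, service_rate: float) -> float:
    """Expected time in system for an M/M/1 queue (requires arrival < service rate)."""
    assert arrival_rate < service_rate
    return 1.0 / (service_rate - arrival_rate)

def chosen_seller(prices, arrival_rates, service_rate, waiting_cost):
    """Index of the seller with the lowest total perceived cost (price + waiting cost)."""
    costs = [p + waiting_cost * mm1_wait(lam, service_rate)
             for p, lam in zip(prices, arrival_rates)]
    return min(range(len(costs)), key=costs.__getitem__)

# Seller 0 charges an express premium but its queue is shorter, so the
# impatient shopper (high waiting cost) still prefers it.
print(chosen_seller(prices=[5.0, 4.0], arrival_rates=[0.4, 0.8],
                    service_rate=1.0, waiting_cost=2.0))   # -> 0
```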
File Server-Based CD-ROM Networking: Using SCSI Express.
ERIC Educational Resources Information Center
McQueen, Howard
1992-01-01
Provides guidelines for evaluating SCSI Express Novell 386, a new product allowing CD-ROM drives to be attached to a Netware 3.11 file server, increasing CD-ROM networking capability. Specific limitations concerning software, hardware, and human resources are outlined, as well as its unique features and potential for future networking uses. (EA)
psRNATarget: a plant small RNA target analysis server
Dai, Xinbin; Zhao, Patrick Xuechun
2011-01-01
Plant endogenous non-coding short small RNAs (20–24 nt), including microRNAs (miRNAs) and a subset of small interfering RNAs (ta-siRNAs), play important roles in gene expression regulatory networks (GRNs). For example, many transcription factors and development-related genes have been reported as targets of these regulatory small RNAs. Although a number of miRNA target prediction algorithms and programs have been developed, most of them were designed for animal miRNAs, which are significantly different from plant miRNAs in the target recognition process. These differences demand the development of separate plant miRNA (and ta-siRNA) target analysis tool(s). We present psRNATarget, a plant small RNA target analysis server, which features two important analysis functions: (i) reverse complementary matching between small RNA and target transcript using a proven scoring schema, and (ii) target-site accessibility evaluation by calculating unpaired energy (UPE) required to ‘open’ the secondary structure around the small RNA’s target site on the mRNA. psRNATarget incorporates recent discoveries in plant miRNA target recognition, e.g. it distinguishes translational and post-transcriptional inhibition, and it reports the number of small RNA/target site pairs that may affect small RNA binding activity to the target transcript. The psRNATarget server is designed for high-throughput analysis of next-generation data with an efficient distributed computing back-end pipeline that runs on a Linux cluster. The server front-end integrates three simplified user-friendly interfaces to accept user-submitted or preloaded small RNAs and transcript sequences, and outputs a comprehensive list of small RNA/target pairs along with online tools for batch downloading, keyword searching and result sorting. The psRNATarget server is freely available at http://plantgrn.noble.org/psRNATarget/. PMID:21622958
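A simplified sketch of the reverse-complement matching step follows; the penalty values (1.0 per mismatch, 0.5 per G:U wobble, doubled within the 5' seed region) reflect a commonly used plant target-scoring convention and are illustrative rather than psRNATarget's exact schema, and the UPE/accessibility step is omitted.

```python
# Illustrative small RNA / target-site complementarity scoring (lower is better).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pair_penalty(srna_base: str, target_base: str) -> float:
    if COMPLEMENT[srna_base] == target_base:
        return 0.0                                   # Watson-Crick pair
    if {srna_base, target_base} == {"G", "U"}:
        return 0.5                                   # G:U wobble pair
    return 1.0                                       # mismatch

def target_score(srna: str, site: str, seed=range(1, 13)) -> float:
    """Score an ungapped duplex; the site string is written 3'->5' so that
    positions line up with the 5'->3' small RNA."""
    assert len(srna) == len(site)
    score = 0.0
    for i, (s, t) in enumerate(zip(srna, site)):
        p = pair_penalty(s, t)
        score += 2 * p if i in seed else p           # seed-region penalties are doubled
    return score

srna = "UGGAGAAGCAGGGCACGUGCA"
site = "".join(COMPLEMENT[b] for b in srna)          # a perfectly complementary site
print(target_score(srna, site))                      # 0.0
```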
MADGE: scalable distributed data management software for cDNA microarrays.
McIndoe, Richard A; Lanzen, Aaron; Hurtz, Kimberly
2003-01-01
The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanism of diseases, monitor the disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the gene expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective data management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. This application was designed using a 3-tier client server model. The data access layer (1st tier) contains the relational database system tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an internet based user interface that contains both client and server side code for dynamic interactions with the user. This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.
Interfaces for Distributed Systems of Information Servers.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…
Experimental parametric study of servers cooling management in data centers buildings
NASA Astrophysics Data System (ADS)
Nada, S. A.; Elfeky, K. E.; Attia, Ali M. A.; Alshaer, W. G.
2017-06-01
A parametric study of air flow and cooling management of data center servers is experimentally conducted for different design conditions. A physical scale model of a data center accommodating one rack of four servers was designed and constructed for testing purposes. Front and rear rack and server temperature distributions and supply/return heat indices (SHI/RHI) are used to evaluate data center thermal performance. Experiments were conducted to parametrically study the effects of perforated-tile opening ratio, server power load variation and rack power density. The results showed that (1) a perforated tile of 25% opening ratio provides the best results among the other opening ratios, (2) the optimum benefit of cold air in server cooling is obtained with uniform power loading of the servers, and (3) increasing power density decreases air recirculation but increases air bypass and server temperatures. The present results are compared with previous experimental and CFD results and fair agreement was found.
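For reference, the supply and return heat indices used above are commonly defined in the data-center literature roughly as follows; this is a hedged restatement of the usual definitions, not necessarily the exact form used in this study.

```latex
\[
\mathrm{SHI} \;=\; \frac{\delta Q}{Q + \delta Q}
  \;\approx\; \frac{\sum_{\mathrm{racks}}\left(T^{\mathrm{in}}_{\mathrm{rack}} - T_{\mathrm{supply}}\right)}
                   {\sum_{\mathrm{racks}}\left(T^{\mathrm{out}}_{\mathrm{rack}} - T_{\mathrm{supply}}\right)},
\qquad
\mathrm{RHI} \;=\; \frac{Q}{Q + \delta Q} \;=\; 1 - \mathrm{SHI},
\]
where $Q$ is the heat dissipated by the servers and $\delta Q$ is the heat picked up
by the cold supply air through recirculation before it enters the racks; a low SHI
(and correspondingly high RHI) therefore indicates little hot-air recirculation.
```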
An Evaluation of Alternative Designs for a Grid Information Service
NASA Technical Reports Server (NTRS)
Smith, Warren; Waheed, Abdul; Meyers, David; Yan, Jerry; Kwak, Dochan (Technical Monitor)
2001-01-01
The Globus information service wasn't working well. There were many updates of data from Globus daemons which saturated the single server and users couldn't retrieve information. We created a second server for NASA and Alliance. Things were great on that server, but a bit slow on the other server. We needed to know exactly how the information service was being used. What were the best servers and configurations? This viewgraph presentation gives an overview of the evaluation of alternative designs for a Grid Information Service. Details are given on the workload characterization, methodology used, and the performance evaluation.
aGEM: an integrative system for analyzing spatial-temporal gene-expression information
Jiménez-Lozano, Natalia; Segura, Joan; Macías, José Ramón; Vega, Juanjo; Carazo, José María
2009-01-01
Motivation: The work presented here describes the ‘anatomical Gene-Expression Mapping (aGEM)’ Platform, a development conceived to integrate phenotypic information with the spatial and temporal distributions of genes expressed in the mouse. The aGEM Platform has been built by extending the Distributed Annotation System (DAS) protocol, which was originally designed to share genome annotations over the WWW. DAS is a client-server system in which a single client integrates information from multiple distributed servers. Results: The aGEM Platform provides information to answer three main questions. (i) Which genes are expressed in a given mouse anatomical component? (ii) In which mouse anatomical structures are a given gene or set of genes expressed? And (iii) is there any correlation among these findings? Currently, this Platform includes several well-known mouse resources (EMAGE, GXD and GENSAT), hosting gene-expression data mostly obtained from in situ techniques together with a broad set of image-derived annotations. Availability: The Platform is optimized for Firefox 3.0 and it is accessed through a friendly and intuitive display: http://agem.cnb.csic.es Contact: natalia@cnb.csic.es Supplementary information: Supplementary data are available at http://bioweb.cnb.csic.es/VisualOmics/aGEM/home.html and http://bioweb.cnb.csic.es/VisualOmics/index_VO.html and Bioinformatics online. PMID:19592395
T-Epitope Designer: A HLA-peptide binding prediction server.
Kangueane, Pandjassarame; Sakharkar, Meena Kishore
2005-05-15
The current challenge in synthetic vaccine design is the development of a methodology to identify and test short antigen peptides as potential T-cell epitopes. Recently, we described a HLA-peptide binding model (using structural properties) capable of predicting peptides binding to any HLA allele. Consequently, we have developed a web server named T-EPITOPE DESIGNER to facilitate HLA-peptide binding prediction. The prediction server is based on a model that defines peptide binding pockets using information gleaned from X-ray crystal structures of HLA-peptide complexes, followed by the estimation of peptide binding to binding pockets. Thus, the prediction server enables the calculation of peptide binding to HLA alleles. This model is superior to many existing methods because of its potential application to any given HLA allele whose sequence is clearly defined. The web server finds potential application in T cell epitope vaccine design. http://www.bioinformation.net/ted/
ERIC Educational Resources Information Center
Simons-Morton, Bruce G.; Cummings, Sharon Snider
1997-01-01
Evaluates the impact of beverage servers' interventions at five establishments participating in the Houston Techniques for Effective Alcohol Management (TEAM) program. The intervention included server training, a designated-driver program, and "Safe Ride Home" taxi vouchers. Findings are discussed within the context of scant public and…
ERIC Educational Resources Information Center
de Miranda, John
The field of alcohol server awareness and training has grown dramatically in the past several years and the idea of training servers to reduce alcohol problems has become a central fixture in the current alcohol policy debate. The San Mateo County, California Server Information Program (SIP) is a community-based prevention strategy designed to…
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
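A back-of-the-envelope sketch of why the variable-frequency clock shortens the redundant buffers: with a fixed sample clock, every segment of a sequence costs duration times clock rate buffer words even when nothing changes, whereas a change-driven clock costs one word per output update. The segment durations and the 10 MHz figure below are invented for illustration.

```python
# Buffer-length comparison: fixed sample clock vs. change-driven (variable) clock.
FIXED_CLOCK_HZ = 10_000_000                  # hypothetical 10 MHz fixed update clock

# (segment duration in seconds, number of output changes within the segment)
sequence_segments = [
    (0.000001, 10),    # 1 us burst of fast updates
    (2.0, 1),          # 2 s hold: one change, then the outputs stay constant
    (0.0005, 50),      # 500 us ramp sampled in 50 steps
]

fixed_words    = sum(int(d * FIXED_CLOCK_HZ) for d, _ in sequence_segments)
variable_words = sum(changes for _, changes in sequence_segments)

print(f"fixed-clock buffer:    {fixed_words:,} words")     # ~20 million words
print(f"variable-clock buffer: {variable_words:,} words")  # 61 words
```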
Xu, Huayong; Yu, Hui; Tu, Kang; Shi, Qianqian; Wei, Chaochun; Li, Yuan-Yuan; Li, Yi-Xue
2013-01-01
We are witnessing rapid progress in the development of methodologies for building combinatorial gene regulatory networks involving both TFs (transcription factors) and miRNAs (microRNAs). A few tools are available for these tasks, but most of them are not easy to use and are not accessible online. A web server is especially needed in order to allow users to upload experimental expression datasets and build combinatorial regulatory networks corresponding to their particular contexts. In this work, we compiled putative TF-gene, miRNA-gene and TF-miRNA regulatory relationships from forward-engineering pipelines and curated them as built-in data libraries. We streamlined the R code of our two separate forward-and-reverse engineering algorithms for combinatorial gene regulatory network construction and formalized them as two major functional modules. As a result, we released the cGRNB (combinatorial Gene Regulatory Networks Builder): a web server for constructing combinatorial gene regulatory networks through integrated engineering of seed-matching sequence information and gene expression datasets. The cGRNB enables two major network-building modules, one for MPGE (miRNA-perturbed gene expression) datasets and the other for parallel miRNA/mRNA expression datasets. A miRNA-centered two-layer combinatorial regulatory cascade is the output of the first module, and a comprehensive genome-wide network involving all three types of combinatorial regulation (TF-gene, TF-miRNA, and miRNA-gene) is the output of the second module. In this article we propose cGRNB, a web server for building combinatorial gene regulatory networks through integrated engineering of seed-matching sequence information and gene expression datasets. Since parallel miRNA/mRNA expression datasets are rapidly accumulating with the advance of next-generation sequencing techniques, cGRNB will be a very useful tool for researchers to build combinatorial gene regulatory networks based on expression datasets. The cGRNB web server is free and available online at http://www.scbit.org/cgrnb.
Modular Mount Control System for Telescopes
NASA Astrophysics Data System (ADS)
Mooney, J.; Cleis, R.; Kyono, T.; Edwards, M.
The Space Observatory Control Kit (SpOCK) is the hardware, computers and software used to run small and large telescopes in the RDS division of the Air Force Research Laboratories (AFRL). The system is used to track earth satellites, celestial objects, terrestrial objects and aerial objects. The system will track general targets when provided with state vectors in one of five coordinate systems. Client-to-server and server-to-gimbals communication occurs via human-readable s-expressions that may be evaluated by the computer language called Racket. Software verification is achieved by scripts that exercise these expressions by sending them to the server and receiving the expressions that the server evaluates. This paper describes the adaptation of a modular mount control system, developed primarily for LEO satellite imaging, to large and small portable AFRL telescopes, with the goal of orbit determination and the generation of satellite metrics.
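As a toy illustration of the s-expression style of exchange described above, a client might send a textual expression over TCP and read back the server's evaluated reply; the command name, port and reply format are invented, not SpOCK's actual vocabulary.

```python
# Hypothetical s-expression client: send one expression, read one evaluated reply.
import socket

def send_sexpr(host: str, port: int, expression: str) -> str:
    with socket.create_connection((host, port)) as sock:
        sock.sendall((expression + "\n").encode())
        return sock.makefile().readline().strip()

# Example (assumes a compatible server is listening):
# reply = send_sexpr("mount-server.local", 9000, "(get-mount-state)")
# print(reply)   # e.g. "(mount-state (az 181.42) (el 44.97))"
```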
NASA Technical Reports Server (NTRS)
Lyle, Stacey D.
2009-01-01
A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, determining whether the rover(s) is/are within a set of boundaries or a specific area before granting access to critical geospatial information. The advantage is that the system only admits those within the designated geospatial boundaries or areas into the server. The Geospatial Authentication software has two parts: Server and Client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is GUI Windows CE software, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
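A minimal sketch of the server-side geofence decision implied above, assuming the client reports a GPS-derived longitude/latitude and the server admits it only if the point lies inside a configured boundary polygon; the ray-casting test and the coordinates are illustrative, not the package's actual logic.

```python
# Hypothetical geofence check: admit a client only when its position is in-bounds.
def inside_boundary(lon: float, lat: float, polygon) -> bool:
    """Even-odd (ray casting) point-in-polygon test on (lon, lat) vertex pairs."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > lat) != (y2 > lat)
        if crosses and lon < (x2 - x1) * (lat - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Hypothetical operations area (a simple rectangle of lon/lat vertices).
AREA = [(-97.30, 27.70), (-97.20, 27.70), (-97.20, 27.80), (-97.30, 27.80)]

def authenticate(client_lon: float, client_lat: float) -> str:
    return "grant VPN access" if inside_boundary(client_lon, client_lat, AREA) else "deny access"

print(authenticate(-97.25, 27.75))   # inside  -> grant VPN access
print(authenticate(-96.90, 27.75))   # outside -> deny access
```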
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B.; Dimas, Antigone S.; Gutierrez-Arcelus, Maria; Stranger, Barbara E.; Deloukas, Panos; Dermitzakis, Emmanouil T.
2010-01-01
Summary: Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. Availability: http://www.sanger.ac.uk/resources/software/genevar Contact: emmanouil.dermitzakis@unige.ch PMID:20702402
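The kind of eQTL association Genevar lets users explore can be illustrated with a simple regression of expression on genotype dosage; the sketch below uses fabricated values and SciPy, and is not Genevar's own statistical code.

```python
# Toy eQTL test: regress expression on allele dosage (0/1/2) per sample.
from scipy.stats import linregress

genotypes  = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2, 2, 1]    # fabricated allele dosages
expression = [5.1, 4.8, 5.9, 6.2, 5.7, 7.1, 6.8, 5.0, 6.0, 7.3, 6.9, 5.8]

fit = linregress(genotypes, expression)
print(f"effect size (beta) = {fit.slope:.2f}, p-value = {fit.pvalue:.2e}")
```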
NASA Astrophysics Data System (ADS)
Aditya Parikesit, Arli; Nurdiansyah, Rizki
2018-01-01
Research on a cure for breast cancer is currently entering an interesting phase driven by transcriptomics-based methods. With the application of Next Generation Sequencing (NGS), molecular information on breast cancer could be gathered. Thus, both in silico and wet-lab research has determined that the role of the lincRNA-RoR/miR-145/ARF6 expression pathway cannot be ignored as one of the cardinal starting points for Triple-Negative Breast Cancer (TNBC). As the most hazardous type of breast cancer, TNBC should be treated with the most advanced approaches available to the scientific community. A bioinformatics approach has identified possible siRNA-based drug candidates for TNBC: it was found that siRNAs that interfere with lincRNA-RoR and ARF6 mRNA could be feasible drug candidates for TNBC. However, this claim should be validated with a more thorough thermodynamic and kinetic computational approach as a comprehensive way to comprehend their molecular repertoire. In this respect, the claim was validated using various tools, such as the RNAfold server to determine the 2D structure, the Barriers server to examine RNA folding kinetics, and the RNAeval server to validate the siRNA-target interaction. It was found that the thermodynamic and kinetic repertoire of the siRNA is indeed rational and feasible. In the end, our computational approach has shown that our designed siRNA could interact with the lincRNA-RoR/miR-145/ARF6 expression pathway.
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system.
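A rough sketch of the dispatch policy described above: each server periodically reports its load (active feedback) and a new viewer is routed to the least-loaded server in its region, falling back to the overall least-loaded server. The host names, regions and load figures are invented.

```python
# Illustrative location-aware, feedback-driven server selection for the cluster.
SERVERS = [
    {"host": "stream-east-1", "region": "east", "load": 0.62},
    {"host": "stream-east-2", "region": "east", "load": 0.35},
    {"host": "stream-west-1", "region": "west", "load": 0.20},
]

def update_load(host: str, load: float) -> None:
    """Active feedback: a server reports its current load fraction."""
    for s in SERVERS:
        if s["host"] == host:
            s["load"] = load

def pick_server(user_region: str) -> str:
    """Prefer the least-loaded server in the user's region, else least loaded overall."""
    local = [s for s in SERVERS if s["region"] == user_region]
    pool = local or SERVERS
    return min(pool, key=lambda s: s["load"])["host"]

print(pick_server("east"))          # stream-east-2
update_load("stream-east-2", 0.90)  # that server becomes busy...
print(pick_server("east"))          # ...so the next viewer goes to stream-east-1
```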
Kozakov, Dima; Grove, Laurie E.; Hall, David R.; Bohnuud, Tanggis; Mottarella, Scott; Luo, Lingqi; Xia, Bing; Beglov, Dmitri; Vajda, Sandor
2016-01-01
FTMap is a computational mapping server that identifies binding hot spots of macromolecules, i.e., regions of the surface with major contributions to the ligand binding free energy. To use FTMap, users submit a protein, DNA, or RNA structure in PDB format. FTMap samples billions of positions of small organic molecules used as probes and scores the probe poses using a detailed energy expression. Regions that bind clusters of multiple probe types identify the binding hot spots, in good agreement with experimental data. FTMap serves as basis for other servers, namely FTSite to predict ligand binding sites, FTFlex to account for side chain flexibility, FTMap/param to parameterize additional probes, and FTDyn to map ensembles of protein structures. Applications include determining druggability of proteins, identifying ligand moieties that are most important for binding, finding the most bound-like conformation in ensembles of unliganded protein structures, and providing input for fragment based drug design. FTMap is more accurate than classical mapping methods such as GRID and MCSS, and is much faster than the more recent approaches to protein mapping based on mixed molecular dynamics. Using 16 probe molecules, the FTMap server finds the hot spots of an average size protein in less than an hour. Since FTFlex performs mapping for all low energy conformers of side chains in the binding site, its completion time is proportionately longer. PMID:25855957
CDC WONDER: a cooperative processing architecture for public health.
Friede, A; Rosen, D H; Reid, J A
1994-01-01
CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813
Database construction for PromoterCAD: synthetic promoter design for mammals and plants.
Nishikata, Koro; Cox, Robert Sidney; Shimoyama, Sayoko; Yoshida, Yuko; Matsui, Minami; Makita, Yuko; Toyoda, Tetsuro
2014-03-21
Synthetic promoters can control a gene's timing, location, and expression level. The PromoterCAD web server ( http://promotercad.org ) allows the design of synthetic promoters to control plant gene expression, by novel arrangement of cis-regulatory elements. Recently, we have expanded PromoterCAD's scope with additional plant and animal data: (1) PLACE (Plant Cis-acting Regulatory DNA Elements), including various sized sequence motifs; (2) PEDB (Mammalian Promoter/Enhancer Database), including gene expression data for mammalian tissues. The plant PromoterCAD data now contains 22 000 Arabidopsis thaliana genes, 2 200 000 microarray measurements in 20 growth conditions and 79 tissue organs and developmental stages, while the new mammalian PromoterCAD data contains 679 Mus musculus genes and 65 000 microarray measurements in 96 tissue organs and cell types ( http://promotercad.org/mammal/ ). This work presents step-by-step instructions for adding both regulatory motif and gene expression data to PromoterCAD, to illustrate how users can expand PromoterCAD functionality for their own applications and organisms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
[Excerpt from a Federal Register determinations table; recoverable entries include: ... Corporation Including Express Employment Professionals (74,111); Alstom Transportation, Hornell, NY, May 14, 2009; and International Business Machines (IBM), Global Tech Serv., Server Systems, IC1, Storage, Backup, Cambridge, MA, June 10, 2009 (74,316A, 74,316B).]
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian
2011-06-01
The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds of the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.
RNAiFold: a web server for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-07-01
Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website.
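To make the notion of sequence constraints concrete, the sketch below checks a candidate design against a per-position IUPAC constraint string; the constraint and sequence are made up, and RNAiFold's real input additionally includes the target secondary structure and base-pair composition ranges.

```python
# Illustrative per-position IUPAC sequence-constraint check for a designed RNA.
IUPAC = {
    "A": {"A"}, "C": {"C"}, "G": {"G"}, "U": {"U"},
    "R": {"A", "G"}, "Y": {"C", "U"}, "S": {"G", "C"}, "W": {"A", "U"},
    "K": {"G", "U"}, "M": {"A", "C"}, "N": {"A", "C", "G", "U"},
}

def satisfies_constraints(sequence: str, constraints: str) -> bool:
    """True if every base of `sequence` is allowed by the IUPAC code at that position."""
    return len(sequence) == len(constraints) and all(
        base in IUPAC[code] for base, code in zip(sequence, constraints)
    )

print(satisfies_constraints("GCAUGGCAUGC", "GNNNRGCAUGN"))  # True
print(satisfies_constraints("GCAUGGCAUGC", "GNNNYGCAUGN"))  # False: position 5 must be C or U
```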
GSCALite: A Web Server for Gene Set Cancer Analysis.
Liu, Chun-Jie; Hu, Fei-Fei; Xia, Mengxuan; Han, Leng; Zhang, Qiong; Guo, An-Yuan
2018-05-22
The availability of cancer genomic data makes it possible to analyze genes related to cancer. Cancer is usually the result of a set of genes, and the signal of a single gene can be covered by background noise. Here, we present a web server named Gene Set Cancer Analysis (GSCALite) to analyze a set of genes in cancers with the following functional modules: (i) differential expression in tumor vs. normal, and the survival analysis; (ii) genomic variations and their survival analysis; (iii) gene expression associated cancer pathway activity; (iv) miRNA regulatory network for genes; (v) drug sensitivity for genes; (vi) normal tissue expression and eQTL for genes. GSCALite is a user-friendly web server for dynamic analysis and visualization of gene sets in cancer and drug sensitivity correlation, which will be of broad utility to cancer researchers. GSCALite is available on http://bioinfo.life.hust.edu.cn/web/GSCALite/. guoay@hust.edu.cn or zhangqiong@hust.edu.cn. Supplementary data are available at Bioinformatics online.
Designing Secure Library Networks.
ERIC Educational Resources Information Center
Breeding, Michael
1997-01-01
Focuses on designing a library network to maximize security. Discusses UNIX and file servers; connectivity to campus, corporate networks and the Internet; separation of staff from public servers; controlling traffic; the threat of network sniffers; hubs that eliminate eavesdropping; dividing the network into subnets; Switched Ethernet;…
Designing communication and remote controlling of virtual instrument network system
NASA Astrophysics Data System (ADS)
Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian
2005-01-01
In this paper, a virtual instrument network over the LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrumentation and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. This paper introduces the virtual instrument network structure in detail based on LabWindows. Essential techniques are described, including the application design of virtual instrument network communication, the client/server programming model, the realization of communication between remote PCs and the server, the handover of workstation control authority, and the server program. The virtual instrument network may also be connected to the wider Internet. Experiments and applications in an electronic-measurement virtual instrument network that has already been built up verify the practical value of this design.
2002-06-01
[Extraction fragments from a report on the SWORD database web application: the Web Server is on the same server as the SWORD database in the current version; the data could still be supported by Access, but SQL Server would be a more viable tool for a fully developed application based on the number of potential users, hence the recommendation to migrate the data to SQL Server.]
Design of Grid Portal System Based on RIA
NASA Astrophysics Data System (ADS)
Cao, Caifeng; Luo, Jianguo; Qiu, Zhixin
Grid portals are an important branch of grid research. In order to address the weak expressiveness, poor interactivity, low operating efficiency and other shortcomings of the first and second generations of grid portal systems, RIA technology was introduced. A new portal architecture was designed based on RIA and Web services. A concrete implementation scheme for the portal system is presented using Adobe Flex/Flash technology, forming a new design pattern. In its architecture, the design pattern combines the strengths of B/S and C/S models, balances the server and its clients, optimizes system performance, and achieves platform independence. In its functionality, the design pattern realizes grid service calls, provides a client interface with a rich user experience, and integrates local resources by using FABridge, LCDS, Flash Player and other components.
Design and Delivery of Multiple Server-Side Computer Languages Course
ERIC Educational Resources Information Center
Wang, Shouhong; Wang, Hai
2011-01-01
Given the emergence of service-oriented architecture, IS students need to be knowledgeable of multiple server-side computer programming languages to be able to meet the needs of the job market. This paper outlines the pedagogy of an innovative course of multiple server-side computer languages for the undergraduate IS majors. The paper discusses…
Designing indonesian teacher engagement index (itei) applications based on android
NASA Astrophysics Data System (ADS)
Manalu, S. R.; Sasmoko; Permai, S. D.; Widhoyoko, S. A.; Indrianti, Y.
2018-03-01
Teachers with a good level of engagement are able to produce students who are engaged and excel. The level of national teacher engagement should therefore serve as a reference for the level of educational success and the equity of national education. The wide geographical spread of Indonesian teachers, many in hard-to-reach areas, is a barrier to these measurements. The ITEI Android application was developed with this geographical problem in mind, so that every teacher can participate wherever they are. The ITEI app is designed by implementing Android on the client side and a load balancer on the server side. The Android ITEI client presents a questionnaire to teachers, while the load balancer distributes the answers across servers for processing, ensuring fast data processing and minimizing server failure. The results of the processing on the server are sent back to the Android client in the form of the teacher's ITEI self-profile, while the data collected and stored on the server can be used to measure the level of national teacher engagement. The result of this research is a design of the ITEI application that is ready to be implemented in order to support the data collection process for the national teacher engagement level.
INFO-RNA--a server for fast inverse RNA folding satisfying sequence constraints.
Busch, Anke; Backofen, Rolf
2007-07-01
INFO-RNA is a new web server for designing RNA sequences that fold into a user given secondary structure. Furthermore, constraints on the sequence can be specified, e.g. one can restrict sequence positions to a fixed nucleotide or to a set of nucleotides. Moreover, the user can allow violations of the constraints at some positions, which can be advantageous in complicated cases. The INFO-RNA web server allows biologists to design RNA sequences in an automatic manner. It is clearly and intuitively arranged and easy to use. The procedure is fast, as most applications are completed within seconds and it proceeds better and faster than other existing tools. The INFO-RNA web server is freely available at http://www.bioinf.uni-freiburg.de/Software/INFO-RNA/
A Web Terminology Server Using UMLS for the Description of Medical Procedures
Burgun, Anita; Denier, Patrick; Bodenreider, Olivier; Botti, Geneviève; Delamarre, Denis; Pouliquen, Bruno; Oberlin, Philippe; Lévéque, Jean M.; Lukacs, Bertrand; Kohler, François; Fieschi, Marius; Le Beux, Pierre
1997-01-01
Abstract The Model for Assistance in the Orientation of a User within Coding Systems (MAOUSSC) project has been designed to provide a representation for medical and surgical procedures that allows several applications to be developed from several viewpoints. It is based on a conceptual model, a controlled set of terms, and Web server development. The design includes the UMLS knowledge sources associated with additional knowledge about medico-surgical procedures. The model was implemented using a relational database. The authors developed a complete interface for the Web presentation, with the intermediary layer being written in PERL. The server has been used for the representation of medico-surgical procedures that occur in the discharge summaries of the national survey of hospital activities that is performed by the French Health Statistics Agency in order to produce inpatient profiles. The authors describe the current status of the MAOUSSC server and discuss their interest in using such a server to assist in the coordination of terminology tasks and in the sharing of controlled terminologies. PMID:9292841
The DICOM-based radiation therapy information system
NASA Astrophysics Data System (ADS)
Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which are more than just images. This presentation describes how a DICOM-based RT Information System Server can be built based on the PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distribution, images and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed Graphical User Interface. The non-DICOM objects were first rendered into DICOM RT Objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed on the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.
DiseaseConnect: a comprehensive web server for mechanism-based disease–disease connections
Liu, Chun-Chi; Tseng, Yu-Ting; Li, Wenyuan; Wu, Chia-Yu; Mayzus, Ilya; Rzhetsky, Andrey; Sun, Fengzhu; Waterman, Michael; Chen, Jeremy J. W.; Chaudhary, Preet M.; Loscalzo, Joseph; Crandall, Edward; Zhou, Xianghong Jasmine
2014-01-01
The DiseaseConnect (http://disease-connect.org) is a web server for analysis and visualization of a comprehensive knowledge on mechanism-based disease connectivity. The traditional disease classification system groups diseases with similar clinical symptoms and phenotypic traits. Thus, diseases with entirely different pathologies could be grouped together, leading to a similar treatment design. Such problems could be avoided if diseases were classified based on their molecular mechanisms. Connecting diseases with similar pathological mechanisms could inspire novel strategies on the effective repositioning of existing drugs and therapies. Although there have been several studies attempting to generate disease connectivity networks, they have not yet utilized the enormous and rapidly growing public repositories of disease-related omics data and literature, two primary resources capable of providing insights into disease connections at an unprecedented level of detail. Our DiseaseConnect, the first public web server, integrates comprehensive omics and literature data, including a large amount of gene expression data, Genome-Wide Association Studies catalog, and text-mined knowledge, to discover disease–disease connectivity via common molecular mechanisms. Moreover, the clinical comorbidity data and a comprehensive compilation of known drug–disease relationships are additionally utilized for advancing the understanding of the disease landscape and for facilitating the mechanism-based development of new drug treatments. PMID:24895436
Reiling, Denise M; Nusbaumer, Michael R
2007-12-01
Much has been written about the impact of the presence of a designated driver on patrons' consumption, but heretofore, its impact on the behaviour of the server has been virtually ignored. The goal of this paper, then, was to explore the potential impact of the presence of a designated driver on alcoholic beverage servers' self-reported willingness to knowingly serve an already intoxicated customer. A χ² analysis of survey data collected from 938 licensed servers in the state of Indiana, USA, was performed. Approximately 43% of the bartenders surveyed reported that they either would be or might be willing to over-serve an already intoxicated customer. Of those who answered the follow-up question as to under what conditions they would be willing to over-serve, almost 80% reported that they would do so if the patron were accompanied by a designated driver. The statistical significance of the relationship between these two variables (.000) raises the question of whether the Designated Driver Campaign has the latent function of enabling some servers to neutralize their responsibility for over-serving by disregarding other types of intoxication-related harm.
Interfacing a high performance disk array file server to a Gigabit LAN
NASA Technical Reports Server (NTRS)
Seshan, Srinivasan; Katz, Randy H.
1993-01-01
Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, Xbus board, that provides a 40MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to carefully and efficiently design the network software. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.
Yu, Kaijun
2010-07-01
This paper analyzes the design goals of a medical instrumentation standard information retrieval system. Based on the B/S structure, we established a medical instrumentation standard retrieval system in the .NET environment, using the ASP.NET C# programming language, the IIS Web server and a SQL Server 2000 database. The paper also introduces the system structure, the retrieval system modules, the system development environment and the detailed design of the system.
Secure Server Login by Using Third Party and Chaotic System
NASA Astrophysics Data System (ADS)
Abdulatif, Firas A.; zuhiar, Maan
2018-05-01
Servers are popular among companies and used by most of them, but security threats against servers make these companies concerned about using them. In this paper we therefore design a secure system based on a one-time password and third-party authentication (a smart phone). The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, and a third-party device (smart phone) as an additional level of security.
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have evolved to their best. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing and integration with other servers. Oracle, being the initiator in warehousing servers, provides a wide range of features for facilitating data warehousing. This paper is designed to review the features of data warehousing - conceptualizing the concept of data warehousing and, lastly, the features of Oracle servers for implementing a data warehouse.
minepath.org: a free interactive pathway analysis web server.
Koumakis, Lefteris; Roussos, Panos; Potamias, George
2017-07-03
MinePath (www.minepath.org) is a web-based platform that elaborates on, and radically extends, the identification of differentially expressed sub-paths in molecular pathways. Besides the network topology, the underlying MinePath algorithmic processes exploit exact gene-gene molecular relationships (e.g. activation, inhibition) and are able to identify differentially expressed pathway parts. Each pathway is decomposed into all its constituent sub-paths, which in turn are matched with corresponding gene expression profiles. The highly ranked, phenotype-inclined sub-paths are kept. Apart from the pathway analysis algorithm, the fundamental innovation of the MinePath web server concerns its advanced visualization and interactive capabilities. To our knowledge, this is the first pathway analysis server that introduces and offers visualization of the underlying and active pathway regulatory mechanisms instead of genes. Other features include live interaction, immediate visualization of functional sub-paths per phenotype and dynamic linked annotations for the engaged genes and molecular relations. The user can download not only the results but also the corresponding web viewer framework of the performed analysis. This feature provides the flexibility to immediately publish results without publishing source/expression data, and get all the functionality of a web-based pathway analysis viewer. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Client - server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
Koczyk, Grzegorz; Berezovsky, Igor N.
2008-01-01
Domain hierarchy and closed loops (DHcL) (http://sitron.bccs.uib.no/dhcl/) is a web server that delineates energy hierarchy of protein domain structure and detects domains at different levels of this hierarchy. The server also identifies closed loops and van der Waals locks, which constitute a structural basis for the protein domain hierarchy. The DHcL can be a useful tool for an express analysis of protein structures and their alternative domain decompositions. The user submits a PDB identifier(s) or uploads a 3D protein structure in a PDB format. The results of the analysis are the location of domains at different levels of hierarchy, closed loops, van der Waals locks and their interactive visualization. The server maintains a regularly updated database of domains, closed loop and van der Waals locks for all X-ray structures in PDB. DHcL server is available at: http://sitron.bccs.uib.no/dhcl. PMID:18502776
Development and process evaluation of a Web-based responsible beverage service training program.
Danaher, Brian G; Dresser, Jack; Shaw, Tracy; Severson, Herbert H; Tyler, Milagra S; Maxwell, Elisabeth D; Christiansen, Steve M
2012-09-22
Responsible beverage service (RBS) training designed to improve the appropriate service of alcohol in commercial establishments is typically delivered in workshops. Recently, Web-based RBS training programs have emerged. This report describes the formative development and subsequent design of an innovative Web-delivered RBS program, and evaluation of the impact of the program on servers' knowledge, attitudes, and self-efficacy. Formative procedures using focus groups and usability testing were used to develop a Web-based RBS training program. Professional alcohol servers (N = 112) who worked as servers and/or managers in alcohol service settings were recruited to participate. A pre-post assessment design was used to assess changes associated with using the program. Participants who used the program showed significant improvements in their RBS knowledge, attitudes, and self-efficacy. Although the current study did not directly observe and determine the impact of the intervention on server behaviors, it demonstrated that the development process incorporating input from a multidisciplinary team in conjunction with feedback from end-users resulted in creation of a Web-based RBS program that was well-received by servers and that changed relevant knowledge, attitudes, and self-efficacy. The results also help to establish a needed evidence base in support of the use of online RBS training, which has been afforded little research attention.
16 CFR 803.10 - Running of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... effected to the server maintained by the FTC for the purpose of receiving electronic filings. (iii) For... server or the date on which delivery of the attachments is effected to the designated offices as provided...
NASA Astrophysics Data System (ADS)
Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.
2005-12-01
The Southern California Seismic Network (SCSN) handles more than 2500 high-data rate channels from more than 380 seismic stations distributed across southern California. These data are imported real-time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate, because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best practices methods from computer science for implementing the software and hardware configurations for import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently amongst the network nodes, and state of the art networking redundancy technologies. To facilitate maintenance and daily operations we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. The data import is handled differently by using one primary server and a live backup server. These data import servers, run fail-over software that allows automatic role switching in case of failure from primary to shadow. Similar to the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export away from acquisition, we are also working on new approaches to separate real-time processing and rapid reliable archiving of real-time data. Further, improved network security is an integral part of the new design. Redundant firewalls will provide secure data imports, exports, and acquisition as well as DMZ zones for web servers and other publicly available servers. We will present the detailed design of this new configuration that is currently being implemented by the SCSN at Caltech. The design principals are general enough to be of use to most regional seismic networks.
NASA Astrophysics Data System (ADS)
Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-08-01
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
Rapid HIS, RIS, PACS Integration Using Graphical CASE Tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Breant, Claudine M.; Stepczyk, Frank M.; Kho, Hwa T.; Valentino, Daniel J.; Tashima, Gregory H.; Materna, Anthony T.
1994-05-01
We describe the clinical requirements of the integrated federation of databases and present our client-mediator-server design. The main body of the paper describes five important aspects of integrating information systems: (1) global schema design, (2) establishing sessions with remote database servers, (3) development of schema translators, (4) integration of global system triggers, and (5) development of job workflow scripts.
2011-01-01
Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
omiRas: a Web server for differential expression analysis of miRNAs derived from small RNA-Seq data.
Müller, Sören; Rycak, Lukas; Winter, Peter; Kahl, Günter; Koch, Ina; Rotter, Björn
2013-10-15
Small RNA deep sequencing is widely used to characterize non-coding RNAs (ncRNAs) differentially expressed between two conditions, e.g. healthy and diseased individuals, and to reveal insights into molecular mechanisms underlying condition-specific phenotypic traits. The ncRNAome is composed of a multitude of RNAs, such as transfer RNA, small nucleolar RNA and microRNA (miRNA), to name a few. Here we present omiRas, a Web server for the annotation, comparison and visualization of interaction networks of ncRNAs derived from next-generation sequencing experiments of two different conditions. The Web tool allows the user to submit raw sequencing data and results are presented as: (i) static annotation results including length distribution, mapping statistics, alignments and quantification tables for each library as well as lists of differentially expressed ncRNAs between conditions and (ii) an interactive network visualization of user-selected miRNAs and their target genes based on the combination of several miRNA-mRNA interaction databases. The omiRas Web server is implemented in Python, PostgreSQL, R and can be accessed at: http://tools.genxpro.net/omiras/.
An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-02-01
In the University of Tokyo Hospital, the improved client server HIS has been applied to clinical practice and physicians can order prescription, laboratory examination, ECG examination and radiographic examination, etc. directly by themselves and read results of these examinations, except medical signal waves, schema and image, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client server HIS utilizing object-oriented database to take the first step in dealing with digitized signal, schema and image data and show waves, graphics, and images directly to physicians by the client server HIS. The system was developed based on object-oriented analysis and design, and implemented with object-oriented database management system (OODMS) and C++ programming language. In this paper, we describe the ECG data model, functions of the storage and retrieval system, features of user interface and the result of its implementation in the HIS.
Method for a dummy CD mirror server based on NAS
NASA Astrophysics Data System (ADS)
Tang, Muna; Pei, Jing
2002-09-01
With the development of computer networks, information sharing is becoming a necessity in everyday life. The rapid development of CD-ROM and CD-ROM drive technology makes it possible to publish large databases online. After comparing many designs of dummy CD mirror databases, which embody the main products in the CD-ROM database field now and in the near future, we proposed and realized a new PC-based scheme. Our system has the following merits: it supports all kinds of CD formats; it supports many network protocols; the mirror network server is independent of the main server; and it offers a low price and very large capacity without the need for any special hardware. Preliminary experiments have verified the validity of the proposed scheme. Encouraged by its promising application prospects, we are now preparing to put it on the market. This paper discusses the design and implementation of the CD-ROM server in detail.
MODBUS APPLICATION AT JEFFERSON LAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jianxun; Seaton, Chad; Philip, Sarin
Modbus is a client/server communication model. In our applications, the embedded Ethernet device XPort is designed as the server and a SoftIOC running EPICS Modbus is the client. The SoftIOC builds a Modbus request from parameters contained in a demand that is sent by the EPICS application to the Modbus client interface. On reception of the Modbus request, the Modbus server activates a local action to read, write, or perform some other action. The main Modbus server functions are therefore to wait for a Modbus request on TCP port 502, process the request, and then build a Modbus response.
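As a rough illustration of the request/response pattern just described (not the Jefferson Lab SoftIOC code), the following Python sketch builds a single Modbus TCP "read holding registers" request and parses the reply; the device address, unit id and register range are hypothetical.

```python
import socket
import struct

def read_holding_registers(host, unit_id=1, start_addr=0, count=2, port=502):
    """Send one Modbus TCP 'read holding registers' (0x03) request and
    return the register values reported by the server (illustrative sketch)."""
    transaction_id = 1
    # PDU: function code, starting address, quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        header = sock.recv(7)                          # response MBAP header
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = sock.recv(length - 1)                   # function code + payload
    func, byte_count = body[0], body[1]
    if func != 0x03:                                   # exception response
        raise IOError("Modbus exception code %d" % body[1])
    return list(struct.unpack(">%dH" % (byte_count // 2), body[2:2 + byte_count]))

# Example call (hypothetical device address):
# print(read_holding_registers("192.168.0.50", unit_id=1, start_addr=0, count=4))
```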
Development of a Mobile User Interface for Image-based Dietary Assessment.
Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J
2010-12-31
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.
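A minimal sketch of the client-to-server leg of this configuration, assuming a hypothetical HTTP endpoint that accepts a food image and returns a JSON portion/calorie estimate (the abstract does not specify the actual transport or response format):

```python
import requests

def estimate_meal(image_path, server="https://dietary.example.org/api/estimate"):
    """Upload one food image to a (hypothetical) portion-estimation service
    and return its JSON response, mimicking the client-server flow above."""
    with open(image_path, "rb") as fh:
        resp = requests.post(server, files={"image": fh}, timeout=60)
    resp.raise_for_status()
    return resp.json()   # e.g. {"foods": [...], "kcal": 640}  (illustrative schema)

# print(estimate_meal("lunch_before.jpg"))
```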
DyNAVacS: an integrative tool for optimized DNA vaccine design.
Harish, Nagarajan; Gupta, Rekha; Agarwal, Parul; Scaria, Vinod; Pillai, Beena
2006-07-01
DNA vaccines have slowly emerged as keystones in preventive immunology due to their versatility in inducing both cell-mediated as well as humoral immune responses. The design of an efficient DNA vaccine involves choice of a suitable expression vector, ensuring optimal expression by codon optimization, engineering CpG motifs for enhancing immune responses and providing additional sequence signals for efficient translation. DyNAVacS is a web-based tool created for rapid and easy design of DNA vaccines. It follows a step-wise design flow, which guides the user through the various sequential steps in the design of the vaccine. Further, it allows restriction enzyme mapping, design of primers spanning user specified sequences and provides information regarding the vectors currently used for generation of DNA vaccines. The web version uses the Apache HTTP server. The interface was written in HTML and utilizes Common Gateway Interface scripts written in PERL for functionality. DyNAVacS is an integrated tool consisting of user-friendly programs, which require minimal information from the user. The software is available free of cost, as a web based application at URL: http://miracle.igib.res.in/dynavac/.
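For illustration only (this is not DyNAVacS code), the short Python sketch below shows the kind of sequence statistics that design steps such as CpG engineering and codon optimization rely on: counting CpG dinucleotides and tabulating codon usage for a hypothetical coding sequence.

```python
from collections import Counter

def cpg_count(seq):
    """Count CpG dinucleotides in a DNA sequence (toy motif statistic)."""
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")

def codon_usage(seq):
    """Return a relative frequency table of codons in a coding sequence."""
    codons = [seq[i:i + 3].upper() for i in range(0, len(seq) - len(seq) % 3, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

example = "ATGGCGTGCGGATTCCGCGGGTAA"   # hypothetical coding sequence
print(cpg_count(example))
print(codon_usage(example))
```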
An Improvement to a Multi-Client Searchable Encryption Scheme for Boolean Queries.
Jiang, Han; Li, Xue; Xu, Qiuliang
2016-12-01
The migration of e-health systems to cloud computing brings huge benefits, as well as some security risks. Searchable Encryption (SE) is a cryptographic scheme that can protect the confidentiality of data while still allowing the encrypted data to be used. The SE scheme proposed by Cash et al. in Crypto 2013 and its follow-up work in CCS 2013 are the most practical SE schemes that support Boolean queries at present. In their scheme, the data user has to generate search tokens from the counter number one by one and interact with the server repeatedly, until he meets the correct one, or goes through plenty of tokens to establish that there is no search result. In this paper, we make an improvement to their scheme. We allow the server to send back some information and help the user generate the exact search token in the search phase. In our scheme, there are only two rounds of interaction between server and user, and the search token has [Formula: see text] elements, where n is the number of keywords in the query expression and [Formula: see text] is the minimum number of documents that contain one of the keywords in the query expression; the computation cost of the server is [Formula: see text] modular exponentiation operations.
Sirocco Storage Server v. pre-alpha 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee
Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
NASA Astrophysics Data System (ADS)
Xu, Chong-Yao; Zheng, Xin; Xiong, Xiao-Ming
2017-02-01
With the development of the Internet of Things (IoT) and the popularity of intelligent mobile terminals, smart home systems have come into people's vision. However, due to high cost, complex installation and inconvenience, as well as network security issues, smart home systems have not been popularized. In this paper, combining Wi-Fi technology, the Android system, a cloud server and the SSL security protocol, a new smart home system is designed, with low cost, easy operation, high security and stability. The system consists of Wi-Fi smart nodes (WSNs), an Android client and a cloud server. In order to reduce system cost and installation complexity, the Wi-Fi transceiver, appliance control logic and data conversion in each WSN are handled by a single chip. In addition, all the data from the WSNs can be uploaded to the server through the home router, without having to transit through a gateway. All the appliance status information and environmental information are preserved in the cloud server. Furthermore, to ensure the security of information, the Secure Sockets Layer (SSL) protocol is used when the WSNs communicate with the server. Finally, to improve comfort and simplify operation, the Android client is designed with a room pattern, so that controlling home appliances is more realistic and more convenient.
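A minimal sketch, assuming a hypothetical cloud endpoint and message schema, of how a smart node's status report could travel over an SSL/TLS connection as described above; Python is used purely for illustration, since the paper's nodes are single-chip devices.

```python
import json
import socket
import ssl

def report_status(host, port, payload):
    """Open a TLS connection to the cloud server and send one status message
    from a Wi-Fi smart node; endpoint and message format are illustrative."""
    context = ssl.create_default_context()          # verifies the server certificate
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(json.dumps(payload).encode("utf-8") + b"\n")
            return tls.recv(1024).decode("utf-8")    # server acknowledgement

# Example (hypothetical endpoint and schema):
# report_status("cloud.example.com", 8443,
#               {"node": "WSN-01", "relay": "on", "temp_c": 24.5})
```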
Durack, Jeremy C.; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P.; Dev, Parvati
2002-01-01
Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research. PMID:12463820
Maitra, Tanmoy; Giri, Debasis
2014-12-01
Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to go to a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to protect the secret information of both parties, namely the server and the patient. Recent research includes the patient's biometric information as well as a password in the design of remote user authentication schemes, which enhances the security level. In a single-server environment, one server is responsible for providing services to all the authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for multi-server environments. In this paper, we show that in their scheme a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in a multi-server environment in which patients register only once with a root telecare server, called the registration center (RC), and then obtain services from all the telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.
Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.
Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg
2004-01-01
The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
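The majority-voting idea behind a TMR design can be sketched in a few lines; the following toy Python voter is illustrative only and is not the server's actual logic.

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant module outputs.
    Returns the agreed value and the index (0, 1 or 2) of a faulty module,
    or None if all three modules agree."""
    if a == b or a == c:
        faulty = None if (a == b == c) else (2 if a == b else 1)
        return a, faulty
    if b == c:
        return b, 0          # module 0 disagrees with the other two
    raise RuntimeError("no majority: all three modules disagree")

value, faulty = tmr_vote("frame-0042", "frame-0042", "frame-0041")
print(value, faulty)   # -> frame-0042 2  (module 2 flagged as faulty and removed)
```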
TAPIR, a web server for the prediction of plant microRNA targets, including target mimics.
Bonnet, Eric; He, Ying; Billiau, Kenny; Van de Peer, Yves
2010-06-15
We present a new web server called TAPIR, designed for the prediction of plant microRNA targets. The server offers the possibility to search for plant miRNA targets using either a fast or a precise algorithm. The precise option is much slower but is guaranteed to find less perfectly paired miRNA-target duplexes. Furthermore, the precise option allows the prediction of target mimics, which are characterized by a miRNA-target duplex having a large loop, making them undetectable by traditional tools. The TAPIR web server can be accessed at: http://bioinformatics.psb.ugent.be/webtools/tapir. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Rao, Hanumantha; Kumar, Vasanta; Srinivasa Rao, T.; Srinivasa Kumar, B.
2018-04-01
In this paper, we examine a two-stage queueing system in which arrivals are Poisson with a rate that depends on the condition of the server, namely the vacation, pre-service, operational or breakdown state. The service station is subject to breakdowns and to delays in repair caused by the non-availability of the repair facility. Service is given in two stages, the first being bulk service to all of the customers waiting in the queue and the second being individual service to each of them. The server works under an N-policy and needs a preliminary (startup) time to begin batch service after a vacation period. Startup times, uninterrupted service times, the length of each vacation period, delay times and service times follow exponential distributions. Closed-form expressions for the mean system size under the different conditions of the server are determined. Numerical investigations are conducted to study the impact of the system parameters on the optimal threshold N and the minimum expected unit cost.
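A toy discrete-event simulation of the N-policy idea (the server stays dormant until N customers accumulate, then an exponential startup is followed by exponential services until the queue empties) can help build intuition. The sketch below deliberately omits the paper's vacation, breakdown and batch-service features, and the parameter values are arbitrary.

```python
import random

def simulate_n_policy(N, lam, mu, theta, horizon=10**5, seed=1):
    """Toy event-driven simulation of an M/M/1 queue under N-policy with an
    exponential startup of rate theta; returns the time-average system size."""
    random.seed(seed)
    t, queue, area = 0.0, 0, 0.0
    state = "idle"                        # idle -> startup -> busy -> idle ...
    next_arrival = random.expovariate(lam)
    next_event = float("inf")             # startup completion or service completion
    while t < horizon:
        t_next = min(next_arrival, next_event)
        area += queue * (t_next - t)      # accumulate for the time average
        t = t_next
        if t == next_arrival:             # arrival
            queue += 1
            next_arrival = t + random.expovariate(lam)
            if state == "idle" and queue >= N:
                state = "startup"
                next_event = t + random.expovariate(theta)
        elif state == "startup":          # startup completed, begin serving
            state = "busy"
            next_event = t + random.expovariate(mu)
        else:                             # a service just finished
            queue -= 1
            if queue == 0:
                state, next_event = "idle", float("inf")
            else:
                next_event = t + random.expovariate(mu)
    return area / t

print(simulate_n_policy(N=5, lam=0.6, mu=1.0, theta=2.0))
```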
RiboMaker: computational design of conformation-based riboregulation.
Rodrigo, Guillermo; Jaramillo, Alfonso
2014-09-01
The ability to engineer control systems of gene expression is instrumental for synthetic biology. Thus, bioinformatic methods that assist such engineering are appealing because they can guide the sequence design and prevent costly experimental screening. In particular, RNA is an ideal substrate to de novo design regulators of protein expression by following sequence-to-function models. We have implemented a novel algorithm, RiboMaker, aimed at the computational, automated design of bacterial riboregulation. RiboMaker reads the sequence and structure specifications, which codify for a gene regulatory behaviour, and optimizes the sequences of a small regulatory RNA and a 5'-untranslated region for an efficient intermolecular interaction. To this end, it implements an evolutionary design strategy, where random mutations are selected according to a physicochemical model based on free energies. The resulting sequences can then be tested experimentally, providing a new tool for synthetic biology, and also for investigating the riboregulation principles in natural systems. Web server is available at http://ribomaker.jaramillolab.org/. Source code, instructions and examples are freely available for download at http://sourceforge.net/projects/ribomaker/. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
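The evolutionary strategy described above can be caricatured as a mutate-and-select loop. The sketch below uses a placeholder scoring function in place of RiboMaker's free-energy-based physicochemical model, so it illustrates only the search scheme, not the actual objective.

```python
import random

BASES = "ACGU"

def mutate(seq):
    """Apply one random point mutation to an RNA sequence."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(BASES.replace(seq[i], "")) + seq[i + 1:]

def evolve(seq, score, steps=1000):
    """Toy evolutionary design loop: propose a random mutation and keep it
    only if the (placeholder) pseudo-energy score improves."""
    best, best_score = seq, score(seq)
    for _ in range(steps):
        cand = mutate(best)
        s = score(cand)
        if s < best_score:            # lower pseudo-energy is better
            best, best_score = cand, s
    return best, best_score

# Placeholder objective (hypothetical): reward G-C complementarity between
# mirrored positions, standing in for an intermolecular free-energy model.
toy_score = lambda s: -sum(1 for a, b in zip(s, reversed(s)) if a + b in ("GC", "CG"))
print(evolve("AUGCUAGCUAGGCAU", toy_score, steps=500))
```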
ProteMiner-SSM: a web server for efficient analysis of similar protein tertiary substructures.
Chang, Darby Tien-Hau; Chen, Chien-Yu; Chung, Wen-Chin; Oyang, Yen-Jen; Juan, Hsueh-Fen; Huang, Hsuan-Cheng
2004-07-01
Analysis of protein-ligand interactions is a fundamental issue in drug design. As the detailed and accurate analysis of protein-ligand interactions involves calculation of binding free energy based on thermodynamics and even quantum mechanics, which is highly expensive in terms of computing time, conformational and structural analysis of proteins and ligands has been widely employed as a screening process in computer-aided drug design. In this paper, a web server called ProteMiner-SSM designed for efficient analysis of similar protein tertiary substructures is presented. In one experiment reported in this paper, the web server has been exploited to obtain some clues about a biochemical hypothesis. The main distinction in the software design of the web server is the filtering process incorporated to expedite the analysis. The filtering process extracts the residues located in the caves of the protein tertiary structure for analysis and operates with O(n log n) time complexity, where n is the number of residues in the protein. In comparison, the alpha-hull algorithm, which is a widely used algorithm in computer graphics for identifying those instances that are on the contour of a three-dimensional object, features O(n^2) time complexity. Experimental results show that the filtering process presented in this paper is able to speed up the analysis by a factor ranging from 3.15 to 9.37 times. The ProteMiner-SSM web server can be found at http://proteminer.csie.ntu.edu.tw/. There is a mirror site at http://p4.sbl.bc.sinica.edu.tw/proteminer/.
Designing and Implementation of River Classification Assistant Management System
NASA Astrophysics Data System (ADS)
Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan
2018-03-01
In an earlier publication, we proposed a new Decision Classifier (DCF) for Chinese river classification based on river structure. To expand, enhance and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Based on the ArcEngine and ArcServer platforms, this system implements functions such as data management, extraction of the river network, river classification and publication of results, using a combined Client/Server and Browser/Server framework.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... Denver, Colorado. Communication Manager is designed to run on a variety of Linux-based media servers.... Some servers are in the form of blades. These are cards (similar to printed circuit cards with...
Identifying patients for clinical trials using fuzzy ternary logic expressions on HL7 messages.
Majeed, Raphael W; Röhrig, Rainer
2011-01-01
Identifying eligible patients is one of the most critical parts of any clinical trial. The process of recruiting patients for the third phase of any clinical trial is usually done manually, informing relevant physicians or putting notes on bulletin boards. While most necessary information is already available in electronic hospital information systems, required data still has to be looked up individually. Most university hospitals make use of a dedicated communication server to distribute information from independent information systems, e.g. laboratory information systems, electronic health records, surgery planning systems. Thus, a theoretical model is developed to formally describe inclusion and exclusion criteria for each clinical trial using a fuzzy ternary logic expression. These expressions will then be used to process HL7 messages from a communication server in order to identify eligible patients.
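One common way to realize a three-valued (true/false/unknown) criterion logic is Kleene logic. The sketch below is an illustrative interpretation of how eligibility expressions might be evaluated while some HL7-derived facts are still missing; it is not the authors' implementation, and the criteria shown are hypothetical.

```python
UNKNOWN = None   # third truth value: fact not (yet) seen in the HL7 message stream

def t_and(a, b):
    """Kleene three-valued AND: False dominates, Unknown propagates."""
    if a is False or b is False:
        return False
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return True

def t_or(a, b):
    """Kleene three-valued OR: True dominates, Unknown propagates."""
    if a is True or b is True:
        return True
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return False

def t_not(a):
    return UNKNOWN if a is UNKNOWN else (not a)

# Hypothetical criteria for one patient, filled in as HL7 messages arrive:
age_ok        = True      # e.g. derived from an ADT message
creatinine_ok = UNKNOWN   # lab result not received yet
pregnant      = False     # from another segment

eligible = t_and(t_and(age_ok, creatinine_ok), t_not(pregnant))
print(eligible)   # -> None: the patient is neither included nor excluded yet
```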
Senter, Evan; Sheikh, Saad; Dotu, Ivan; Ponty, Yann; Clote, Peter
2012-01-01
Using complex roots of unity and the Fast Fourier Transform, we design a new thermodynamics-based algorithm, FFTbor, that computes the Boltzmann probability that secondary structures differ by a given number of base pairs from an arbitrary initial structure of a given RNA sequence. The algorithm, which runs in quartic time and quadratic space, is used to determine the correlation between kinetic folding speed and the ruggedness of the energy landscape, and to predict the location of riboswitch expression platform candidates. A web server is available at http://bioinformatics.bc.edu/clotelab/FFTbor/. PMID:23284639
A Fast Healthcare Interoperability Resources (FHIR) layer implemented over i2b2.
Boussadi, Abdelali; Zapletal, Eric
2017-08-14
Standards and technical specifications have been developed to define how the information contained in Electronic Health Records (EHRs) should be structured, semantically described, and communicated. Current trends rely on differentiating the representation of data instances from the definition of clinical information models. The dual model approach, which combines a reference model (RM) and a clinical information model (CIM), sets in practice this software design pattern. The most recent initiative, proposed by HL7, is called Fast Health Interoperability Resources (FHIR). The aim of our study was to investigate the feasibility of applying the FHIR standard to modeling and exposing EHR data of the Georges Pompidou European Hospital (HEGP) integrating biology and the bedside (i2b2) clinical data warehouse (CDW). We implemented a FHIR server over i2b2 to expose EHR data in relation with five FHIR resources: DiagnosisReport, MedicationOrder, Patient, Encounter, and Medication. The architecture of the server combines a Data Access Object design pattern and FHIR resource providers, implemented using the Java HAPI FHIR API. Two types of queries were tested: query type #1 requests the server to display DiagnosticReport resources, for which the diagnosis code is equal to a given ICD-10 code. A total of 80 DiagnosticReport resources, corresponding to 36 patients, were displayed. Query type #2, requests the server to display MedicationOrder, for which the FHIR Medication identification code is equal to a given code expressed in a French coding system. A total of 503 MedicationOrder resources, corresponding to 290 patients, were displayed. Results were validated by manually comparing the results of each request to the results displayed by an ad-hoc SQL query. We showed the feasibility of implementing a Java layer over the i2b2 database model to expose data of the CDW as a set of FHIR resources. An important part of this work was the structural and semantic mapping between the i2b2 model and the FHIR RM. To accomplish this, developers must manually browse the specifications of the FHIR standard. Our source code is freely available and can be adapted for use in other i2b2 sites.
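For readers unfamiliar with FHIR search, "query type #1" corresponds to a standard RESTful search on the DiagnosticReport resource. The sketch below assumes a hypothetical server base URL and uses the standard ICD-10 system URI; it is illustrative only and is not the HEGP deployment or the HAPI FHIR server code.

```python
import requests

BASE = "https://fhir.example.org/base"    # hypothetical base URL of an i2b2-backed FHIR layer
ICD10_SYSTEM = "http://hl7.org/fhir/sid/icd-10"

def diagnostic_reports_for(icd10_code):
    """Search a FHIR server for DiagnosticReport resources whose code matches
    the given ICD-10 code, in the spirit of 'query type #1' above."""
    params = {"code": f"{ICD10_SYSTEM}|{icd10_code}", "_count": 50}
    bundle = requests.get(f"{BASE}/DiagnosticReport", params=params, timeout=30).json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Example (hypothetical code):
# for report in diagnostic_reports_for("C34.9"):
#     print(report["id"], report.get("subject", {}).get("reference"))
```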
MOD Tool (Microwave Optics Design Tool)
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Borgioli, Andrea; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.
1999-01-01
The Jet Propulsion Laboratory (JPL) is currently designing and building a number of instruments that operate in the microwave and millimeter-wave bands. These include MIRO (Microwave Instrument for the Rosetta Orbiter), MLS (Microwave Limb Sounder), and IMAS (Integrated Multispectral Atmospheric Sounder). These instruments must be designed and built to meet key design criteria (e.g., beamwidth, gain, pointing) obtained from the scientific goals for the instrument. These criteria are frequently functions of the operating environment (both thermal and mechanical). To design and build instruments which meet these criteria, it is essential to be able to model the instrument in its environments. Currently, a number of modeling tools exist. Commonly used tools at JPL include: FEMAP (meshing), NASTRAN (structural modeling), TRASYS and SINDA (thermal modeling), MACOS/IMOS (optical modeling), and POPO (physical optics modeling). Each of these tools is used by an analyst, who models the instrument in one discipline. The analyst then provides the results of this modeling to another analyst, who continues the overall modeling in another discipline. There is a large reengineering task in place at JPL to automate and speed up the structural and thermal modeling disciplines, which does not include MOD Tool. The focus of MOD Tool (and of this paper) is on the fields unique to microwave and millimeter-wave instrument design. These include initial design and analysis of the instrument without thermal or structural loads, the automation of the transfer of this design to a high-end CAD tool, and the analysis of the structurally deformed instrument (due to structural and/or thermal loads). MOD Tool is a distributed tool, with a database of design information residing on a server, physical optics analysis being performed on a variety of supercomputer platforms, and a graphical user interface (GUI) residing on the user's desktop computer. The MOD Tool client is being developed using Tcl/Tk, which allows the user to work on a choice of platforms (PC, Mac, or Unix) after downloading the Tcl/Tk binary, which is readily available on the web. The MOD Tool server is written using Expect, and it resides on a Sun workstation. Client/server communications are performed over a socket; upon a connection from a client to the server, the server spawns a child process that is dedicated to communicating with that client. The server communicates with other machines, such as supercomputers, using Expect, with the username and password provided by the user via the client.
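The fork-per-client pattern described for the Expect server can be sketched in a few lines; the example below uses Python's socketserver rather than Expect, and the port and echo-style protocol are illustrative assumptions.

```python
# Minimal sketch of the fork-per-client socket server pattern (POSIX only).
# The port and the trivial "ack" protocol are illustrative.
import socketserver

class ClientHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each connected client is served by its own child process.
        for line in self.rfile:
            self.wfile.write(b"ack: " + line)

if __name__ == "__main__":
    # ForkingTCPServer forks a child for every accepted connection.
    with socketserver.ForkingTCPServer(("0.0.0.0", 5050), ClientHandler) as srv:
        srv.serve_forever()
```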
WEB-server for search of a periodicity in amino acid and nucleotide sequences
NASA Astrophysics Data System (ADS)
E Frenkel, F.; Skryabin, K. G.; Korotkov, E. V.
2017-12-01
A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The server is based on a new mathematical method for searching for multiple alignments, founded on the optimization of position weight matrices and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of weakly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per amino acid or nucleotide position, without performing pairwise sequence comparisons. The article describes the principles of the web server operation, two examples of analyzing amino acid and nucleotide sequences, and the information that can be obtained using the web server.
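As a rough illustration of scoring periodic sequence positions against a position weight matrix, consider the sketch below; the matrix values, toy sequence, and scoring scheme are made up, and the server's actual matrix optimization and two-dimensional dynamic programming are not reproduced.

```python
# Illustrative log-odds scoring of sequence windows against a toy PWM.
import math

ALPHABET = "ACGT"
BACKGROUND = {b: 0.25 for b in ALPHABET}

# Toy PWM for a period-4 pattern: pwm[i][base] = probability of base at position i.
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]

def log_odds(window):
    """Sum of per-position log-odds scores of a window against the PWM."""
    return sum(math.log(pwm[i][b] / BACKGROUND[b]) for i, b in enumerate(window))

seq = "ACGTACGTTTTT"
period = len(pwm)
scores = [log_odds(seq[i:i + period]) for i in range(0, len(seq) - period + 1, period)]
print(scores)  # high scores where the periodic pattern is present
```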
Remote gaming on resource-constrained devices
NASA Astrophysics Data System (ADS)
Reza, Waazim; Kalva, Hari; Kaufman, Richard
2010-08-01
Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.
Development of a Mobile User Interface for Image-based Dietary Assessment
Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.
2011-01-01
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755
Carroll, A E; Saluja, S; Tarczy-Hornoch, P
2001-01-01
Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.
VizPrimer: a web server for visualized PCR primer design based on known gene structure.
Zhou, Yang; Qu, Wubin; Lu, Yiming; Zhang, Yanchun; Wang, Xiaolei; Zhao, Dongsheng; Yang, Yi; Zhang, Chenggang
2011-12-15
The visualization of gene structure plays an important role in polymerase chain reaction (PCR) primer design, especially for eukaryotic genes with a number of splice variants that users need to distinguish between via PCR. Here, we describe a visualized web server for primer design named VizPrimer. It utilizes HTML5 to display gene structure and JavaScript to interact with the users. In VizPrimer, the users can focus their attention on the gene structure and primer design strategy, without wasting time calculating the exon positions of splice variants or manually configuring complicated parameters. In addition, VizPrimer is also suitable for the design of PCR primers for amplifying open reading frames and detecting single nucleotide polymorphisms (SNPs). VizPrimer is freely available at http://biocompute.bmi.ac.cn/CZlab/VizPrimer/. Supported browsers: Chrome (≥5.0), Firefox (≥3.0), Safari (≥4.0) and Opera (≥10.0). Contact: zhangcg@bmi.ac.cn; yangyi528@vip.sina.com.
DOT National Transportation Integrated Search
1987-05-01
This report describes a program of server education designed to foster the responsible service of alcohol in bars, restaurants, and other on-sale establishments. The program is administered in two phases. The first phase, three hours in length, is in...
Designing a scalable video-on-demand server with data sharing
NASA Astrophysics Data System (ADS)
Lim, Hyeran; Du, David H.
2000-12-01
As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial configuration, successfully placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, along with an indicator of how tolerant it is to fluctuations in video demand. Since the problem is NP-hard, our algorithm generates the final configuration in O(M log M) time at best, where M is the number of movies.
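One way to make the placement problem concrete is a simple greedy heuristic: place copies of the most-demanded videos first, always on the disk with the most remaining bandwidth. This is not the authors' algorithm, only a hedged sketch; the video list and disk capacities are illustrative.

```python
# Greedy video placement sketch: most-demanded videos first, each copy on the
# disk with the most remaining bandwidth. Inputs are illustrative.
import heapq

def place(videos, disks):
    """videos: list of (name, copies, bandwidth_per_copy); disks: list of capacities."""
    # Min-heap keyed by negative free capacity -> pops the least-loaded disk first.
    heap = [(-cap, i) for i, cap in enumerate(disks)]
    heapq.heapify(heap)
    layout = {i: [] for i in range(len(disks))}
    for name, copies, bw in sorted(videos, key=lambda v: -v[1] * v[2]):
        for _ in range(copies):
            free, i = heapq.heappop(heap)
            if -free < bw:
                raise RuntimeError("not enough capacity: add servers/disks")
            layout[i].append(name)
            heapq.heappush(heap, (free + bw, i))  # reduce that disk's free bandwidth
    return layout

print(place([("news", 2, 4), ("movie", 1, 6)], disks=[10, 10]))
```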
Zebra: A striped network file system
NASA Technical Reports Server (NTRS)
Hartman, John H.; Ousterhout, John K.
1992-01-01
The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
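The parity idea can be shown in a few lines of code: the parity fragment is the bytewise XOR of the data fragments, so any single lost fragment can be rebuilt from the survivors. The fragment contents below are illustrative; Zebra's actual fragment format is not reproduced.

```python
# Bytewise XOR parity over equal-length fragments, RAID/Zebra style.
def xor_fragments(fragments):
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            out[i] ^= byte
    return bytes(out)

data = [b"stripe-frag-0!!", b"stripe-frag-1!!", b"stripe-frag-2!!"]
parity = xor_fragments(data)

# Simulate losing fragment 1 on a failed server and reconstructing it.
rebuilt = xor_fragments([data[0], data[2], parity])
assert rebuilt == data[1]
print("reconstructed:", rebuilt)
```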
MCTBI: a web server for predicting metal ion effects in RNA structures.
Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie
2017-08-01
Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects for RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects for nucleic acid structures and can provide improved predictions for the effects of metal ions, especially multivalent ions such as Mg2+, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures. © 2017 Sun et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
DICOM-compliant PACS with CD-based image archival
NASA Astrophysics Data System (ADS)
Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.
1998-07-01
This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL-patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
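The independent monitoring process mentioned above can be approximated with a short script that checks disk usage and e-mails the image management team when a threshold is exceeded; the paths, addresses, and threshold below are placeholder assumptions, not the authors' implementation.

```python
# Sketch of a storage monitor: report nearly-full volumes by e-mail.
# Paths, addresses, SMTP host, and threshold are placeholders.
import shutil
import smtplib
from email.message import EmailMessage

THRESHOLD = 0.90                      # alert above 90% full (illustrative)
PATHS = ["/data/shortterm", "/data/archive"]

def check_and_report():
    alerts = []
    for path in PATHS:
        usage = shutil.disk_usage(path)
        frac = usage.used / usage.total
        if frac > THRESHOLD:
            alerts.append(f"{path}: {frac:.0%} full")
    if alerts:
        msg = EmailMessage()
        msg["Subject"] = "PACS storage alert"
        msg["From"] = "pacs-monitor@example.org"      # placeholder
        msg["To"] = "image-management@example.org"    # placeholder
        msg.set_content("\n".join(alerts))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

if __name__ == "__main__":
    check_and_report()
```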
Design of Accelerator Online Simulator Server Using Structured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Guobao; /Brookhaven; Chu, Chungming
2012-07-06
Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.
Impact of malicious servers over trust and reputation models in wireless sensor networks
NASA Astrophysics Data System (ADS)
Verma, Vinod Kumar; Singh, Surinder; Pathak, N. P.
2016-03-01
This article deals with the impact of malicious servers on different trust and reputation models in wireless sensor networks. First, we analysed five trust and reputation models, namely BTRM-WSN, EigenTrust, PeerTrust, PowerTrust, and the linguistic fuzzy trust model. Further, we proposed a wireless sensor network design for the optimisation of these models. Finally, the influence of malicious servers on the behaviour of the above-mentioned trust and reputation models is discussed. Statistical analysis has been carried out to prove the validity of our proposal.
Cole, Curtis L; Kanter, Andrew S; Cummens, Michael; Vostinar, Sean; Naeymi-Rad, Frank
2004-01-01
To design and implement a real-world application using a terminology server to assist patients and physicians who use common-language search terms to find specialist physicians with a particular clinical expertise. Terminology servers have been developed to help users encode information using complicated structured vocabularies during data entry tasks, such as recording clinical information. We describe a methodology using Personal Health Terminology (trademark) and a SNOMED CT-based hierarchical concept server, and the construction of a pilot mediated-search engine to assist users who use vernacular terms to query data that is more technical than vernacular. This approach, which combines theoretical and practical requirements, provides a useful example of concept-based searching for physician referrals.
Implementation experience of a patient monitoring solution based on end-to-end standards.
Martinez, I; Fernandez, J; Galarraga, M; Serrano, L; de Toledo, P; Escayola, J; Jimenez-Fernandez, S; Led, S; Martinez-Espronceda, M; Garcia, J
2007-01-01
This paper presents a proof-of-concept design of a patient monitoring solution for Intensive Care Unit (ICU). It is end-to-end standards-based, using ISO/IEEE 11073 (X73) in the bedside environment and EN13606 to communicate the information to an Electronic Healthcare Record (EHR) server. At the bedside end a plug-and-play sensor network is implemented, which communicates with a gateway that collects the medical information and sends it to a monitoring server. At this point the server transforms the data frame into an EN13606 extract, to be stored on the EHR server. The presented system has been tested in a laboratory environment to demonstrate the feasibility of this end-to-end standards-based solution.
Intellectual Production Supervision Perform based on RFID Smart Electricity Meter
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng
2018-03-01
This paper presents the development of a production supervision and project management system for RFID smart electricity meters. The system is designed to meet the schedule, quality, and cost information management requirements of supervising RFID smart meter production, to provide supervision engineers and project managers with more comprehensive, timely, and accurate quantitative information for management decisions, and to provide technical documentation for the product manufacturing stage. The development of the system is discussed from the perspectives of scheme analysis, design, implementation, and testing. Combined with the main business applications and management modes at the current stage, the paper focuses on the system functions for monitoring progress, quality, and cost information of energy meters based on RFID. It introduces the design of the system and its overall client/server architecture: a general-purpose graphical client for interactive display of supervision and project management information, and a server that implements the main application logic. The system is programmed in C# on the .NET runtime; both the client and server platforms use the Windows operating system, and the database server software uses Oracle. The overall platform supports mainstream information standards and has good scalability.
IPG Job Manager v2.0 Design Documentation
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2003-01-01
This viewgraph presentation provides a high-level design of the IPG Job Manager, and satisfies its Master Requirement Specification v2.0 Revision 1.0, 01/29/2003. The presentation includes a Software Architecture/Functional Overview with the following: Job Model; Job Manager Client/Server Architecture; Job Manager Client (Job Manager Client Class Diagram and Job Manager Client Activity Diagram); Job Manager Server (Job Manager Client Class Diagram and Job Manager Client Activity Diagram); Development Environment; Project Plan; Requirement Traceability.
Computational Prediction of the Immunomodulatory Potential of RNA Sequences.
Nagpal, Gandharva; Chaudhary, Kumardeep; Dhanda, Sandeep Kumar; Raghava, Gajendra Pal Singh
2017-01-01
Advances in the knowledge of various roles played by non-coding RNAs have stimulated the application of RNA molecules as therapeutics. Among these molecules, miRNA, siRNA, and CRISPR-Cas9 associated gRNA have been identified as the most potent RNA molecule classes with diverse therapeutic applications. One of the major limitations of RNA-based therapeutics is immunotoxicity of RNA molecules as it may induce the innate immune system. In contrast, RNA molecules that are potent immunostimulators are strong candidates for use in vaccine adjuvants. Thus, it is important to understand the immunotoxic or immunostimulatory potential of these RNA molecules. The experimental techniques for determining immunostimulatory potential of siRNAs are time- and resource-consuming. To overcome this limitation, recently our group has developed a web-based server "imRNA" for predicting the immunomodulatory potential of RNA sequences. This server integrates a number of modules that allow users to perform various tasks including (1) generation of RNA analogs with reduced immunotoxicity, (2) identification of highly immunostimulatory regions in RNA sequence, and (3) virtual screening. This server may also assist users in the identification of minimum mutations required in a given RNA sequence to minimize its immunomodulatory potential that is required for designing RNA-based therapeutics. Besides, the server can be used for designing RNA-based vaccine adjuvants as it may assist users in the identification of mutations required for increasing immunomodulatory potential of a given RNA sequence. In summary, this chapter describes major applications of the "imRNA" server in designing RNA-based therapeutics and vaccine adjuvants (http://www.imtech.res.in/raghava/imrna/).
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
Abnormal Web Usage Control by Proxy Strategies.
ERIC Educational Resources Information Center
Yu, Hsiang-Fu; Tseng, Li-Ming
2002-01-01
Approaches to designing a proxy server with Web usage control and to making the proxy server effective on local area networks are proposed to prevent abnormal Web access and to prioritize Web usage. A system is implemented to demonstrate the approaches. The implementation reveals that the proposed approaches are effective, such that the abnormal…
Understanding Customer Dissatisfaction with Underutilized Distributed File Servers
NASA Technical Reports Server (NTRS)
Riedel, Erik; Gibson, Garth
1996-01-01
An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to improve user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
The HydroServer Platform for Sharing Hydrologic Data
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and at the HydroServer CodePlex site, http://hydroserver.codeplex.com.
Nationwide SIP Telephony Network Design to Prevent Congestion Caused by Disaster
NASA Astrophysics Data System (ADS)
Satoh, Daisuke; Ashitagawa, Kyoko
We present a session initiation protocol (SIP) network design for a voice-over-IP network to prevent congestion caused by people calling friends and family after a disaster. The design increases the capacity of SIP servers in a network by using all of the SIP servers equally. It takes advantage of the fact that equipment for voice data packets is different from equipment for signaling packets in SIP networks. Furthermore, the design achieves simple routing on the basis of telephone numbers. We evaluated the performance of our design in preventing congestion through simulation. We showed that the proposed design has roughly 20 times the capacity of the conventional design, corresponding to 57 times the normal load, for a disaster such as the 2004 Chuetsu earthquake that struck Niigata Prefecture.
Compound toxicity screening and structure-activity relationship modeling in Escherichia coli.
Planson, Anne-Gaëlle; Carbonell, Pablo; Paillard, Elodie; Pollet, Nicolas; Faulon, Jean-Loup
2012-03-01
Synthetic biology and metabolic engineering are used to develop new strategies for producing valuable compounds ranging from therapeutics to biofuels in engineered microorganisms. When developing methods for high-titer production cells, toxicity is an important element to consider. Indeed the production rate can be limited due to toxic intermediates or accumulation of byproducts of the heterologous biosynthetic pathway of interest. Conversely, highly toxic molecules are desired when designing antimicrobials. Compound toxicity in bacteria plays a major role in metabolic engineering as well as in the development of new antibacterial agents. Here, we screened a diversified chemical library of 166 compounds for toxicity in Escherichia coli. The dataset was built using a clustering algorithm maximizing the chemical diversity in the library. The resulting assay data was used to develop a toxicity predictor that we used to assess the toxicity of metabolites throughout the metabolome. This new tool for predicting toxicity can thus be used for fine-tuning heterologous expression and can be integrated in a computational-framework for metabolic pathway design. Many structure-activity relationship tools have been developed for toxicology studies in eukaryotes [Valerio (2009), Toxicol Appl Pharmacol, 241(3): 356-370], however, to the best of our knowledge we present here the first E. coli toxicity prediction web server based on QSAR models (EcoliTox server: http://www.issb.genopole.fr/∼faulon/EcoliTox.php). Copyright © 2011 Wiley Periodicals, Inc.
Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, AD; Page, Christina; Lytle, Bob
The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free cooling" for large data center facilities. "Free cooling" is the direct usage of outside air to cool the servers vs. traditional "mechanical cooling," which is supplied by chillers or other DX units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
Assessment of Risk Communication about Undercooked Hamburgers by Restaurant Servers.
Thomas, Ellen M; Binder, Andrew R; McLaughlin, Anne; Jaykus, Lee-Ann; Hanson, Dana; Powell, Douglas; Chapman, Benjamin
2016-12-01
According to the U.S. Food and Drug Administration 2013 Model Food Code, it is the duty of a food establishment to disclose and remind consumers of risk when ordering undercooked food such as ground beef. The purpose of this study was to explore actual risk communication behaviors of food establishment servers. Secret shoppers visited 265 restaurants in seven geographic locations across the United States, ordered medium rare burgers, and collected and coded risk information from chain and independent restaurant menus and from server responses. The majority of servers reported an unreliable method of doneness (77%) or other incorrect information (66%) related to burger doneness and safety. These results indicate major gaps in server knowledge and risk communication, and the current risk communication language in the Model Food Code does not sufficiently fill these gaps. The question is "should servers even be acting as risk communicators?" There are numerous challenges associated with this practice, including high turnover rates, limited education, and the high stress environment based on pleasing a customer. If servers are designated as risk communicators, food establishment staff should be adequately trained and provided with consumer advisory messages that are accurate, audience appropriate, and delivered in a professional manner so that customers can make informed food safety decisions.
The design of a petabyte archive and distribution system for the NASA ECS project
NASA Technical Reports Server (NTRS)
Caulk, Parris M.
1994-01-01
The NASA EOS Data and Information System (EOSDIS) Core System (ECS) will contain one of the largest data management systems ever built - the ECS Science and Data Processing System (SDPS). SDPS is designed to support long-term Global Change Research by acquiring, producing, and storing earth science data, and by providing efficient means for accessing and manipulating that data. The first two releases of SDPS, Release A and Release B, will be operational in 1997 and 1998, respectively. Release B will be deployed at eight Distributed Active Archive Centers (DAACs). Individual DAACs will archive different collections of earth science data, and will vary in archive capacity. The storage and management of these data collections is the responsibility of the SDPS Data Server subsystem. It is anticipated that by the year 2001, the Data Server subsystem at the Goddard DAAC must support a near-line data storage capacity of one petabyte. The development of SDPS is a system integration effort in which COTS products will be used in favor of custom components in every possible way. Some software and hardware capabilities required to meet ECS data volume and storage management requirements beyond 1999 are not yet supported by available COTS products. The ECS project will not undertake major custom development efforts to provide these capabilities. Instead, SDPS and its Data Server subsystem are designed to support initial implementations with current products, and provide an evolutionary framework that facilitates the introduction of advanced COTS products as they become available. This paper provides a high-level description of the Data Server subsystem design from a COTS integration standpoint, and discusses some of the major issues driving the design. The paper focuses on features of the design that will make the system scalable and adaptable to changing technologies.
Using NetCloak to develop server-side Web-based experiments without writing CGI programs.
Wolfe, Christopher R; Reyna, Valerie F
2002-05-01
Server-side experiments use the Web server, rather than the participant's browser, to handle tasks such as random assignment, eliminating inconsistencies with JAVA and other client-side applications. Heretofore, experimenters wishing to create server-side experiments have had to write programs to create common gateway interface (CGI) scripts in programming languages such as Perl and C++. NetCloak uses simple, HTML-like commands to create CGIs. We used NetCloak to implement an experiment on probability estimation. Measurements of time on task and participants' IP addresses assisted quality control. Without prior training, in less than 1 month, we were able to use NetCloak to design and create a Web-based experiment and to help graduate students create three Web-based experiments of their own.
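Server-side random assignment, the kind of task NetCloak's CGI-like commands take over, can be illustrated with a minimal HTTP handler; the sketch below is plain Python rather than NetCloak, and the condition names are invented for the example.

```python
# Generic sketch of server-side random assignment (not NetCloak itself).
# Condition names and port are illustrative.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

CONDITIONS = ["frequency-format", "probability-format"]   # illustrative

class AssignHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        condition = random.choice(CONDITIONS)   # assignment happens on the server
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<p>You are in condition: {condition}</p>".encode())

if __name__ == "__main__":
    HTTPServer(("", 8000), AssignHandler).serve_forever()
```

Because the assignment happens before the page reaches the browser, it does not depend on the participant's JAVA or JavaScript configuration, which is the consistency argument made above.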
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
PEM public key certificate cache server
NASA Astrophysics Data System (ADS)
Cheung, T.
1993-12-01
Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain/store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library that is included with the ELROS distribution also has to be linked in, since the server uses the DAP library functions to communicate with directory servers.
Volume serving and media management in a networked, distributed client/server environment
NASA Technical Reports Server (NTRS)
Herring, Ralph H.; Tefend, Linda L.
1993-01-01
The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.
A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
ERIC Educational Resources Information Center
Mui, Amy B.; Nelson, Sarah; Huang, Bruce; He, Yuhong; Wilson, Kathi
2015-01-01
This paper describes a web-enabled learning platform providing remote access to geospatial software that extends the learning experience outside of the laboratory setting. The platform was piloted in two undergraduate courses, and includes a software server, a data server, and remote student users. The platform was designed to improve the quality…
PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices
ERIC Educational Resources Information Center
Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões
2013-01-01
This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…
Choi, Younsung; Nam, Junghyun; Lee, Donghoon; Kim, Jiye; Jung, Jaewook; Won, Dongho
2014-01-01
An anonymous user authentication scheme allows a user, who wants to access a remote application server, to achieve mutual authentication and session key establishment with the server in an anonymous manner. To enhance the security of such authentication schemes, recent research has combined users' biometrics with passwords. However, these authentication schemes are designed for single-server environments, so when a user wants to access different application servers, the user has to register many times. To solve this problem, Chuang and Chen proposed an anonymous multiserver authenticated key agreement scheme using smart cards together with passwords and biometrics. Chuang and Chen claimed that their scheme not only supports multiple servers but also achieves various security requirements. However, we show that this scheme is vulnerable to a masquerade attack, a smart card attack, a user impersonation attack, and a DoS attack, and does not achieve perfect forward secrecy. We also propose a security-enhanced anonymous multiserver authenticated key agreement scheme which addresses all the weaknesses identified in Chuang and Chen's scheme.
Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho
2015-01-01
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.
A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services
NASA Astrophysics Data System (ADS)
Cho, Kenjiro; Birman, Kenneth P.
1994-05-01
This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we give an evaluation of two Web-based image display systems we developed. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client. The PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program, or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
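A thin client's image fetch through a WADO interface can be sketched as a single HTTP request in the WADO-URI style; the endpoint and UIDs below are placeholders, and the systems evaluated in the paper may use different parameters or a JPIP stream instead.

```python
# Hedged sketch of fetching a DICOM object rendered as JPEG via a WADO-URI
# request (DICOM PS3.18). Endpoint and UIDs are placeholders.
import requests

WADO = "https://pacs.example.org/wado"     # placeholder PACS Web server
params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.999.1",           # placeholder UIDs
    "seriesUID": "1.2.840.999.1.2",
    "objectUID": "1.2.840.999.1.2.3",
    "contentType": "image/jpeg",
}

resp = requests.get(WADO, params=params, timeout=30)
resp.raise_for_status()
with open("slice.jpg", "wb") as fh:
    fh.write(resp.content)                 # JPEG rendering of the DICOM object
```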
SAbPred: a structure-based antibody prediction server
Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.
2016-01-01
SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379
A visualization environment for supercomputing-based applications in computational mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.
1993-06-01
In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.
MetaRanker 2.0: a web server for prioritization of genetic variation data
Pers, Tune H.; Dworzyński, Piotr; Thomas, Cecilia Engel; Lage, Kasper; Brunak, Søren
2013-01-01
MetaRanker 2.0 is a web server for prioritization of common and rare frequency genetic variation data. Based on heterogeneous data sets including genetic association data, protein–protein interactions, large-scale text-mining data, copy number variation data and gene expression experiments, MetaRanker 2.0 prioritizes the protein-coding part of the human genome to shortlist candidate genes for targeted follow-up studies. MetaRanker 2.0 is made freely available at www.cbs.dtu.dk/services/MetaRanker-2.0. PMID:23703204
Defense strategies for cloud computing multi-site server infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.
DIANA-microT web server v5.0: service integration into miRNA functional analysis workflows.
Paraskevopoulou, Maria D; Georgakilas, Georgios; Kostoulas, Nikos; Vlachos, Ioannis S; Vergoulis, Thanasis; Reczko, Martin; Filippidis, Christos; Dalamagas, Theodore; Hatzigeorgiou, A G
2013-07-01
MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. The DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it has been widely used by the scientific community since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA-gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines.
incaRNAfbinv: a web server for the fragment-based design of RNA sequences
Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny
2016-01-01
Abstract In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there has been considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to date that performs a fragment-based design of RNA sequences. In doing so, it allows the design of sequences that do not necessarily fold exactly into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; and optional targets for minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel naturally occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893
2009-01-01
Background The majority of genes, even in well-studied multi-cellular model organisms, have not yet been functionally characterized. Mining the numerous genome-wide data sets related to protein function to retrieve potential candidate genes for a particular biological process remains a challenge. Description GExplore has been developed to provide a user-friendly database interface for data mining at the gene expression/protein function level to help in hypothesis development and experiment design. It supports combinatorial searches for proteins with certain domains, tissue- or developmental stage-specific expression patterns, and mutant phenotypes. GExplore operates on a stand-alone database and has fast response times, which is essential for exploratory searches. The interface is not only user-friendly, but also modular, so that it can accommodate additional data sets in the future. Conclusion GExplore is an online database for quick mining of data related to gene and protein function, providing a multi-gene display of data sets related to the domain composition of proteins as well as expression and phenotype data. GExplore is publicly available at: http://genome.sfu.ca/gexplore/ PMID:19917126
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
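The abstract above describes the wmt-db layer returning component metadata as JSON over a web service API. A minimal sketch of what querying such a service could look like from Python is shown below; the base URL, endpoint path, and JSON field names ("summary", "parameters") are illustrative assumptions, not the documented CSDMS interface.

```python
# Hypothetical sketch: querying a WMT-style metadata service for component
# descriptions. The endpoint path and JSON fields are illustrative assumptions,
# not the documented CSDMS API.
import json
import urllib.request

BASE_URL = "https://csdms.example.edu/wmt-db"  # placeholder host

def get_component(name):
    """Fetch JSON metadata for one model component."""
    with urllib.request.urlopen(f"{BASE_URL}/components/{name}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    meta = get_component("cem")  # e.g. a coastline evolution component (assumed name)
    print(meta["summary"])
    for p in meta.get("parameters", []):
        print(p["key"], p.get("value"), p.get("units"))
```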
Development and process evaluation of a web-based responsible beverage service training program
2012-01-01
Background Responsible beverage service (RBS) training designed to improve the appropriate service of alcohol in commercial establishments is typically delivered in workshops. Recently, Web-based RBS training programs have emerged. This report describes the formative development and subsequent design of an innovative Web-delivered RBS program, and evaluation of the impact of the program on servers' knowledge, attitudes, and self-efficacy. Methods Formative procedures using focus groups and usability testing were used to develop a Web-based RBS training program. Professional alcohol servers (N = 112) who worked as servers and/or managers in alcohol service settings were recruited to participate. A pre-post assessment design was used to assess changes associated with using the program. Results Participants who used the program showed significant improvements in their RBS knowledge, attitudes, and self-efficacy. Conclusions Although the current study did not directly observe and determine the impact of the intervention on server behaviors, it demonstrated that the development process, incorporating input from a multidisciplinary team in conjunction with feedback from end-users, resulted in the creation of a Web-based RBS program that was well-received by servers and that changed relevant knowledge, attitudes, and self-efficacy. The results also help to establish a needed evidence base in support of the use of online RBS training, which has been afforded little research attention. PMID:22999419
Ku, Hao-Hsiang
2015-01-01
Nowadays, people can easily use a smartphone to obtain the information and services they want. Hence, this study designs and proposes GoSIDE, a Golf Swing Injury Detection and Evaluation open service platform with an ontology-oriented clustering case-based reasoning mechanism, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of mobile users, application servers and a cloud-based Digital Convergence Server. A mobile user carries a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user when erroneous actions are detected. The cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to provide QoE services through QoE-based ontology strategies, rules and events for the user. Furthermore, GoSIDE automatically triggers OCC4QoE and delivers popular rules for a new user. Experimental results illustrate that GoSIDE provides appropriate detection for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers.
Experiment Management System for the SND Detector
NASA Astrophysics Data System (ADS)
Pugachev, K.
2017-10-01
We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part of the system is access to the experimental databases (configuration, conditions and metadata). The system follows a client-server architecture, and users interact with it through a web interface. The server side includes several logical layers: user interface templates; template variable description and initialization; and implementation details. The templates are designed to require as little IT knowledge as possible. Experiment configuration, conditions and metadata are stored in a database. Node.js, a modern JavaScript framework, was chosen to implement the server side, and a new template engine with a novel feature was designed. Part of the system has been put into production; it includes templates for showing and editing the first-level trigger configuration and the equipment configuration, as well as for showing experiment metadata and the experiment conditions data index.
New web technologies for astronomy
NASA Astrophysics Data System (ADS)
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
2014-12-01
Thanks to the new HTML5 capabilities and the huge improvements in the JavaScript language, it is now possible to design very complex and interactive web user interfaces. On top of that, the once monolithic, file-serving web servers are evolving into easily programmable server applications capable of coping with the complex interactions made possible by the new generation of browsers. We believe that the whole community of amateur and professional astronomers can benefit from the potential of these new technologies. New web interfaces can be designed to provide the user with far more intuitive and interactive tools. Accessing astronomical data archives; scheduling, controlling and monitoring observatories, in particular robotic telescopes; and supervising data reduction pipelines are all capabilities that can now be implemented in a JavaScript web application. In this paper we describe the Sadira package we are implementing exactly to this aim.
On the Design of a Comprehensive Authorisation Framework for Service Oriented Architecture (SOA)
2013-07-01
A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.
ERIC Educational Resources Information Center
Cohen, Laura B.
2003-01-01
Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, a web server, a GIS application server and a spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
TCRmodel: high resolution modeling of T cell receptors from sequence.
Gowthaman, Ragul; Pierce, Brian G
2018-05-22
T cell receptors (TCRs), along with antibodies, are responsible for specific antigen recognition in the adaptive immune response, and millions of unique TCRs are estimated to be present in each individual. Understanding the structural basis of TCR targeting has implications in vaccine design, autoimmunity, as well as T cell therapies for cancer. Given advances in deep sequencing leading to immune repertoire-level TCR sequence data, fast and accurate modeling methods are needed to elucidate the shared and unique 3D structural features of these molecules that lead to their antigen targeting and cross-reactivity. We developed a new algorithm in the program Rosetta to model TCRs from sequence, and implemented this functionality in a web server, TCRmodel. This web server provides an easy-to-use interface; models are generated quickly and can be investigated in the browser and downloaded. Benchmarking of this method using a set of nonredundant recently released TCR crystal structures shows that models are accurate and compare favorably to models from another available modeling method. This server enables the community to obtain insights into TCRs of interest, and can be combined with methods to model and design TCR recognition of antigens. The TCRmodel server is available at: http://tcrmodel.ibbr.umd.edu/.
A dictionary server for supplying context sensitive medical knowledge.
Ruan, W; Bürkle, T; Dudeck, J
2000-01-01
The Giessen Data Dictionary Server (GDDS), developed at Giessen University Hospital, integrates clinical systems with on-line, context sensitive medical knowledge to help with making medical decisions. By "context" we mean the clinical information that is being presented at the moment the information need is occurring. The dictionary server makes use of a semantic network supported by a medical data dictionary to link terms from clinical applications to their proper information sources. It has been designed to analyze the network structure itself instead of knowing the layout of the semantic net in advance. This enables us to map appropriate information sources to various clinical applications, such as nursing documentation, drug prescription and cancer follow up systems. This paper describes the function of the dictionary server and shows how the knowledge stored in the semantic network is used in the dictionary service.
Can "patient keeper" help in-patients?
Al-Hinnawi, M F
2009-06-01
The aim of this paper is to present our "Patient Keeper" application, which is a client-server medical application. "Patient Keeper" is designed to run on a mobile phone for the client application and on a PC for the server application, using J2ME and JAVA2, respectively. This application can help doctors during visits to their patients in hospitals. The client application allows doctors to store on their mobile phones the results of their diagnoses and findings such as temperature, blood pressure, medications, analysis, etc., and send this information to the server via short message service (SMS) for storage in a database. The server can also respond to any request from the client and send the result via Bluetooth, infrared, or over the air. Experimental results showed a significant improvement in healthcare delivery and a reduction in in-patient stay.
Design and development of a mobile system for supporting emergency triage.
Michalowski, W; Slowinski, R; Wilk, S; Farion, K J; Pike, J; Rubin, S
2005-01-01
Our objective was to design and develop a mobile clinical decision support system for emergency triage of different acute pain presentations. The system should interact with existing hospital information systems, run on mobile computing devices (handheld computers) and be suitable for operation in weak-connectivity conditions (with unstable connections between mobile clients and a server). The MET (Mobile Emergency Triage) system was designed following an extended client-server architecture. The client component, responsible for triage decision support, is built as a knowledge-based system, with domain ontology separated from generic problem-solving methods and used for the automatic creation of a user interface. The MET system is well suited for operation in the Emergency Department of a hospital. The system's external interactions are managed by the server, while the MET clients, running on handheld computers, are used by clinicians for collecting clinical data and supporting triage at the bedside. The functionality of the MET client is distributed into specialized modules, responsible for triaging specific types of acute pain presentations. The modules are stored on the server, and on request they can be transferred and executed on the mobile clients. The modular design provides for easy extension of the system's functionality. A clinical trial of the MET system validated the appropriateness of the system's design, and proved the usefulness and acceptance of the system in clinical practice. The MET system captures the necessary hospital data, allows for entry of patient information, and provides triage support. By operating on handheld computers, it fits into the regular emergency department workflow without introducing any hindrances or disruptions. It supports triage anytime and anywhere, directly at the point of care, and can also be used as an electronic patient chart, facilitating structured data collection.
Xu, Youjun; Wang, Shiwei; Hu, Qiwan; Gao, Shuaishi; Ma, Xiaomin; Zhang, Weilin; Shen, Yihang; Chen, Fangjin; Lai, Luhua; Pei, Jianfeng
2018-05-10
CavityPlus is a web server that offers protein cavity detection and various functional analyses. Using protein three-dimensional structural information as the input, CavityPlus applies CAVITY to detect potential binding sites on the surface of a given protein structure and rank them based on ligandability and druggability scores. These potential binding sites can be further analysed using three submodules, CavPharmer, CorrSite, and CovCys. CavPharmer uses a receptor-based pharmacophore modelling program, Pocket, to automatically extract pharmacophore features within cavities. CorrSite identifies potential allosteric ligand-binding sites based on motion correlation analyses between cavities. CovCys automatically detects druggable cysteine residues, which is especially useful to identify novel binding sites for designing covalent allosteric ligands. Overall, CavityPlus provides an integrated platform for analysing comprehensive properties of protein binding cavities. Such analyses are useful for many aspects of drug design and discovery, including target selection and identification, virtual screening, de novo drug design, and allosteric and covalent-binding drug design. The CavityPlus web server is freely available at http://repharma.pku.edu.cn/cavityplus or http://www.pkumdl.cn/cavityplus.
NASA Astrophysics Data System (ADS)
Nugraha, Ucu
2017-06-01
The village is the administrative level below the sub-district in the regional government system, where population data services are mostly provided manually. Such manual systems frequently lead to invalid data, and to records that do not correspond to the facts, because of frequent errors in population data collection (including data on the elderly) and in data transfer. Similarly, correspondence documents such as death certificates, birth certificates, certificates of change of domicile, and so forth have their own problems. Data archives are frequently unsystematic because they are not organized properly or not stored in a database. An information system for the population census at this level can assist government agencies, especially in managing the census at the village level. The designed system makes the census process easier; it is initiated by the submission of a population letter by each citizen who comes to the village administrative office. The client-server-based population census information system for Bagolo Village was designed with an effective, uncomplicated workflow and interface. With the client-server approach, data are stored centrally on the server, reducing data duplication and data loss. Therefore, when local governments require information related to the population data of a village, they can obtain it easily without having to collect the data directly in the village.
Distributed control system for demand response by servers
NASA Astrophysics Data System (ADS)
Hall, Joseph Edward
Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
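The frequency-based adjustment described in the abstract above can be illustrated with a simple proportional controller. The sketch below is not the thesis' actual algorithm; the gain, power envelope, and 60 Hz nominal frequency are assumed values chosen only to show the idea that under-frequency sheds load and over-frequency absorbs more.

```python
# Illustrative sketch (not the thesis' algorithm): a proportional controller
# that maps grid-frequency deviation to a server power target.
NOMINAL_HZ = 60.0              # assume a 60 Hz grid
P_MIN, P_MAX = 150.0, 400.0    # hypothetical server power envelope, watts
GAIN = 500.0                   # watts of response per Hz of deviation (assumed)

def power_target(measured_hz, baseline_w):
    """Under-frequency means the grid is short on generation, so shed load;
    over-frequency means surplus, so absorb more."""
    deviation = measured_hz - NOMINAL_HZ
    target = baseline_w + GAIN * deviation
    return max(P_MIN, min(P_MAX, target))

if __name__ == "__main__":
    for hz in (59.95, 60.00, 60.04):
        print(hz, "->", round(power_target(hz, baseline_w=300.0), 1), "W")
```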
An accountability server for health care information systems.
Kowalski, S
1994-02-01
The paper begins by briefly discussing the ethical, legal and administrative/management controls that are required before accountability mechanisms can be implemented in automated clinical patient record systems. After these social aspects are discussed, the technical aspects of the accountability server are outlined. The security concepts of the ECMA framework are reviewed and used to explain the technical design of the server. A walk-through of the server in a typical patient record transaction is used to explain its operation. The paper concludes with a general discussion of the usefulness of accountability mechanisms in making security in health care information systems work in practice.
Min, Xiang Jia
2013-01-01
Expressed Sequence Tags (ESTs) are a rich resource for identifying alternatively spliced (AS) genes. The ASFinder web server is designed to identify AS isoforms from EST-derived sequences. Two approaches are implemented in ASFinder. If no genomic sequences are provided, the server performs a local BLASTN to identify AS isoforms from ESTs having both ends aligned but an internal segment unaligned. Otherwise, ASFinder uses SIM4 to map ESTs to the genome, and overlapping ESTs that are mapped to the same genomic locus and have variable internal exon/intron boundaries are identified as AS isoforms. The tool is available at http://proteomics.ysu.edu/tools/ASFinder.html.
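As a rough sketch of the first (genome-free) approach described above, the fragment below scans BLASTN tabular output (-outfmt 6) for EST pairs whose alignments leave an internal query segment unaligned. The 50-nt gap threshold and the minimal bookkeeping are assumptions for illustration, not ASFinder's actual criteria.

```python
# Minimal sketch: flag EST pairs whose HSPs cover two separate query blocks
# separated by an unaligned internal segment, a possible splicing signature.
from collections import defaultdict

def candidate_as_pairs(blast_tab_path, min_gap=50):
    hsps = defaultdict(list)                       # (query, subject) -> [(qstart, qend)]
    with open(blast_tab_path) as fh:
        for line in fh:
            f = line.rstrip("\n").split("\t")
            q, s = f[0], f[1]
            qstart, qend = sorted((int(f[6]), int(f[7])))
            if q != s:
                hsps[(q, s)].append((qstart, qend))
    candidates = []
    for pair, blocks in hsps.items():
        blocks.sort()
        for (s1, e1), (s2, e2) in zip(blocks, blocks[1:]):
            if s2 - e1 > min_gap:                  # unaligned internal segment
                candidates.append((pair, e1, s2))
    return candidates
```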
[Design and establishment of modern literature database about acupuncture Deqi].
Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao
2015-02-01
A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and in PubMed, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then established using Microsoft SQL Server 2005 Express Edition, and the contents, data types, information structure and logical constraints of the system table fields are described. From this database, detailed queries about general information on clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. can be made. The present database lays a foundation for subsequent evaluation of the quality of the Deqi literature and for data mining of as yet undetected Deqi knowledge.
Ioannidis, Vassilios; van Nimwegen, Erik; Stockinger, Heinz
2016-01-01
ISMARA (ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer for performing ISMARA analysis in a completely automated manner, including uploading of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client. PMID:28232860
Accessing multimedia content from mobile applications using semantic web technologies
NASA Astrophysics Data System (ADS)
Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin
2014-02-01
We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.
PWMScan: a fast tool for scanning entire genomes with a position-specific weight matrix.
Ambrosini, Giovanna; Groux, Romain; Bucher, Philipp
2018-03-05
Transcription factors (TFs) regulate gene expression by binding to specific short DNA sequences of 5-20 bp, thereby controlling the rate at which genetic information is transcribed from DNA to messenger RNA. We present PWMScan, a fast web-based tool to scan server-resident genomes for matches to a user-supplied position weight matrix (PWM) or TF binding site model from a public database. The web server and source code are available at http://ccg.vital-it.ch/pwmscan and https://sourceforge.net/projects/pwmscan, respectively. Contact: giovanna.ambrosini@epfl.ch. Supplementary data are available at Bioinformatics online.
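For readers unfamiliar with PWM scanning, the toy sketch below slides a small log-odds matrix along a sequence and reports windows above a cutoff. The matrix values and cutoff are invented; PWMScan's actual engine, score-to-P-value mapping, and genome indexing are considerably more elaborate.

```python
# Toy illustration of PWM scanning (not the PWMScan implementation).
def score_window(pwm, window):
    return sum(pwm[i][base] for i, base in enumerate(window))

def scan(sequence, pwm, cutoff):
    w = len(pwm)
    for i in range(len(sequence) - w + 1):
        window = sequence[i:i + w]
        s = score_window(pwm, window)
        if s >= cutoff:
            yield i, window, s

# A 3-column toy matrix of log-odds scores (assumed values, for demonstration).
pwm = [{"A": 1.2, "C": -1.0, "G": -0.7, "T": -1.3},
       {"A": -1.1, "C": 1.4, "G": -0.9, "T": -1.0},
       {"A": -0.8, "C": -1.2, "G": 1.3, "T": -1.1}]

for pos, win, s in scan("TTACGGACGTT", pwm, cutoff=3.0):
    print(pos, win, round(s, 2))
```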
Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho
2015-01-01
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user’s management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.’s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.’s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.’s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties. PMID:26709702
Choi, Younsung; Nam, Junghyun; Lee, Donghoon; Kim, Jiye; Jung, Jaewook; Won, Dongho
2014-01-01
An anonymous user authentication scheme allows a user, who wants to access a remote application server, to achieve mutual authentication and session key establishment with the server in an anonymous manner. To enhance the security of such authentication schemes, recent researches combined user's biometrics with a password. However, these authentication schemes are designed for single server environment. So when a user wants to access different application servers, the user has to register many times. To solve this problem, Chuang and Chen proposed an anonymous multiserver authenticated key agreement scheme using smart cards together with passwords and biometrics. Chuang and Chen claimed that their scheme not only supports multiple servers but also achieves various security requirements. However, we show that this scheme is vulnerable to a masquerade attack, a smart card attack, a user impersonation attack, and a DoS attack and does not achieve perfect forward secrecy. We also propose a security enhanced anonymous multiserver authenticated key agreement scheme which addresses all the weaknesses identified in Chuang and Chen's scheme. PMID:25276847
2013-01-01
Background The binding of transcription factors to DNA plays an essential role in the regulation of gene expression. Numerous experiments elucidated binding sequences which subsequently have been used to derive statistical models for predicting potential transcription factor binding sites (TFBS). The rapidly increasing number of genome sequence data requires sophisticated computational approaches to manage and query experimental and predicted TFBS data in the context of other epigenetic factors and across different organisms. Results We have developed D-Light, a novel client-server software package to store and query large amounts of TFBS data for any number of genomes. Users can add small-scale data to the server database and query them in a large scale, genome-wide promoter context. The client is implemented in Java and provides simple graphical user interfaces and data visualization. Here we also performed a statistical analysis showing what a user can expect for certain parameter settings and we illustrate the usage of D-Light with the help of a microarray data set. Conclusions D-Light is an easy to use software tool to integrate, store and query annotation data for promoters. A public D-Light server, the client and server software for local installation and the source code under GNU GPL license are available at http://biwww.che.sbg.ac.at/dlight. PMID:23617301
A Design of a Network Model to the Electric Power Trading System Using Web Services
NASA Astrophysics Data System (ADS)
Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki; Kitayama, Masashi; Izumi, Yoshio
Web services are regarded as a new application paradigm in the world of the Internet. At the same time, many business models for power trading systems have been proposed, aiming at load reduction by consumers cooperating with electric power suppliers in an electric power market. In this paper we propose a network model of a power trading system using Web services. The suitability of Web services for a power trading system was verified with a prototype of our network model, with good results. Each server exposes its functions as a SOAP server, and the servers are loosely coupled with each other through SOAP. Because SOAP messages are carried inside HTTP packets, communication can pass through firewalls transparently. Dynamic server switching is also possible by rewriting the server endpoint information in the WSDL when a failure occurs.
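The loose coupling via SOAP over HTTP described above can be sketched as follows; the endpoint URL, XML namespace, and operation name are invented placeholders, not identifiers from the paper.

```python
# Hedged sketch of the message flow only: wrapping a request in a SOAP envelope
# and sending it over an ordinary HTTP POST.
import urllib.request

ENDPOINT = "http://trading.example.org/soap"   # placeholder SOAP server

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <RequestLoadReduction xmlns="urn:example:power-trading">
      <consumerId>C-042</consumerId>
      <kilowatts>15</kilowatts>
    </RequestLoadReduction>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:power-trading#RequestLoadReduction"},
)
# Because the SOAP message rides inside an ordinary HTTP POST, it passes
# through firewalls like any other web traffic.
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```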
NASA Astrophysics Data System (ADS)
Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.
2014-06-01
PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, consisting mainly of the OpenLayers javascript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
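To give a flavour of what such a WCPS request looks like in practice, the sketch below submits a query over HTTP to a rasdaman-style endpoint to pull one spectral band of a hyperspectral coverage. The service URL, coverage identifier, and axis name are assumptions, not the actual PlanetServer identifiers.

```python
# Illustrative only: sending a WCPS query to a rasdaman-style OGC endpoint.
import urllib.parse
import urllib.request

WCPS_ENDPOINT = "http://planetserver.example.org/rasdaman/ows"  # placeholder

query = """
for c in (frt0000c518)
return encode(c[band(120)], "csv")
""".strip()

url = WCPS_ENDPOINT + "?" + urllib.parse.urlencode(
    {"service": "WCS", "version": "2.0.1",
     "request": "ProcessCoverages", "query": query})
with urllib.request.urlopen(url) as resp:
    print(resp.read()[:200])
```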
System Design for Navy Occupational Standards Development
2014-07-01
Defense in Depth Added to Malicious Activities Simulation Tools (MAST)
2015-09-01
The TLS handshake is a combination of three components: handshake, change cipher spec, and alert. The "Hello" portion of the handshake is designed to negotiate the session parameters (cipher suite): in the Client Hello message the client informs the server of the protocols and standards that it supports, and the server then selects the highest common protocols and standards.
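The negotiation just described can be observed directly with Python's standard library; this is a general illustration rather than code from the MAST report. The client's supported cipher suites go out in the Client Hello, and the agreed protocol and cipher are available once the handshake completes.

```python
# Observe the outcome of a TLS handshake using the standard library.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("www.python.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.python.org") as tls:
        print("protocol :", tls.version())   # e.g. 'TLSv1.3'
        print("cipher   :", tls.cipher())    # (name, protocol, secret bits)
```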
The PARIGA server for real time filtering and analysis of reciprocal BLAST results.
Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna
2013-01-01
BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences over a database to more complex procedures such as clustering or annotation. As the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure, and tools for parsing or filtering BLAST results for different purposes are required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of serialized Python objects, results can be filtered according to several parameters in real time, without re-running the process and without additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, or where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations very fast and easy to use.
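One of the filters mentioned above, reciprocal best hits, reduces to a simple cross-check once each BLAST run has been collapsed to its best hit per query. The sketch below illustrates the idea with invented identifiers and is not PARIGA's implementation.

```python
# Keep only pairs where A's best hit in B points back to A.
def reciprocal_best_hits(best_a_to_b, best_b_to_a):
    pairs = []
    for a, b in best_a_to_b.items():
        if best_b_to_a.get(b) == a:
            pairs.append((a, b))
    return pairs

best_a_to_b = {"geneA1": "geneB7", "geneA2": "geneB3"}
best_b_to_a = {"geneB7": "geneA1", "geneB3": "geneA9"}
print(reciprocal_best_hits(best_a_to_b, best_b_to_a))   # [('geneA1', 'geneB7')]
```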
SDS: A Framework for Scientific Data Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Bin; Byna, Surendra; Wu, Kesheng
2013-10-31
Large-scale scientific applications typically write their data to parallel file systems with organizations designed to achieve fast write speeds. Analysis tasks frequently read the data in a pattern that is different from the write pattern, and therefore experience poor I/O performance. In this paper, we introduce a prototype framework for bridging the performance gap between write and read stages of data access from parallel file systems. We call this framework Scientific Data Services, or SDS for short. This initial implementation of SDS focuses on reorganizing previously written files into data layouts that benefit read patterns, and transparently directs read calls to the reorganized data. SDS follows a client-server architecture. The SDS Server manages partial or full replicas of reorganized datasets and serves SDS Clients' requests for data. The current version of the SDS client library supports the HDF5 programming interface for reading data. The client library intercepts HDF5 calls and transparently redirects them to the reorganized data. The SDS client library also provides a querying interface for reading part of the data based on user-specified selective criteria. We describe the design and implementation of the SDS client-server architecture, and evaluate the response time of the SDS Server and the performance benefits of SDS.
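The transparent-redirection idea can be caricatured in a few lines of Python with h5py: reads consult a catalog of reorganized replicas and fall back to the original file otherwise. The catalog structure and file names are invented, and the real SDS client library intercepts HDF5 calls transparently rather than requiring an explicit wrapper like this.

```python
# Conceptual sketch of redirection (not the SDS client library).
import h5py

# Hypothetical catalog: (original file, dataset) -> replica laid out for reads
REPLICAS = {("simulation.h5", "/particles/x"): "simulation_sorted.h5"}

def read_dataset(path, dset, selection=slice(None)):
    source = REPLICAS.get((path, dset), path)   # use replica when available
    with h5py.File(source, "r") as f:
        return f[dset][selection]
```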
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team with an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
Design and architecture of the Mars relay network planning and analysis framework
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Lee, C. H.
2002-01-01
In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports generation and validation of efficient planning and scheduling strategies. The goals are to minimize transmission time, minimize delay, and/or maximize network throughput. The proposed framework would require (1) a client-server architecture to support interactive, batch, web, and distributed analysis and planning applications for the relay network analysis scheme, (2) a high-fidelity modeling and simulation environment that expresses link capabilities from spacecraft to spacecraft and from spacecraft to Earth stations as time-varying resources, together with spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints, and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
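As a toy example of casting relay scheduling as a standard constrained optimization problem, the sketch below maximizes returned data volume over one shared pass using SciPy's linear programming routine. The link rates, pass length, and visibility limits are invented and bear no relation to the actual Mars Network models.

```python
# Toy linear program: allocate minutes of a shared relay pass to two links.
from scipy.optimize import linprog

rates = [2.0, 0.5]            # Mbit per second for link 1 and link 2 (assumed)
c = [-r * 60 for r in rates]  # minimize the negative of Mbit returned per minute

# One shared 30-minute pass; each link is visible for at most 20 minutes.
A_ub = [[1, 1]]
b_ub = [30]
bounds = [(0, 20), (0, 20)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("minutes per link:", res.x, "data returned (Mbit):", -res.fun)
```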
Ruan, W; Bürkle, T; Dudeck, J
2000-01-01
In this paper we present a data dictionary server for the automated navigation of information sources. The underlying knowledge is represented within a medical data dictionary. The mapping between medical terms and information sources is based on a semantic network. The key aspect of implementing the dictionary server is how to represent the semantic network in a way that is easy to navigate and to operate on, i.e. how to abstract the semantic network and represent it in memory for various operations. This paper describes an object-oriented design based on Java that represents the semantic network as a group of objects. A node and its relationships to its neighbors are encapsulated in one object. Based on such a representation model, several operations have been implemented. They comprise the extraction of the parts of the semantic network that can be reached from a given node, as well as finding all paths between a start node and a predefined destination node. This solution is independent of any given layout of the semantic structure. Therefore the module, called the Giessen Data Dictionary Server, can act independently of a specific clinical information system. The dictionary server will be used to present clinical information, e.g. treatment guidelines or drug information sources, to the clinician in an appropriate working context. The server is invoked from clinical documentation applications which contain an infobutton. Automated navigation will guide the user to all the information relevant to her/his topic that is currently available inside our closed clinical network.
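A language-shifted sketch of the object design described above (the module itself is written in Java) might look as follows: each node object encapsulates its relations, and all paths from a start term to a destination node are enumerated by depth-first search. The example terms are invented.

```python
# Each node holds its outgoing relations; all_paths enumerates every acyclic
# route from a start node to a goal node by depth-first search.
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []          # related Node objects

    def link(self, other):
        self.neighbors.append(other)

def all_paths(start, goal, path=None):
    path = (path or []) + [start]
    if start is goal:
        return [path]
    found = []
    for nxt in start.neighbors:
        if nxt not in path:          # avoid cycles in the semantic net
            found.extend(all_paths(nxt, goal, path))
    return found

term, drug, guideline = Node("fever"), Node("antipyretics"), Node("guideline-123")
term.link(drug); drug.link(guideline); term.link(guideline)
print([[n.name for n in p] for p in all_paths(term, guideline)])
```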
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
Design and implementation of a cloud based lithography illumination pupil processing application
NASA Astrophysics Data System (ADS)
Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie
2017-02-01
Pupil parameters are important parameters for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the WebSocket protocol and the JSON format are used for the communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and a LaTeX-based automatic reporting system. The cloud-based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional model of software operation (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud-based approach, which requires no installation and is easy to use and maintain, opens up a new way of delivering software. Cloud-based applications are probably the future of software development.
HOXB7 and Hsa-miR-222 as the Potential Therapeutic Candidates for Metastatic Colorectal Cancer.
Iman, Maryam; Mostafavi, Seyede Samaneh; Arab, Seyed Shahriar; Azimzadeh, Sadegh; Poorebrahim, Mansour
2016-01-01
Recent studies have shown that the high mortality of patients with colorectal cancer (CRC) is related to its ability to spread to the surrounding tissues; thus there is a need for designing and developing new drugs. Here, we proposed a combination therapy strategy, an inhibitory peptide combined with miRNA targeting, for modulating CRC metastasis. In this study, some of the recent patents were also reviewed. After data analysis with GEO2R and gene annotation using the DAVID server, regulatory interactions of differentially expressed genes (DEGs) were obtained from the STRING, GeneMANIA, KEGG and TRED databases. In parallel, the corresponding validated microRNAs (miRNAs) were obtained from the mirDIP web server and a miRNA-DEG regulatory network was also reconstructed. Clustering and topological analyses of the regulatory networks were performed using Cytoscape plug-ins. We found the HOXB family to be the most important functional complex in the DEG-derived regulatory network. Accordingly, an anti-HOXB7 peptide was designed based on the binding interface of its coactivator, PBX1. Topological analysis of the miRNA-DEG network indicated that hsa-miR-222 is one of the most important oncomirs involved in the regulation of DEG activities. Thus, this miRNA, along with HOXB7, was also considered as a potential target for inhibiting CRC metastasis. Molecular docking studies showed that the designed peptide can bind to the desired binding pocket of HOXB7 with high affinity. Further confirmation was obtained in molecular dynamics (MD) simulations carried out with the GROMACS v5.0.2 simulation package. In conclusion, our findings suggest that simultaneous targeting of key regulatory genes and miRNAs may be a useful strategy for the prevention of CRC metastasis.
EarthServer - 3D Visualization on the Web
NASA Astrophysics Data System (ADS)
Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes
2013-04-01
EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of different platforms with very different soft- and hardware requirements, such as smart phones (e.g. iOS, Android), different desktop systems etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client, and on top of HTML5, WebGL and JavaScript, we have developed the X3DOM framework (www.x3dom.org), which makes it possible to embed declarative X3D scene graphs, an ISO-standard XML-based file format for representing 3D computer graphics, directly within HTML, thus enabling developers to rapidly design 3D content that blends seamlessly into HTML interfaces using JavaScript. This approach (commonly referred to as a polyfill layer) is used to mimic native web browser support for declarative 3D content and is an important component in our web client architecture.
Thimmaiah, Tim; Voje, William E; Carothers, James M
2015-01-01
With progress toward inexpensive, large-scale DNA assembly, the demand for simulation tools that allow the rapid construction of synthetic biological devices with predictable behaviors continues to increase. By combining engineered transcript components, such as ribosome binding sites, transcriptional terminators, ligand-binding aptamers, catalytic ribozymes, and aptamer-controlled ribozymes (aptazymes), gene expression in bacteria can be fine-tuned, with many corollaries and applications in yeast and mammalian cells. The successful design of genetic constructs that implement these kinds of RNA-based control mechanisms requires modeling and analyzing kinetically determined co-transcriptional folding pathways. Transcript design methods using stochastic kinetic folding simulations to search spacer sequence libraries for motifs enabling the assembly of RNA component parts into static ribozyme- and dynamic aptazyme-regulated expression devices with quantitatively predictable functions (rREDs and aREDs, respectively) have been described (Carothers et al., Science 334:1716-1719, 2011). Here, we provide a detailed practical procedure for computational transcript design by illustrating a high throughput, multiprocessor approach for evaluating spacer sequences and generating functional rREDs. This chapter is written as a tutorial, complete with pseudo-code and step-by-step instructions for setting up a computational cluster with an Amazon, Inc. web server and performing the large numbers of kinefold-based stochastic kinetic co-transcriptional folding simulations needed to design functional rREDs and aREDs. The method described here should be broadly applicable for designing and analyzing a variety of synthetic RNA parts, devices and transcripts.
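The fan-out of many independent folding simulations described above maps naturally onto a worker pool. The sketch below is a generic illustration of that step only; run_kinefold is a placeholder standing in for the actual call to the kinefold executable and the scoring of its output, and the spacer sequences are invented.

```python
# Generic worker-pool sketch for evaluating many candidate spacer sequences.
from multiprocessing import Pool

def run_kinefold(spacer):
    """Placeholder for one stochastic co-transcriptional folding simulation.
    A real pipeline would invoke the external kinefold executable (e.g. via
    subprocess) and parse its trajectory for the target helices."""
    return spacer, hash(spacer) % 100          # stand-in "score"

if __name__ == "__main__":
    spacers = ["AUCGAU", "GGCAUA", "UUACGC", "CAUGGC"]
    with Pool(processes=4) as pool:
        for spacer, score in pool.map(run_kinefold, spacers):
            print(spacer, score)
```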
An Array Library for Microsoft SQL Server with Astrophysical Applications
NASA Astrophysics Data System (ADS)
Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.
2012-09-01
Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: the Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.
Lounnas, V; Vriend, G
2012-02-27
Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually--a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures, because there are many different protocols for drawing a 2D image of a ligand but, more importantly, because a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The AsteriX Web server is freely available at http://swift.cmbi.ru.nl/bitmapb/.
Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas
2018-04-01
The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface to the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions by the Mutual information-based CMAT approach, classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, and to annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of visualCMAT are organized for convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to bioinformatic, statistical and structural analyses of the predicted co-evolution, or further studied online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5 and therefore neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server capable of constructing large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, applied to select hotspots and compensatory mutations for rational design and directed evolution experiments aimed at producing novel enzymes with improved properties, and employed to study the mechanisms of selective ligand binding and allosteric communication between topologically independent sites in protein structures. The web-server is freely available at https://biokinet.belozersky.msu.ru/visualcmat and there are no login requirements.
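The statistic underlying mutual-information-based co-evolution analysis of the kind described above can be illustrated with a minimal sketch: compute the mutual information between two columns of a multiple sequence alignment. This is only an illustration of the general idea, not the visualCMAT implementation.

```python
# Minimal sketch: mutual information (in bits) between two alignment columns,
# the kind of statistic mutual-information-based co-evolution methods build on.
from collections import Counter
from math import log2

def column_mi(col_i, col_j):
    """Mutual information between two aligned columns of equal length."""
    assert len(col_i) == len(col_j)
    n = len(col_i)
    p_i = Counter(col_i)
    p_j = Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), n_ab in p_ij.items():
        p_ab = n_ab / n
        mi += p_ab * log2(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

# Toy alignment columns: the first two covary perfectly, the third does not.
col1 = "AAAALLLL"
col2 = "DDDDKKKK"
col3 = "AGAGAGAG"
print(column_mi(col1, col2))  # high MI: candidate co-evolving pair
print(column_mi(col1, col3))  # ~0: no covariation
```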
NASA Technical Reports Server (NTRS)
Douard, Stephane
1994-01-01
Known as a Graphic Server, the system presented was designed for the control ground segment of the Telecom 2 satellites. It is a tool used to dynamically display telemetry data within graphic pages, also known as views. The views are created off-line through various utilities and then, on the operator's request, displayed and animated in real time as data is received. The system was designed as an independent component, and is installed in different Telecom 2 operational control centers. It enables operators to monitor changes in the platform and satellite payloads in real time. It has been in operation since December 1991.
NASA Astrophysics Data System (ADS)
Ibrahim, Maslina Mohd; Yussup, Nolida; Haris, Mohd Fauzi; Soh @ Shaari, Syirrazie Che; Azman, Azraf; Razalim, Faizal Azrin B. Abdul; Yapp, Raymond; Hasim, Harzawardi; Aslan, Mohd Dzul Aiman
2017-01-01
One application of radiation detectors is area monitoring, which is crucial for safety, especially at places where radiation sources are present. An environmental radiation monitoring system is a professional system that combines flexibility and ease of use for data collection and monitoring. Nowadays, with the growth of technology, devices and equipment can be connected to the network and Internet to enable online data acquisition. This technology enables data from the area monitoring devices to be transmitted directly and more quickly to any place and location. In Nuclear Malaysia, area radiation monitoring devices are located at several selected locations such as laboratories and radiation facilities. This system utilizes Ethernet as the communication medium for acquiring the area radiation levels from radiation detectors and stores the data on a server for recording and analysis. This paper discusses the design and development of a website that enables all users in Nuclear Malaysia to access and monitor the radiation level of each radiation detector online in real time. The web design also includes a query feature for historical data from various locations online. The communication between the server software and the web server is discussed in detail in this paper.
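The data path described above (detector-side client posting readings over the network, server-side process storing them for the web interface) can be sketched as follows. The endpoint URL, station names and table layout are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described data path: a detector-side client posts readings
# over the network and a server-side process stores them for online queries.
# The URL, station names and schema below are illustrative only.
import json
import sqlite3
import time
import urllib.request

def post_reading(station, dose_rate_usv_h, url="http://example.local/api/readings"):
    """Send one area-monitor reading as JSON to a (hypothetical) collection endpoint."""
    payload = json.dumps({
        "station": station,
        "dose_rate_usv_h": dose_rate_usv_h,
        "timestamp": time.time(),
    }).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

def store_reading(db_path, reading):
    """Server side: append a reading to the history table used for online queries."""
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS readings "
                   "(station TEXT, dose_rate_usv_h REAL, timestamp REAL)")
        db.execute("INSERT INTO readings VALUES (?, ?, ?)",
                   (reading["station"], reading["dose_rate_usv_h"], reading["timestamp"]))
```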
Vanquelef, Enguerran; Simon, Sabrina; Marquant, Gaelle; Garcia, Elodie; Klimerak, Geoffroy; Delepine, Jean Charles; Cieplak, Piotr; Dupradeau, François-Yves
2011-07-01
R.E.D. Server is a unique, open web service designed to derive non-polarizable RESP and ESP charges and to build force field libraries for new molecules/molecular fragments. It provides computational biologists with the means to rigorously derive molecular electrostatic potential-based charges embedded in force field libraries that are ready to be used in force field development, charge validation and molecular dynamics simulations. R.E.D. Server interfaces quantum mechanics programs, the RESP program and the latest version of the R.E.D. tools. A two-step approach has been developed. The first step consists of preparing P2N file(s) to rigorously define key elements such as atom names, topology and chemical equivalencing needed when building a force field library. Then, P2N files are used to derive RESP or ESP charges embedded in force field libraries in the Tripos mol2 format. In complex cases an entire set of force field libraries or a force field topology database is generated. Other features developed in R.E.D. Server include help services, a demonstration, tutorials, frequently asked questions, Jmol-based tools useful to construct PDB input files and parse R.E.D. Server outputs, as well as a graphical queuing system allowing any user to check the status of R.E.D. Server jobs.
Design of SIP transformation server for efficient media negotiation
NASA Astrophysics Data System (ADS)
Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee
2001-07-01
Voice over IP (VoIP) is one of the advanced services supported by next generation mobile communication. VoIP should support various media formats and terminals existing together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among themselves. To solve the problem, an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using the transformation server and the Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server. It can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps users' location information and available media information. The proposed architecture can eliminate the unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round trip times. The extra processing time in message handling is negligible in comparison to the reduced round trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.
Vfold: a web server for RNA structure and folding thermodynamics prediction.
Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie
2014-01-01
The ever increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".
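The two-dimensional prediction step described above rests on nearest-neighbor free-energy evaluation: summing experimentally derived stacking energies over consecutive base pairs, plus loop entropy terms. The following is a minimal sketch of that idea only; the numeric values are rough placeholders and are not the parameters actually used by the Vfold server.

```python
# Illustrative sketch of nearest-neighbor stacking energy summation for a helix.
# Values are placeholders (kcal/mol at 37 C), not the actual Vfold/Turner parameters.
STACK_ENERGY = {
    ("GC", "GC"): -3.3,
    ("CG", "CG"): -2.4,
    ("AU", "AU"): -0.9,
    ("GC", "AU"): -2.1,
}

def helix_stack_energy(base_pairs):
    """Sum stacking free energies over consecutive base pairs of a helix."""
    total = 0.0
    for bp1, bp2 in zip(base_pairs, base_pairs[1:]):
        total += STACK_ENERGY.get((bp1, bp2), -1.0)  # default for pairs not tabulated
    return total

print(helix_stack_energy(["GC", "GC", "AU", "AU"]))  # -3.3 + -2.1 + -0.9 = -6.3
```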
Software for Building Models of 3D Objects via the Internet
NASA Technical Reports Server (NTRS)
Schramer, Tim; Jensen, Jeff
2003-01-01
The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.
Xie, Yang; Ying, Jinyong; Xie, Dexuan
2017-03-30
SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu. © 2017 Wiley Periodicals, Inc.
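For orientation, the classic (non-size-modified) Poisson-Boltzmann equation that SMPBE contains as a special case is shown below in a standard textbook form; this is provided as background and is not transcribed from the SMPBS paper, whose size-modified variant adds steric terms for finite ion sizes.

```latex
% Classic Poisson-Boltzmann equation (the special case mentioned above),
% written in a standard textbook form.
\nabla \cdot \bigl(\epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r})\bigr)
  + \sum_{i} q_i \, c_i^{\infty}
    \exp\!\left(-\frac{q_i \, \phi(\mathbf{r})}{k_B T}\right)
  = -\rho_f(\mathbf{r})
```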
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001: 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
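The regularized-variance idea described above can be sketched as follows: blend each probe's empirical variance with a background variance pooled from probes of similar intensity, then use the blended variance in the t statistic. This is a hedged reconstruction of the general approach, with a commonly cited pseudo-count weighting, not the server's exact code or parameter choices.

```python
# Minimal sketch of a Bayesian-regularized t-test: combine empirical and background
# variance with a pseudo-count weight. Hedged reconstruction, not Cyber-T's code.
import numpy as np
from scipy import stats

def regularized_var(sample_var, n, background_var, nu0=10):
    """Blend empirical variance with a background estimate (nu0 pseudo-counts)."""
    return (nu0 * background_var + (n - 1) * sample_var) / (nu0 + n - 2)

def reg_t_test(x, y, bg_var_x, bg_var_y, nu0=10):
    """Two-sample t statistic using regularized variances for each condition."""
    nx, ny = len(x), len(y)
    vx = regularized_var(np.var(x, ddof=1), nx, bg_var_x, nu0)
    vy = regularized_var(np.var(y, ddof=1), ny, bg_var_y, nu0)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)
    df = nx + ny - 2
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

# Example with few replicates, where regularization matters most.
x = np.array([7.1, 7.3, 7.2])
y = np.array([8.0, 8.4, 8.1])
print(reg_t_test(x, y, bg_var_x=0.05, bg_var_y=0.05))
```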
Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju
2013-01-01
The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Next Generation Multimedia Distributed Data Base Systems
NASA Technical Reports Server (NTRS)
Pendleton, Stuart E.
1997-01-01
The paradigm of client/server computing is changing. The model of a server running a monolithic application and supporting clients at the desktop is giving way to a different model that blurs the line between client and server. We are on the verge of plunging into the next generation of computing technology--distributed object-oriented computing. This is not only a change in requirements but a change in opportunities, and requires a new way of thinking for Information System (IS) developers. The information system demands caused by global competition are requiring even more access to decision making tools. Simply, object-oriented technology has been developed to supersede the current design process of information systems which is not capable of handling next generation multimedia.
Efficient Monitoring of CRAB Jobs at CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, J. M.D.; Balcas, J.; Belforte, S.
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Clustalnet: the joining of Clustal and CORBA.
Campagne, F
2000-07-01
Performing sequence alignment operations from a different program than the original sequence alignment code, and/or through a network connection, is often required. Interactive alignment editors and large-scale biological data analysis are common examples where such flexibility is important. Interoperability between the alignment engine and the client should be obtained regardless of the architectures and programming languages of the server and client. Clustalnet, a Clustal alignment CORBA server, is described, which was developed on the basis of Clustalw. This server brings the robustness of the algorithms and implementations of Clustal to a new level of reuse. A Clustalnet server object can be accessed from a program, transparently through the network. We present interfaces to perform the alignment operations and to control these operations via immutable contexts. The interfaces that select the contexts do not depend on the nature of the operation to be performed, making the design modular. The IDL interfaces presented here are not specific to Clustal and can be implemented on top of different sequence alignment algorithm implementations.
Preliminary Results on Design and Implementation of a Solar Radiation Monitoring System
Balan, Mugur C.; Damian, Mihai; Jäntschi, Lorentz
2008-01-01
The paper presents a solar radiation monitoring system, using two scientific pyranometers and an on-line, home-made computer data acquisition system. The first pyranometer measures the global solar radiation and the other one, which is shaded, measures the diffuse radiation. The values of total and diffuse solar radiation are continuously stored in a database on a server. Original software was created for data acquisition and interrogation of the created system. The server application acquires the data from the pyranometers and stores it in the database at a rate of one record every 50 seconds. The client-server application queries the database and provides descriptive statistics. A web interface allows any user to define the inclusion criteria and to obtain the results. In terms of results, the system is able to provide direct, diffuse and total radiation intensities as time series. Our client-server application also computes derived heats. The ability of the system to evaluate the local solar energy potential is highlighted. PMID:27879746
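The derivation of the direct component from the two pyranometer channels follows from a simple relationship: the unshaded instrument measures global irradiance, the shaded one measures the diffuse component, and the direct component on the horizontal plane is their difference. A minimal sketch of this time-series computation is below; the numeric values are made up for illustration.

```python
# Minimal sketch: direct (beam) irradiance on the horizontal plane is the difference
# between global and diffuse readings. Values (W/m^2) are made up for illustration.
global_irr = [640.0, 655.0, 612.0, 598.0]   # unshaded pyranometer
diffuse_irr = [180.0, 175.0, 190.0, 200.0]  # shaded pyranometer

direct_irr = [g - d for g, d in zip(global_irr, diffuse_irr)]
mean_direct = sum(direct_irr) / len(direct_irr)
print(direct_irr)   # per-record direct component
print(mean_direct)  # simple descriptive statistic, as the client-server app provides
```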
Integrating RFID technique to design mobile handheld inventory management system
NASA Astrophysics Data System (ADS)
Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung
2008-04-01
An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Abstract Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without the special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
Vcs.js - Visualization Control System for the Web
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Lipsa, D.; Doutriaux, C.; Beezley, J. D.; Williams, D. N.; Fries, S.; Harris, M. B.
2016-12-01
VCS is a general purpose visualization library, optimized for climate data, which is part of the UV-CDAT system. It provides a Python API for drawing 2D plots such as line plots, scatter plots, Taylor diagrams, data colored by scalar values, vector glyphs, isocontours and map projections. VCS is based on the VTK library. Vcs.js is the corresponding JavaScript API, designed to be as close as possible to the original VCS Python API and to provide similar functionality for the Web. Vcs.js includes additional functionality when compared with VCS. This additional API is used to introspect data files available on the server and variables available in a data file. Vcs.js can display plots in the browser window. It always works with a server that reads a data file, extracts variables from the file and subsets the data. From this point, two alternate paths are possible. First, the system can render the data on the server using VCS, producing an image which is sent to the browser to be displayed. This path works for all plot types and produces a reference image identical to the images produced by VCS. This path uses the VTK-Web library. As an optimization, usable in certain conditions, a second path is possible. Data is packed and sent to the browser, which uses a JavaScript plotting library, such as plotly, to display the data. Plots that work well in the browser are line plots and scatter plots for any data, and many other plot types for small data and supported grid types. As web technology matures, more plots could be supported for rendering in the browser. Rendering can be done either on the client or on the server, and we expect that the best place to render will change depending on the available web technology, data transfer costs, server management costs and value provided to users. We intend to provide a flexible solution that allows for both client and server side rendering and a meaningful way to choose between the two. We provide a web-based user interface called vCdat which uses Vcs.js as its visualization library. Our paper will discuss the principles guiding our design choices for Vcs.js, present our design in detail and show a sample usage of the library.
MIIC online: a web server to reconstruct causal or non-causal networks from non-perturbative data.
Sella, Nadir; Verny, Louis; Uguzzoni, Guido; Affeldt, Séverine; Isambert, Hervé
2018-07-01
We present a web server running the MIIC algorithm, a network learning method combining constraint-based and information-theoretic frameworks to reconstruct causal, non-causal or mixed networks from non-perturbative data, without the need for an a priori choice on the class of reconstructed network. Starting from a fully connected network, the algorithm first removes dispensable edges by iteratively subtracting the most significant information contributions from indirect paths between each pair of variables. The remaining edges are then filtered based on their confidence assessment or oriented based on the signature of causality in observational data. MIIC online server can be used for a broad range of biological data, including possible unobserved (latent) variables, from single-cell gene expression data to protein sequence evolution and outperforms or matches state-of-the-art methods for either causal or non-causal network reconstruction. MIIC online can be freely accessed at https://miic.curie.fr. Supplementary data are available at Bioinformatics online.
Chen, Hung-Ming; Liou, Yong-Zan
2014-10-01
In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR). The CCR is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of CCR parsing models, including the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCRs. Thus, developers can make sensible choices for their target PHR applications when parsing CCRs on mobile devices or servers with different system resources. Furthermore, simulation experiments on four case studies were conducted to compare the parsing performance on Android mobile devices and a server with large quantities of CCR data.
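Two of the parsing models named above can be contrasted with a minimal sketch: a DOM-style parse builds the whole tree in memory, while an event-driven SAX parse streams through the document. The element names below are illustrative and are not the actual CCR schema.

```python
# Minimal sketch contrasting DOM-style and SAX parsing on a toy XML snippet.
# Element names are illustrative, not the real CCR schema.
import time
import xml.etree.ElementTree as ET
import xml.sax

TOY_CCR = ("<ContinuityOfCareRecord>"
           + "".join(f"<Result><Value>{i}</Value></Result>" for i in range(10000))
           + "</ContinuityOfCareRecord>")

class ValueCounter(xml.sax.ContentHandler):
    """SAX handler that counts <Value> elements without building a tree."""
    def __init__(self):
        super().__init__()
        self.count = 0
    def startElement(self, name, attrs):
        if name == "Value":
            self.count += 1

t0 = time.perf_counter()
root = ET.fromstring(TOY_CCR)                   # DOM-style: full tree in memory
dom_count = len(root.findall(".//Value"))
t1 = time.perf_counter()

handler = ValueCounter()
xml.sax.parseString(TOY_CCR.encode(), handler)  # SAX: streaming, low memory footprint
t2 = time.perf_counter()

print(f"DOM: {dom_count} values in {t1 - t0:.4f}s; SAX: {handler.count} in {t2 - t1:.4f}s")
```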
The Scientific Uplink and User Support System for SIRTF
NASA Astrophysics Data System (ADS)
Heinrichsen, I.; Chavez, J.; Hartley, B.; Mei, Y.; Potts, S.; Roby, T.; Turek, G.; Valjavec, E.; Wu, X.
The Space Infrared Telescope Facility (SIRTF) is one of NASA's Great Observatory missions, scheduled for launch in 2001. As such its ground segment design is driven by the requirement to provide strong support for the entire astronomical community starting with the call for Legacy Proposals in early 2000. In this contribution, we present the astronomical user interface and the design of the server software that comprises the Scientific Uplink System for SIRTF. The software architecture is split into three major parts: A front-end Java application deployed to the astronomical community providing the capabilities to visualize and edit proposals and the associated lists of observations. This observer toolkit provides templates to define all parameters necessary to carry out the required observations. A specialized version of this software, based on the same overall architecture, is used internal to the SIRTF Science Center to prepare calibration and engineering observations. A Weblogic (TM) based middleware component brokers the transactions with the servers, astronomical image and catalog sources as well as the SIRTF operational databases. Several server systems perform the necessary computations, to obtain resource estimates, target visibilities and to access the instrument models for signal to noise calculations. The same server software is used internally at a later stage to derive the detailed command sequences needed by the SIRTF instruments and spacecraft to execute a given observation.
The Impact of Inherent Instructional Design in Online Courseware.
ERIC Educational Resources Information Center
Harvey, Douglas M.; Lee, Jung
2001-01-01
Examines how the use of server-based courseware development solutions affects the instructional design process when creating online distance education. Highlights include pedagogical, visual interface (e.g., visual metaphor and navigation layout), interaction, and instructional design implications of online courseware. (Contains 54 references.)…
Prototyping a 10 Gigabit-Ethernet Event-Builder for the CTA Camera Server
NASA Astrophysics Data System (ADS)
Hoffmann, Dirk; Houles, Julien
2012-12-01
While the Cherenkov Telescope Array will end its Preparatory Phase in 2012 or 2013 with the publication of a Technical Design Report, our lab has undertaken, within the French CTA community, the design and prototyping of a Camera Server, a PC-architecture-based computer used as a switchboard assigned to each of a hundred telescopes to handle the maximum amount of scientific data recorded by each telescope. Our work aims at a data acquisition hardware and software system that handles the scientific raw data at optimal speed. We have evaluated the maximum performance that can be obtained by choosing standard (COTS) hardware and software (Linux) in conjunction with a 10 Gb/s switch.
ADAGE signature analysis: differential expression analysis with data-defined gene sets.
Tan, Jie; Huyck, Matthew; Hu, Dongbo; Zelaya, René A; Hogan, Deborah A; Greene, Casey S
2017-11-22
Gene set enrichment analysis and overrepresentation analyses are commonly used methods to determine the biological processes affected by a differential expression experiment. This approach requires biologically relevant gene sets, which are currently curated manually, limiting their availability and accuracy in many organisms without extensively curated resources. New feature learning approaches can now be paired with existing data collections to directly extract functional gene sets from big data. Here we introduce a method to identify perturbed processes. In contrast with methods that use curated gene sets, this approach uses signatures extracted from public expression data. We first extract expression signatures from public data using ADAGE, a neural network-based feature extraction approach. We next identify signatures that are differentially active under a given treatment. Our results demonstrate that these signatures represent biological processes that are perturbed by the experiment. Because these signatures are directly learned from data without supervision, they can identify uncurated or novel biological processes. We implemented ADAGE signature analysis for the bacterial pathogen Pseudomonas aeruginosa. For the convenience of different user groups, we implemented both an R package (ADAGEpath) and a web server (http://adage.greenelab.com) to run these analyses. Both are open-source to allow easy expansion to other organisms or signature generation methods. We applied ADAGE signature analysis to an example dataset in which wild-type and ∆anr mutant cells were grown as biofilms on cystic fibrosis genotype bronchial epithelial cells. We mapped active signatures in the dataset to KEGG pathways and compared them with pathways identified using GSEA. The two approaches generally return consistent results; however, ADAGE signature analysis also identified a signature that revealed the molecularly supported link between the MexT regulon and Anr. We designed ADAGE signature analysis to perform gene set analysis using data-defined functional gene signatures. This approach addresses an important gap for biologists studying non-traditional model organisms and those without extensive curated resources available. We built both an R package and web server to provide ADAGE signature analysis to the community.
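The signature-activity idea described above can be sketched very simply: a signature is a weight vector over genes learned by the autoencoder, its activity in a sample is the weighted sum of that sample's expression values, and differentially active signatures are found by comparing activities between conditions. The weights and expression values below are simulated; this is not the ADAGEpath code.

```python
# Minimal sketch of signature activity and differential activity testing.
# All numbers are simulated for illustration; this is not the ADAGEpath implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_wt, n_mut = 200, 4, 4
signature_weights = rng.normal(size=n_genes)          # one learned signature (weights over genes)

expr_wt = rng.normal(size=(n_genes, n_wt))            # wild-type samples
expr_mut = rng.normal(size=(n_genes, n_mut)) + 0.3 * signature_weights[:, None]  # perturbed

activity_wt = signature_weights @ expr_wt             # signature activity per sample
activity_mut = signature_weights @ expr_mut

t, p = stats.ttest_ind(activity_mut, activity_wt)
print(f"signature activity t = {t:.2f}, p = {p:.3g}")  # small p => differentially active
```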
NASA Astrophysics Data System (ADS)
Mehring, James W.; Thomas, Scott D.
1995-11-01
The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID Server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program that is currently under way. Throughput benchmark measurements were made to support design configuration decisions for a COTS RAID Server performing as the system image buffer. The test program began with performance measurements of the RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID Server and the image archive and processing system, as well as for the client-side interface between a custom interface board that connects the internal bus of the RAID Server to the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to service client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.
Managing Attribute—Value Clinical Trials Data Using the ACT/DB Client—Server Database System
Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.
1998-01-01
ACT/DB is a client-server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity-attribute-value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
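The entity-attribute-value layout described above, with values segregated by data type so they can be indexed, can be sketched with a few tables. The table and column names are illustrative rather than the actual ACT/DB schema, and SQLite stands in for the Oracle back end.

```python
# Minimal sketch of an entity-attribute-value (EAV) layout with per-type value tables.
# Schema names are illustrative, not the real ACT/DB design; SQLite stands in for Oracle.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE attribute (attr_id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);
CREATE TABLE eav_numeric (patient_id INTEGER, attr_id INTEGER, value REAL);
CREATE TABLE eav_text    (patient_id INTEGER, attr_id INTEGER, value TEXT);
CREATE INDEX idx_num_value ON eav_numeric (attr_id, value);  -- index by value
""")

db.execute("INSERT INTO attribute VALUES (1, 'systolic_bp', 'numeric')")
db.execute("INSERT INTO attribute VALUES (2, 'adverse_event', 'text')")
db.execute("INSERT INTO eav_numeric VALUES (1001, 1, 142.0)")
db.execute("INSERT INTO eav_text    VALUES (1001, 2, 'headache')")

# Query: all patients whose systolic blood pressure exceeds 140.
rows = db.execute("""
    SELECT e.patient_id, e.value FROM eav_numeric e
    JOIN attribute a ON a.attr_id = e.attr_id
    WHERE a.name = 'systolic_bp' AND e.value > 140
""").fetchall()
print(rows)  # [(1001, 142.0)]
```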
2010-05-01
support multi-server operations, demonstrating the feasibility of the approach. Fourth, it evaluates the prototype to show that performance is reasonable ... architects make many such trade-offs in the course of designing a system. If the architect's goal is the best possible performance at any cost, then ... needs to be transferred, and the unit is reasonably sized (a directory or a small number of directories), the transfer latency can also be small
Experiences with DCE: the pro7 communication server based on OSF-DCE functionality.
Schulte, M; Lordieck, W
1997-01-01
The pro7 communication server is a new approach to managing communication between different applications on different hardware platforms in a hospital environment. The most important features are the use of OSF/DCE for realising remote procedure calls between different platforms, the use of an SQL-92 compatible relational database, and the design of a new software development tool (called the protocol definition language compiler) for describing the interface of a new application that is to be integrated into a hospital environment.
NASA Technical Reports Server (NTRS)
1994-01-01
This is a draft report on the Government Information Locator Service (GILS) to the National Information Infrastructure (NII) task force. GILS is designed to take advantage of internetworking technology known as client-server architecture which allows information to be distributed among multiple independent information servers. Two appendices are provided -- (1) A glossary of related terminology and (2) extracts from a draft GILS profile for the use of the American National Standard Information Retrieval Application Service Definition and Protocol Specification for Library Applications.
NASA Technical Reports Server (NTRS)
Hein, G. F.
1974-01-01
Special purpose satellites are very cost sensitive to the number of broadcast channels, usually have Poisson arrivals, fairly low utilization (less than 35%), and a very high availability requirement. To determine the effects of limiting C, the number of channels, the Poisson-arrival, infinite-server queueing model is modified to describe the many-server case. The model is predicated on the reproductive property of the Poisson distribution.
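For context, one standard way to quantify the effect of limiting the number of channels C under Poisson arrivals is the Erlang loss (Erlang B) probability for an M/G/C/C system. The sketch below computes it with the usual numerically stable recursion; it is offered as background and is not claimed to be the report's own derivation or parameter values.

```python
# Erlang B blocking probability for c channels and offered load a = lambda/mu (Erlangs),
# computed with the standard recursion B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1.
def erlang_b(c, offered_load):
    b = 1.0
    for k in range(1, c + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

# Example: offered load of 2 Erlangs (consistent with fairly low utilization)
# and a small number of broadcast channels.
for channels in (4, 6, 8):
    print(channels, round(erlang_b(channels, 2.0), 4))
```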
2010-01-01
[Table fragment: findings counts for Symantec Server Antivirus, Service Passwords, Banner Needs, and Unauthorized Software.] ... software needed to manage and operate systems in the testing rooms. Systems in the testing rooms were made to resemble shipboard Navy systems as closely ... (i.e., workstation and server software, routing and switching, operating systems, and so forth). This training was also designed to provide
General Framework for Animal Food Safety Traceability Using GS1 and RFID
NASA Astrophysics Data System (ADS)
Cao, Weizhu; Zheng, Limin; Zhu, Hong; Wu, Ping
GS1 is a global traceability standard, composed of an encoding system (EAN/UCC, EPC), automatic identification data carriers (bar codes, RFID), and electronic data interchange standards (EDI, XML). RFID is a non-contact, multi-objective automatic identification technique. Tracing food back to its source, standardization of RFID tags, and sharing of dynamic data are urgent problems for current traceability systems. This paper designs a general framework for animal food safety traceability using GS1 and RFID. The framework uses RFID tags encoded according to the EPCglobal tag data standards. Each information server has an access tier, a business tier and a resource tier. These servers are heterogeneous and distributed, providing user access interfaces via the SOAP or HTTP protocols. For sharing dynamic data, a discovery service and an object name service are used to locate the dynamic, distributed information servers.
Multimedia data repository for the World Wide Web
NASA Astrophysics Data System (ADS)
Chen, Ken; Lu, Dajin; Xu, Duanyi
1998-08-01
This paper introduces the design and implementation of a Multimedia Data Repository serving as a multimedia information system, which provides users a Web accessible, platform independent interface to query, browse, and retrieve multimedia data such as images, graphics, audio, and video from a large multimedia data repository. By integrating the multimedia DBMS, in which the textual information and samples of the multimedia data are organized and stored, and the Web server into the Microsoft ActiveX Server Framework, users can access the DBMS and query the information by simply using a Web browser at the client side. The original multimedia data can then be located and transmitted through the Internet from the tertiary storage device, a 400-CD-ROM optical jukebox at the server side, to the client side for further use.
AGGRESCAN3D (A3D): server for prediction of aggregation properties of protein structures
Zambrano, Rafael; Jamroz, Michal; Szczasiuk, Agata; Pujols, Jordi; Kmiecik, Sebastian; Ventura, Salvador
2015-01-01
Protein aggregation underlies an increasing number of disorders and constitutes a major bottleneck in the development of therapeutic proteins. Our present understanding of the molecular determinants of protein aggregation has crystallized in a series of predictive algorithms to identify aggregation-prone sites. A majority of these methods rely only on sequence. Therefore, they have difficulty predicting the aggregation properties of folded globular proteins, where aggregation-prone sites are often not contiguous in sequence or are buried inside the native structure. The AGGRESCAN3D (A3D) server overcomes these limitations by taking into account the protein structure and the experimental aggregation propensity scale from the well-established AGGRESCAN method. Using the A3D server, the identified aggregation-prone residues can be virtually mutated to design variants with increased solubility, or to test the impact of pathogenic mutations. Additionally, the A3D server makes it possible to take into account the dynamic fluctuations of protein structure in solution, which may influence aggregation propensity. This is possible in the A3D Dynamic Mode, which exploits the CABS-flex approach for fast simulations of the flexibility of globular proteins. The A3D server can be accessed at http://biocomp.chem.uw.edu.pl/A3D/. PMID:25883144
Wang, Xianwen; Liu, Zhiguo; Zhang, Wenchang; Wu, Qingfu; Tan, Shulin
2013-08-01
We have designed a mobile operating room information management system. The system is composed of a client and a server. The client, consisting of a PC, medical equipment, a PLC and sensors, provides the acquisition and processing of anesthesia and micro-environment data. The server is a powerful computer that stores the data of the system. The client gathers the medical device data using the client/server mode and analyzes the obtained HL7 messages through class library calls. The client collects the micro-environment information with the PLC and reads the data using OPC technology. Experimental results showed that the designed system can manage the patient anesthesia and micro-environment information well, and improve the efficiency of the doctors' work and the level of digitization of the mobile operating room.
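The HL7 message analysis mentioned above can be illustrated with a minimal sketch that splits an HL7 v2-style pipe-delimited message into segments and fields. The message content is made up for illustration and is not a real patient record or the system's actual class library.

```python
# Minimal sketch of splitting an HL7 v2-style message into segments and fields.
# The message below is fabricated for illustration only.
RAW_HL7 = (
    "MSH|^~\\&|MONITOR|OR1|HIS|HOSP|202301011200||ORU^R01|0001|P|2.3\r"
    "PID|1||123456||DOE^JOHN\r"
    "OBX|1|NM|HR^HeartRate||72|bpm\r"
    "OBX|2|NM|SPO2^OxygenSaturation||98|%\r"
)

def parse_hl7(message):
    """Return a list of (segment_name, fields) tuples."""
    segments = []
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.append((fields[0], fields[1:]))
    return segments

for name, fields in parse_hl7(RAW_HL7):
    if name == "OBX":
        # fields[2] holds the observation identifier, fields[4] the value, fields[5] the units
        print(fields[2], fields[4], fields[5])
```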
Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.
Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn
2015-03-20
USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.
CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.
Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali
2016-01-13
Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app. We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The design of the PE teaching information resource database uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, and adopts NAS technology for data storage and streaming technology for the video service. The analysis of the system design and implementation shows that the dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration as well as dynamic and active integration, and has good openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of the informatization of PE teaching.
Risk Assessment of the Naval Postgraduate School Gigabit Network
2004-09-01
[Table fragment: server inventory listing Management Server (1), RAS Server (1), Remedy Server (1), Samba Servers (2), SQL Servers (3), Web Servers (3), WINS Server (1), and library and other Windows 2000 Advanced Server hosts with their administrators.]
CIAN - Cell Imaging and Analysis Network at the Biology Department of McGill University
Lacoste, J.; Lesage, G.; Bunnell, S.; Han, H.; Küster-Schöck, E.
2010-01-01
CF-31 The Cell Imaging and Analysis Network (CIAN) provides services and tools to researchers in the field of cell biology from within or outside Montreal's McGill University community. CIAN is composed of six scientific platforms: Cell Imaging (confocal and fluorescence microscopy), Proteomics (2-D protein gel electrophoresis and DiGE, fluorescent protein analysis), Automation and High throughput screening (Pinning robot and liquid handler), Protein Expression for Antibody Production, Genomics (real-time PCR), and Data storage and analysis (cluster, server, and workstations). Users submit project proposals, and can obtain training and consultation in any aspect of the facility, or initiate projects with the full-service platforms. CIAN is designed to facilitate training, enhance interactions, as well as share and maintain resources and expertise.
"One-Stop Shopping" for Ocean Remote-Sensing and Model Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook
2006-01-01
OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as MySQL database, Java Web Server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with Java Applet at the client side and MatLab/GMT at the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served in pre-generated plots or their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by graphic software, Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for
Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin
2017-01-21
RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation. Algorithms relying on sequence-based features alone usually have limitations in their prediction performance. Hence, integrating RNA structure features is very critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. The alignment-free algorithms of RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm is proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and its corresponding secondary structure features is provided. A multi-scale similarity score of two given RNAs is then designed based on wavelet decomposition of their numerical representation. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The web server requires RNA primary sequences as input, while the corresponding secondary structures are optional. Given primary sequences alone, the web server can compute the secondary structures using the free energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. The comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showcased that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner and can be freely downloaded from this server. RNA-TVcurve, along with test examples and detailed documents, is available at: http://ml.jlu.edu.cn/tvcurve/.
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without the special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
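Programmatic access through a RESTful API of the kind mentioned above typically amounts to posting gene-level values and a pathway identifier and receiving a rendered graph. The sketch below illustrates that pattern only; the endpoint path and parameter names are assumptions invented for illustration, and the real interface is documented on the Pathview Web site.

```python
# Sketch of programmatic access to a pathway-rendering REST API. The endpoint path
# and parameter names are hypothetical; consult the Pathview Web docs for the real API.
import json
import urllib.request

gene_data = {"hsa:1956": 2.1, "hsa:2064": -1.3}   # toy fold changes keyed by gene ID

payload = json.dumps({
    "pathway": "hsa04012",          # hypothetical parameter: KEGG pathway ID
    "gene_data": gene_data,         # hypothetical parameter: values to map onto nodes
    "species": "hsa",
}).encode()

req = urllib.request.Request(
    "https://pathview.uncc.edu/api/analysis",     # hypothetical endpoint path
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    result = json.load(resp)
print(result)  # e.g. a URL or identifier of the rendered pathway graph
```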
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to address this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V
2001-06-01
In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admission, discharge, and transfer (ADT)/demographics, orders, appointment notifications, doctor updates, and results.
Model of load balancing using reliable algorithm with multi-agent system
NASA Astrophysics Data System (ADS)
Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.
2017-04-01
Massive technology development scales with the growth of internet users, which increases network traffic activity and, in turn, the load on the system. The use of a reliable algorithm and mobile agents for distributed load balancing is a viable solution to handle the load on a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consists of defining the identification system, specification requirements, network topology and system infrastructure design. The simulation sent 1800 requests over 10 s from users to the servers and collected the resulting data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. The results of the simulation show that the LFB method with a mobile agent can balance load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
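The selection step of a least-time-first-byte policy can be sketched in a few lines: assuming a mobile agent periodically reports time-to-first-byte samples for each backend, the dispatcher picks the backend with the lowest recent average. The backend names and sample values below are hypothetical, and the real algorithm combines further agent-collected resource information.

```python
# Illustrative sketch of least-time-first-byte style selection, assuming a
# mobile agent has reported recent time-to-first-byte samples per backend.
# Server names and sample values are hypothetical.
from statistics import mean

def choose_backend(ttfb_samples):
    """ttfb_samples: {backend: [seconds, ...]} -> backend with lowest mean TTFB."""
    return min(ttfb_samples, key=lambda b: mean(ttfb_samples[b]))

agent_report = {
    "backend-1": [0.12, 0.15, 0.11],
    "backend-2": [0.30, 0.28, 0.33],   # heavily loaded, slower first byte
    "backend-3": [0.10, 0.14, 0.13],
}
print(choose_backend(agent_report))    # backend-3
```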
Internet Distribution of Spacecraft Telemetry Data
NASA Technical Reports Server (NTRS)
Specht, Ted; Noble, David
2006-01-01
Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.
Connection Map for Compounds (CMC): A Server for Combinatorial Drug Toxicity and Efficacy Analysis.
Liu, Lei; Tsompana, Maria; Wang, Yong; Wu, Dingfeng; Zhu, Lixin; Zhu, Ruixin
2016-09-26
Drug discovery and development is a costly and time-consuming process with a high risk of failure, resulting primarily from a drug's associated clinical safety and efficacy potential. Identifying and eliminating inapt candidate drugs as early as possible is an effective way of reducing unnecessary costs, but limited analytical tools are currently available for this purpose. Recent growth in the area of toxicogenomics and pharmacogenomics has provided a vast amount of drug expression microarray data. Web servers such as CMap and LTMap have used this information to evaluate drug toxicity and mechanisms of action independently; however, their wider applicability has been limited by the lack of a combinatorial drug-safety type of analysis. Using available genome-wide drug transcriptional expression profiles, we developed the first web server for combinatorial evaluation of toxicity and efficacy of candidate drugs, named "Connection Map for Compounds" (CMC). Using CMC, researchers can initially compare their query drug gene signatures with prebuilt gene profiles generated from two large-scale toxicogenomics databases, and subsequently perform a drug efficacy analysis for identification of known mechanisms of drug action or generation of new predictions. CMC provides a novel approach for drug repositioning and early evaluation in drug discovery with its unique combination of toxicity and efficacy analyses, expansibility of data and algorithms, and customization of reference gene profiles. CMC can be freely accessed at http://cadd.tongji.edu.cn/webserver/CMCbp.jsp.
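One simple connectivity-style comparison, shown here only as an illustration and not as CMC's actual algorithm, is to rank-correlate the expected directions of a query gene signature with a reference drug expression profile; the gene names and values below are invented.

```python
# Illustrative sketch, not CMC's algorithm: score how well a query up/down gene
# signature matches a reference drug expression profile by rank correlation of
# the query's expected directions with the reference fold changes.
from scipy.stats import spearmanr

reference_profile = {"TP53": 2.1, "MYC": -1.8, "EGFR": 0.4, "BAX": 1.2, "BCL2": -0.9}
query_signature = {"TP53": +1, "MYC": -1, "BAX": +1, "BCL2": -1}   # expected directions

shared = [g for g in query_signature if g in reference_profile]
rho, pval = spearmanr([query_signature[g] for g in shared],
                      [reference_profile[g] for g in shared])
print(f"connectivity-like score: {rho:.2f} (p={pval:.3f})")
```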
Access Control of Web- and Java-Based Applications
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Pajevski, Michael J.
2013-01-01
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
The Value of Web Log Data in Use-based Design and Testing.
ERIC Educational Resources Information Center
Burton, Mary C.; Walther, Joseph B.
2001-01-01
Suggests Web-based logs contain useful empirical data with which World Wide Web designers and design theorists can assess usability and effectiveness of design choices. Enumerates identification of types of Web server logs, client logs, types and uses of log data, and issues associated with the validity of these data. Presents an approach to…
ProbFAST: Probabilistic functional analysis system tool.
Silva, Israel T; Vêncio, Ricardo Z N; Oliveira, Thiago Y K; Molfetta, Greice A; Silva, Wilson A
2010-03-30
The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges are centered on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to usual methods of differential expression initially designed for paired analysis. We propose a web platform named ProbFAST for MDE analysis, which uses Bayesian inference to identify key genes that are intuitively prioritized by means of probabilities. A simulated study revealed that our method gives better performance than other approaches; when applied to public expression data, we demonstrated its flexibility to obtain relevant genes biologically associated with normal and abnormal biological processes. ProbFAST is a freely accessible web-based application that enables MDE analysis on a global scale. It offers an efficient methodological approach for MDE analysis of a set of genes that are turned on and off in relation to functional information during tumor evolution or tissue differentiation. The ProbFAST server can be accessed at http://gdm.fmrp.usp.br/probfast.
ProbFAST: Probabilistic Functional Analysis System Tool
2010-01-01
Background The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges are centered on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to usual methods of differential expression initially designed for paired analysis. Results We propose a web platform named ProbFAST for MDE analysis, which uses Bayesian inference to identify key genes that are intuitively prioritized by means of probabilities. A simulated study revealed that our method gives better performance than other approaches; when applied to public expression data, we demonstrated its flexibility to obtain relevant genes biologically associated with normal and abnormal biological processes. Conclusions ProbFAST is a freely accessible web-based application that enables MDE analysis on a global scale. It offers an efficient methodological approach for MDE analysis of a set of genes that are turned on and off in relation to functional information during tumor evolution or tissue differentiation. The ProbFAST server can be accessed at http://gdm.fmrp.usp.br/probfast. PMID:20353576
NASA Astrophysics Data System (ADS)
Kapulin, D. V.; Chemidov, I. V.; Kazantsev, M. A.
2017-01-01
In the paper, the aspects of design, development and implementation of the automated control system for warehousing under the manufacturing process of the radio-electronic enterprise JSC «Radiosvyaz» are discussed. The architecture of the automated control system for warehousing proposed in the paper consists of a server which is connected to the physically separated information networks: the network with a database server, which stores information about the orders for picking, and the network with the automated storage and retrieval system. This principle allows implementing the requirements for differentiation of access, ensuring the information safety and security requirements. Also, the efficiency of the developed automated solutions in terms of optimizing the warehouse’s logistic characteristics is researched.
Minimizing Wide-Area Performance Disruptions in Inter-Domain Routing
2011-09-01
Servers As another example, we saw the average round-trip time double for an ISP in Malaysia. The RTT increase was caused by a traffic shift to different... censorship, conduct wiretapping, or offer poor performance. This is achieved by applying regular expressions to the AS-PATH to assign lower preference
Antony, Joby; Mathuria, D S; Datta, T S; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW(®). This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
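The "device-server" idea, readout and control exposed as remote procedure calls over HTTP, can be illustrated with a deliberately simplified sketch. The real CADS instruments are ARM/ATMEL firmware; the Python server, URL paths and values below are purely hypothetical.

```python
# Toy illustration only: the CADS instruments are embedded firmware, but the
# idea of a device-server exposing readout and control over HTTP can be
# sketched with Python's standard library. Paths and values are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

SENSOR_VALUE = 4.2          # e.g. a temperature readout in kelvin (made up)
SETPOINT = {"value": 4.5}   # e.g. a PID setpoint (made up)

class DeviceServer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/read":                      # remote readout call
            body = f'{{"temperature_K": {SENSOR_VALUE}}}'.encode()
        elif self.path.startswith("/set?value="):     # remote control call
            SETPOINT["value"] = float(self.path.split("=", 1)[1])
            body = b'{"status": "ok"}'
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeviceServer).serve_forever()
```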
NASA Astrophysics Data System (ADS)
Antony, Joby; Mathuria, D. S.; Datta, T. S.; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antony, Joby; Mathuria, D. S.; Datta, T. S.
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as “CADS,” which stands for “Complete Automation of Distribution System.” CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
An architecture for real-time vision processing
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong
1994-01-01
To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
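The task-queue pattern described above can be sketched briefly: idle workers repeatedly pull tasks from a shared queue and post results back, so load balances itself without a centralized controller. The sketch below is a generic illustration under assumed tile data, not the 860VS implementation.

```python
# Minimal sketch of the task-queue idea (assumed, not the 860VS implementation):
# idle workers repeatedly fetch image-tile tasks from a shared queue and post
# results back, so load balances itself without a central controller.
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue):
    while True:
        task = tasks.get()
        if task is None:                 # sentinel: no more work
            break
        tile_id, pixels = task
        results.put((tile_id, sum(pixels) / len(pixels)))  # e.g. mean intensity

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for tile_id in range(8):                         # hypothetical image tiles
        tasks.put((tile_id, [tile_id] * 16))
    for _ in workers:
        tasks.put(None)                              # one sentinel per worker
    for w in workers:
        w.join()
    print(sorted(results.get() for _ in range(8)))
```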
AtlasCBS: a web server to map and explore chemico-biological space
NASA Astrophysics Data System (ADS)
Cortés-Cabrera, Álvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino
2012-09-01
New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.
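For orientation, two ligand efficiency indices of the kind used in such LEI plots can be computed as in the sketch below, using definitions commonly given in the LEI literature (pKi normalized by molecular weight and by polar surface area). The server's exact variables may differ, and the ligand values are invented.

```python
# Illustrative calculation of two ligand efficiency indices of the kind used in
# LEI plots (definitions as commonly given in the LEI literature; the server's
# exact variables may differ). Input values are hypothetical.
import math

def bei(ki_molar, mol_weight_da):
    """Binding Efficiency Index: pKi per kDa of molecular weight."""
    return -math.log10(ki_molar) / (mol_weight_da / 1000.0)

def sei(ki_molar, psa_a2):
    """Surface Efficiency Index: pKi per 100 A^2 of polar surface area."""
    return -math.log10(ki_molar) / (psa_a2 / 100.0)

# Hypothetical ligand: Ki = 50 nM, MW = 420 Da, PSA = 85 A^2.
print(f"BEI = {bei(50e-9, 420):.1f}, SEI = {sei(50e-9, 85):.1f}")
```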
Lu, Yanrong; Li, Lixiang; Yang, Xing; Yang, Yixian
2015-01-01
Biometrics-authenticated schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find that their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.'s scheme with the purpose of preventing the security threats of their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including those shown against the original scheme. In addition, we compare its performance and functionality with other multi-server authenticated key schemes.
Lu, Yanrong; Li, Lixiang; Yang, Xing; Yang, Yixian
2015-01-01
Biometrics-authenticated schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find that their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.’s scheme with the purpose of preventing the security threats of their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including those shown against the original scheme. In addition, we compare its performance and functionality with other multi-server authenticated key schemes. PMID:25978373
AtlasCBS: a web server to map and explore chemico-biological space.
Cortés-Cabrera, Alvaro; Morreale, Antonio; Gago, Federico; Abad-Zapatero, Celerino
2012-09-01
New approaches are needed that can help decrease the unsustainable failure in small-molecule drug discovery. Ligand Efficiency Indices (LEI) are making a great impact on early-stage compound selection and prioritization. Given a target-ligand database with chemical structures and associated biological affinities/activities for a target, the AtlasCBS server generates two-dimensional, dynamical representations of its contents in terms of LEI. These variables allow an effective decoupling of the chemical (angular) and biological (radial) components. BindingDB, PDBBind and ChEMBL databases are currently implemented. Proprietary datasets can also be uploaded and compared. The utility of this atlas-like representation in the future of drug design is highlighted with some examples. The web server can be accessed at http://ub.cbm.uam.es/atlascbs and https://www.ebi.ac.uk/chembl/atlascbs.
NASA Technical Reports Server (NTRS)
Muhsin, Mansour; Walters, Ian
2004-01-01
The Document Concurrence System is a combination of software modules for routing users' expressions of concurrence with documents. This system enables determination of the current status of concurrences and eliminates the need for the prior practice of manually delivering paper documents to all persons whose approvals were required. This system runs on a server, and participants gain access via personal computers equipped with Web-browser and electronic-mail software. A user can begin a concurrence routing process by logging onto an administration module, naming the approvers and stating the sequence for routing among them, and attaching documents. The server then sends a message to the first person on the list. Upon concurrence by the first person, the system sends a message to the second person, and so forth. A person on the list indicates approval, places the documents on hold, or indicates disapproval, via a Web-based module. When the last person on the list has concurred, a message is sent to the initiator, who can then finalize the process through the administration module. A background process running on the server identifies concurrence processes that are overdue and sends reminders to the appropriate persons.
GlobAl Distribution of GEnetic Traits (GADGET) web server: polygenic trait scores worldwide.
Chande, Aroon T; Wang, Lu; Rishishwar, Lavanya; Conley, Andrew B; Norris, Emily T; Valderrama-Aguirre, Augusto; Jordan, I King
2018-05-18
Human populations from around the world show striking phenotypic variation across a wide variety of traits. Genome-wide association studies (GWAS) are used to uncover genetic variants that influence the expression of heritable human traits; accordingly, population-specific distributions of GWAS-implicated variants may shed light on the genetic basis of human phenotypic diversity. With this in mind, we developed the GlobAl Distribution of GEnetic Traits web server (GADGET http://gadget.biosci.gatech.edu). The GADGET web server provides users with a dynamic visual platform for exploring the relationship between worldwide genetic diversity and the genetic architecture underlying numerous human phenotypes. GADGET integrates trait-implicated single nucleotide polymorphisms (SNPs) from GWAS, with population genetic data from the 1000 Genomes Project, to calculate genome-wide polygenic trait scores (PTS) for 818 phenotypes in 2504 individual genomes. Population-specific distributions of PTS are shown for 26 human populations across 5 continental population groups, with traits ordered based on the extent of variation observed among populations. Users of GADGET can also upload custom trait SNP sets to visualize global PTS distributions for their own traits of interest.
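A generic polygenic trait score of the kind described, a count of trait-associated alleles across GWAS SNPs, optionally weighted by effect size, can be sketched as below; the SNPs, alleles and weights are hypothetical and GADGET's exact scoring may differ.

```python
# Illustrative sketch of a simple polygenic trait score: count of trait-associated
# alleles across GWAS SNPs, optionally weighted by effect size. This is a generic
# formulation, not necessarily GADGET's exact scoring; SNPs and weights are made up.
def polygenic_score(genotypes, risk_alleles, weights=None):
    """genotypes: {rsid: (allele1, allele2)}; risk_alleles: {rsid: allele}."""
    score = 0.0
    for rsid, risk in risk_alleles.items():
        count = genotypes.get(rsid, ()).count(risk)       # 0, 1 or 2 copies
        score += count * (weights[rsid] if weights else 1.0)
    return score

genotypes = {"rs1001": ("A", "G"), "rs1002": ("T", "T"), "rs1003": ("C", "G")}
risk_alleles = {"rs1001": "A", "rs1002": "T", "rs1003": "G"}
effects = {"rs1001": 0.12, "rs1002": 0.05, "rs1003": 0.30}
print(polygenic_score(genotypes, risk_alleles))           # unweighted: 4.0
print(polygenic_score(genotypes, risk_alleles, effects))  # effect-weighted
```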
TAM 2.0: tool for MicroRNA set analysis.
Li, Jianwei; Han, Xiaofen; Wan, Yanping; Zhang, Shan; Zhao, Yingshu; Fan, Rui; Cui, Qinghua; Zhou, Yuan
2018-06-06
With the rapid accumulation of high-throughput microRNA (miRNA) expression profiles, an up-to-date resource for analyzing the functional and disease associations of miRNAs is increasingly demanded. We here describe the updated server TAM 2.0 for miRNA set enrichment analysis. Through manual curation of over 9000 papers, a more than two-fold growth of reference miRNA sets has been achieved in comparison with the previous TAM, covering 9945 and 1584 newly collected miRNA-disease and miRNA-function associations, respectively. Moreover, TAM 2.0 allows users not only to test the functional and disease annotations of miRNAs by overrepresentation analysis, but also to compare the input de-regulated miRNAs with those de-regulated in other disease conditions via correlation analysis. Finally, functions for miRNA set query and result visualization are also enabled in the TAM 2.0 server to facilitate the community. The TAM 2.0 web server is freely accessible at http://www.scse.hebut.edu.cn/tam/ or http://www.lirmed.com/tam2/.
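The overrepresentation step behind this kind of miRNA set analysis is conventionally a hypergeometric test of the overlap between the input miRNA list and a reference set; the sketch below uses invented counts and is not necessarily TAM 2.0's exact statistic.

```python
# Sketch of the overrepresentation step behind miRNA set enrichment: a
# hypergeometric test asking whether an input miRNA list overlaps a reference
# set (e.g. a disease-associated miRNA set) more than expected by chance.
# The counts below are hypothetical; TAM 2.0's exact statistics may differ.
from scipy.stats import hypergeom

total_mirnas = 2000        # miRNAs in the background
reference_set = 60         # miRNAs annotated to one disease/function set
input_list = 25            # de-regulated miRNAs supplied by the user
overlap = 8                # input miRNAs that fall in the reference set

# P(X >= overlap) under the hypergeometric null.
p_value = hypergeom.sf(overlap - 1, total_mirnas, reference_set, input_list)
print(f"enrichment p-value: {p_value:.2e}")
```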
GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application
NASA Technical Reports Server (NTRS)
McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.
2010-01-01
The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
The UMLS Knowledge Source Server: an experience in Web 2.0 technologies.
Thorn, Karen E; Bangalore, Anantha K; Browne, Allen C
2007-10-11
The UMLS Knowledge Source Server (UMLSKS), developed at the National Library of Medicine (NLM), makes the knowledge sources of the Unified Medical Language System (UMLS) available to the research community over the Internet. In 2003, the UMLSKS was redesigned utilizing state-of-the-art technologies available at that time. That design offered a significant improvement over the prior version but presented a set of technology-dependent issues that limited its functionality and usability. Four areas of desired improvement were identified: software interfaces, web interface content, system maintenance/deployment, and user authentication. By employing next generation web technologies, newer authentication paradigms and further refinements in modular design methods, these areas could be addressed and corrected to meet the ever increasing needs of UMLSKS developers. In this paper we detail the issues present with the existing system and describe the new system's design using new technologies considered entrants in the Web 2.0 development era.
Visualization of historical data for the ATLAS detector controls - DDV
NASA Astrophysics Data System (ADS)
Maciejewski, J.; Schlenker, S.
2017-10-01
The ATLAS experiment is one of four detectors located on the Large Hadron Collider (LHC) based at CERN. Its detector control system (DCS) stores the slow control data acquired within the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for future analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data outside of the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections. The server also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on Vaadin (a framework built around the Google Web Toolkit (GWT)), which gives users the possibility to access the data with ease. The DCS metadata can be selected using a column-tree navigation or a search engine supporting regular expressions. The data is visualized by a selection of output modules such as JavaScript value-over-time plots or a lazy-loading table widget. Additional plugins give the users the possibility to retrieve the data in ROOT format or as an ASCII file. Control system alarms can also be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing the user to query the pythonic DDV server directly, such that the users can embed the scripts into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server to share with others via URL or to embed in HTML.
2014-01-01
Background The advent of the human genome sequencing project has led to a spurt in the number of protein sequences in the databanks. Success of structure-based drug discovery severely hinges on the availability of structures. Despite significant progress in the area of experimental protein structure determination, the sequence-structure gap is continually widening. Data-driven, homology-based computational methods have proved successful in predicting tertiary structures for sequences sharing medium to high sequence similarities. With dwindling similarities of query sequences, advanced homology/ab initio hybrid approaches are being explored to solve the structure prediction problem. Here we describe Bhageerath-H, a homology/ab initio hybrid software/server for predicting protein tertiary structures, with advancing drug design attempts as one of the goals. Results The Bhageerath-H web server was validated on 75 CASP10 targets, which showed TM-scores ≥0.5 in 91% of the cases and Cα RMSDs ≤5 Å from the native in 58% of the targets, well above the CASP10 watermark. Comparison with some leading servers demonstrated the uniqueness of the hybrid methodology in effectively sampling conformational space, scoring best decoys and refining low-resolution models to high and medium resolution. Conclusion The Bhageerath-H methodology is web-enabled for the scientific community as a freely accessible web server. The methodology is fielded in the on-going CASP11 experiment. PMID:25521245
Implementation of Medical Information Exchange System Based on EHR Standard
Han, Soon Hwa; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong
2010-01-01
Objectives To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. Methods To assure that patient's medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. Results The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. Conclusions This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information. PMID:21818447
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2017-11-01
The objective of this paper is to analyse state-dependent arrivals in a bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations, a threshold and a constant retrial policy. Primary customers arrive into the system in bulk with different arrival rates λa and λb. If arriving customers find the server busy, then the entire batch joins the orbit. Customers from the orbit request service one by one with constant retrial rate γ. On the other hand, if arriving customers find the server idle, then customers are served in batches according to the general bulk service rule. After service completion, customers may request service again with probability δ (feedback) or leave the system with probability 1 - δ. At a service completion epoch, if the orbit size is zero then the server leaves for multiple vacations. The server continues the vacation until the orbit size reaches the value ‘N’ (N > b). At the vacation completion, if the orbit size is ‘N’ then the server becomes ready to provide service for customers from the main pool or from the orbit. For the designed queueing model, the probability generating function of the queue size at an arbitrary time is obtained by using the supplementary variable technique. Various performance measures are derived with suitable numerical illustrations.
AGGRESCAN3D (A3D): server for prediction of aggregation properties of protein structures.
Zambrano, Rafael; Jamroz, Michal; Szczasiuk, Agata; Pujols, Jordi; Kmiecik, Sebastian; Ventura, Salvador
2015-07-01
Protein aggregation underlies an increasing number of disorders and constitutes a major bottleneck in the development of therapeutic proteins. Our present understanding of the molecular determinants of protein aggregation has crystalized in a series of predictive algorithms to identify aggregation-prone sites. A majority of these methods rely only on sequence. Therefore, they have difficulty predicting the aggregation properties of folded globular proteins, where aggregation-prone sites are often not contiguous in sequence or are buried inside the native structure. The AGGRESCAN3D (A3D) server overcomes these limitations by taking into account the protein structure and the experimental aggregation propensity scale from the well-established AGGRESCAN method. Using the A3D server, the identified aggregation-prone residues can be virtually mutated to design variants with increased solubility, or to test the impact of pathogenic mutations. Additionally, the A3D server makes it possible to take into account the dynamic fluctuations of protein structure in solution, which may influence aggregation propensity. This is possible in the A3D Dynamic Mode, which exploits the CABS-flex approach for fast simulations of the flexibility of globular proteins. The A3D server can be accessed at http://biocomp.chem.uw.edu.pl/A3D/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Implementation of Medical Information Exchange System Based on EHR Standard.
Han, Soon Hwa; Lee, Min Ho; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong
2010-12-01
To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. To assure that patient's medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information.
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When faced with multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems becomes a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for jointly regulating water quantity and water quality of a group of lakes in Wuhan, China, is established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Mathew; Bowen, Brian; Coles, Dwight
The Middleware Automated Deployment Utilities consist of these three components: MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers. The product contains a front-end web utility and backend deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string that is used by the Oracle WebLogic application server. The encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.
The D3 Middleware Architecture
NASA Technical Reports Server (NTRS)
Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang
2002-01-01
DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing the sharing not only of visualizations of data, but also of commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans: we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall scheme for the system architecture, then put forward the software and hardware environment and determined the main function modules for the system. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and some other programming languages based on the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated in the system for easy maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to implement rapid visualization through the browser. Experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.
Construction of a multimedia application on public network
NASA Astrophysics Data System (ADS)
Liu, Jang; Wang, Chwan-Huei; Tseng, Ming-Yu; Hsiao, Sun-Lang; Luo, Wen-Hen; Tseng, Yung-Mean; Hung, Feng-Yue
1994-04-01
This paper describes our perception of current developments in networking, telecommunication and multimedia technology. As such, we have taken a constructive view. From this standpoint, we devised a client-server architecture that veils servers from their customers. It adheres to our conviction that network and location independence for server access is a future trend. We have constructed an on-line KARAOKE on an existing CVS (Chinese Videotex System) to test the workability of this architecture, and it works well. We are working on a prototype multimedia service network which is a miniature client-server structure of our proposal. A specially designed protocol is described. Through this protocol, a one-to-many connection can be set up, and to provide for multimedia applications, new connections can be established within a basic connection. So continuous media may have their own connections without being interrupted by other media, at least from the view of an application. We have advanced a constructive view which is not a framework itself, but it is tantamount to a framework in building systems as an assembly of methods, techniques, designs, and ideas. This is what a framework does, with more flexibility and availability.
Bumm, Klaus; Zheng, Mingzhong; Bailey, Clyde; Zhan, Fenghuang; Chiriva-Internati, M; Eddlemon, Paul; Terry, Julian; Barlogie, Bart; Shaughnessy, John D
2002-02-01
Clinical GeneOrganizer (CGO) is a novel windows-based archiving, organization and data mining software for the integration of gene expression profiling in clinical medicine. The program implements various user-friendly tools and extracts data for further statistical analysis. This software was written for Affymetrix GeneChip *.txt files, but can also be used for any other microarray-derived data. The MS-SQL server version acts as a data mart and links microarray data with clinical parameters of any other existing database and therefore represents a valuable tool for combining gene expression analysis and clinical disease characteristics.
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
Designing of peptides with desired half-life in intestine-like environment.
Sharma, Arun; Singla, Deepak; Rashid, Mamoon; Raghava, Gajendra Pal Singh
2014-08-20
In the past, a number of peptides have been reported to possess highly diverse properties ranging from cell penetrating, tumor homing, anticancer, anti-hypertensive, antiviral to antimicrobial. Owing to their excellent specificity, low toxicity, rich chemical diversity and availability from natural sources, the FDA has approved a number of peptide-based drugs and several are in various stages of drug development. Though peptides have proven to be good drug candidates, their usage is still hindered mainly because of their high susceptibility to protease degradation. We have developed an in silico method to predict the half-life of peptides in an intestine-like environment and to design better peptides having optimized physicochemical properties and half-life. In this study, we have used 10mer (HL10) and 16mer (HL16) peptide datasets to develop prediction models for peptide half-life in an intestine-like environment. First, SVM-based models were developed on the HL10 dataset, which achieved maximum correlation R/R2 of 0.57/0.32, 0.68/0.46, and 0.69/0.47 using amino acid, dipeptide and tripeptide composition, respectively. Secondly, models developed on the HL16 dataset showed maximum R/R2 of 0.91/0.82, 0.90/0.39, and 0.90/0.31 using amino acid, dipeptide and tripeptide composition, respectively. Furthermore, models developed on selected features achieved a correlation (R) of 0.70 and 0.98 on the HL10 and HL16 datasets, respectively. Preliminary analysis suggests a role of charged residues and amino acid size in peptide half-life/stability. Based on the above models, we have developed a web server named HLP (Half Life Prediction) for predicting and designing peptides with desired half-life. The web server provides three facilities: i) half-life prediction, ii) physicochemical properties calculation and iii) designing mutant peptides. In summary, this study describes a web server, HLP, developed for assisting the scientific community in predicting the intestinal half-life of peptides and in designing mutant peptides with better half-life and physicochemical properties. HLP models were trained using a dataset of peptides whose half-lives have been determined experimentally in a crude intestinal protease preparation. Thus, the HLP server will help in designing peptides possessing the potential to be administered via the oral route (http://www.imtech.res.in/raghava/hlp/).
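The modelling approach described, composition features plus support vector regression, can be sketched with scikit-learn; the peptides and half-lives below are invented, and the feature choice (dipeptide composition) is only one of the compositions the study used.

```python
# Sketch of the kind of model described: dipeptide-composition features fed to a
# support vector regressor predicting half-life. The peptides and half-lives
# below are invented; the real HLP models were trained on experimental data.
from itertools import product
from sklearn.svm import SVR

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]   # 400 features

def dipeptide_composition(peptide):
    counts = {dp: 0 for dp in DIPEPTIDES}
    for i in range(len(peptide) - 1):
        counts[peptide[i:i + 2]] += 1
    total = max(len(peptide) - 1, 1)
    return [counts[dp] / total for dp in DIPEPTIDES]

train_peptides = ["ACDKLMNRST", "GGHHIKLMNP", "WWYYACDEFG"]   # hypothetical 10-mers
train_half_life = [2.5, 4.0, 1.2]                             # hypothetical minutes

model = SVR(kernel="rbf").fit([dipeptide_composition(p) for p in train_peptides],
                              train_half_life)
print(model.predict([dipeptide_composition("ACDKLMNRSV")]))
```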
Liu, Baozhen; Liu, Zhiguo; Wang, Xianwen
2015-06-01
A mobile operating room information management system with electronic medical record (EMR) is designed to improve work efficiency and to enhance patient information sharing. In the operating room, this system acquires information from various medical devices through the Client/Server (C/S) pattern and automatically generates XML-based EMRs. Outside the operating room, this system provides information access service by using the Browser/Server (B/S) pattern. Software testing shows that this system can correctly collect medical information from equipment and clearly display real-time waveforms. By producing higher-quality surgery records and sharing the information among mobile medical units, this system can effectively reduce doctors' workload and promote the informatization of the field hospital.
Park, Byeonghyeok; Baek, Min-Jeong; Min, Byoungnam; Choi, In-Geol
2017-09-01
Genome annotation is a primary step in genomic research. To establish a light and portable prokaryotic genome annotation pipeline for use in individual laboratories, we developed a Shiny app package designated as "P-CAPS" (Prokaryotic Contig Annotation Pipeline Server). The package is composed of R and Python scripts that integrate publicly available annotation programs into a server application. P-CAPS is not only a browser-based interactive application but also a distributable Shiny app package that can be installed on any personal computer. The final annotation is provided in various standard formats and is summarized in an R markdown document. Annotation can be visualized and examined with a public genome browser. A benchmark test showed that the annotation quality and completeness of P-CAPS were reliable and compatible with those of currently available public pipelines.
The design and implementation of web mining in web sites security
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li
2003-06-01
Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thus be enhanced and the damage caused by illegal access can be avoided. First, a system for discovering patterns of information leakage in CGI scripts from Web log data was proposed. Second, these patterns were provided to system administrators so that they could modify their code and enhance Web site security. The following aspects were described: one is to combine the Web application log with the Web log to extract more information, so that Web data mining can be used to mine the Web log for information that a firewall and an intrusion detection system cannot find. Another is to propose an operation module of the Web site to enhance Web site security. For cluster server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
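The paper does not spell out the log features or clustering parameters; the sketch below only illustrates the general pattern of density-based clustering over per-session features extracted from a web log, with anomalous sessions falling outside the dense clusters. The feature choice, toy values and DBSCAN parameters are assumptions for the example.

```python
# Sketch: density-based clustering of web-log sessions; feature extraction and
# DBSCAN parameters are illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Toy per-session features: [requests per minute, error ratio, distinct CGI scripts hit]
sessions = np.array([
    [12, 0.01, 3],
    [15, 0.02, 4],
    [14, 0.00, 3],
    [13, 0.01, 2],
    [16, 0.03, 5],
    [95, 0.40, 25],   # bursty, error-prone session -> likely probing for leaks
])

X = StandardScaler().fit_transform(sessions)
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X)

for features, label in zip(sessions, labels):
    tag = "outlier/suspicious" if label == -1 else f"cluster {label}"
    print(features, "->", tag)
```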
A simple tool for neuroimaging data sharing
Haselgrove, Christian; Poline, Jean-Baptiste; Kennedy, David N.
2014-01-01
Data sharing is becoming increasingly common, but despite encouragement and facilitation by funding agencies, journals, and some research efforts, most neuroimaging data acquired today are still not shared, owing to persistent political, financial, social, and technical barriers. In particular, few technical solutions exist for researchers who are not part of larger efforts with dedicated sharing infrastructures, and social barriers such as the time commitment required to share can keep data from becoming publicly available. We present a system for sharing neuroimaging data, designed to be simple to use and to provide benefit to the data provider. The system consists of a server at the International Neuroinformatics Coordinating Facility (INCF) and user tools for uploading data to the server. The primary design principle for the user tools is ease of use: the user identifies a directory containing Digital Imaging and Communications in Medicine (DICOM) data, provides their INCF Portal authentication, and provides identifiers for the subject and imaging session. The user tool anonymizes the data and sends it to the server. The server then runs quality control routines on the data, and the data and the quality control reports are made public. The user retains control of the data and may change the sharing policy as they need. The result is that in a few minutes of the user's time, DICOM data can be anonymized and made publicly available, and an initial quality control assessment can be performed on the data. The system is currently functional, and user tools and access to the public image database are available at http://xnat.incf.org/. PMID:24904398
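The actual INCF upload tool, its endpoints and the exact tags it clears are not detailed in the abstract; the sketch below only illustrates the anonymise-then-upload idea using pydicom and a hypothetical upload URL, credentials and field names.

```python
# Sketch of the anonymise-then-upload idea; the endpoint URL, credentials and
# the set of tags cleared by the real INCF tool are assumptions.
import io
import zipfile
from pathlib import Path

import pydicom
import requests

def anonymize(ds, subject_id: str):
    """Blank out directly identifying fields and substitute a study code."""
    for keyword in ("PatientName", "PatientBirthDate", "PatientAddress"):
        if hasattr(ds, keyword):
            setattr(ds, keyword, "")
    ds.PatientID = subject_id
    return ds

def upload_directory(dicom_dir: str, subject_id: str, session_id: str,
                     server: str = "https://example.org/upload",   # hypothetical
                     auth: tuple = ("user", "password")) -> None:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path in Path(dicom_dir).rglob("*.dcm"):
            ds = anonymize(pydicom.dcmread(path), subject_id)
            out = io.BytesIO()
            ds.save_as(out)
            zf.writestr(path.name, out.getvalue())
    buf.seek(0)
    requests.post(server, auth=auth, files={"archive": buf},
                  data={"subject": subject_id, "session": session_id}, timeout=60)

# upload_directory("/data/scan001", subject_id="SUBJ01", session_id="SES01")
```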
Decentralized session initiation protocol solution in ad hoc networks
NASA Astrophysics Data System (ADS)
Han, Lu; Jin, Zhigang; Shu, Yantai; Dong, Linfang
2006-10-01
With the fast development of ad hoc networks, SIP has attracted more and more attention in multimedia services. This paper proposes a new architecture to provide SIP service for ad hoc users even though no centralized SIP server is deployed. In this solution, we provide the SIP service through the introduction of two nodes: the Designated SIP Server (DS) and its Backup Server (BDS). The nodes of the ad hoc network designate the DS and BDS when they join the session node set and when certain pre-defined events occur. A new SIP message type called REGISTRAR is introduced so that nodes can send REGISTRAR messages to others to declare that they want to be the DS. Based on the IP information carried in the message, an algorithm that works like the DR and BDR election in the OSPF protocol is used to elect the DS and BDS SIP servers. Naturally, the DS is replaced by the BDS when the DS goes down for predictable or unpredictable reasons. To facilitate this, the DS registers with the BDS and transfers a backup of the SIP users' database. Considering the possibility that the DS or BDS may go down abruptly, a special policy is given. When there is neither a DS nor a BDS, a new election procedure is triggered, just as in the startup phase. The paper also describes how SIP works normally in the decentralized model, as well as an evaluation of its performance. All SIP-based sessions in the ad hoc network, such as DS voting, were tested in real experiments within a 500 m × 500 m square area in which about 30 random nodes were placed.
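The paper's exact election rules and tie-breaking are not given in the abstract; by analogy with the OSPF DR/BDR election it references, a minimal sketch could rank candidates by the IP address advertised in their REGISTRAR messages, with the highest becoming DS and the runner-up BDS. The ranking criterion and data are assumptions for illustration.

```python
# Sketch of a DS/BDS election analogous to OSPF DR/BDR election: the node
# advertising the highest IP address becomes the Designated SIP Server (DS)
# and the runner-up becomes its backup (BDS). Message handling and the actual
# protocol's tie-breaking rules are not reproduced here.
from ipaddress import ip_address

def elect_ds_bds(candidates):
    """candidates: iterable of dotted-quad IP strings taken from REGISTRAR messages."""
    ranked = sorted(candidates, key=ip_address, reverse=True)
    ds = ranked[0] if ranked else None
    bds = ranked[1] if len(ranked) > 1 else None
    return ds, bds

nodes = ["192.168.1.7", "192.168.1.23", "192.168.1.4"]
ds, bds = elect_ds_bds(nodes)
print("DS:", ds, "BDS:", bds)   # DS: 192.168.1.23  BDS: 192.168.1.7
```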
Liu, Ruiling; Bohac, David L; Gundel, Lara A; Hewett, Martha J; Apte, Michael G; Hammond, S Katharine
2014-01-01
Background Despite efforts to reduce exposure to secondhand smoke (SHS), only 5% of the world's population enjoy smoke-free restaurants and bars. Methods Lifetime excess risk (LER) of cancer death, ischaemic heart disease (IHD) death and asthma initiation among non-smoking restaurant and bar servers and patrons in Minnesota and the US were estimated using weighted field measurements of SHS constituents in Minnesota, existing data on tobacco use and multiple dose-response models. Results A continuous approach estimated a LER of lung cancer death (LCD) of 18×10⁻⁶ (95% CI 13 to 23×10⁻⁶) for patrons visiting only designated non-smoking sections, 80×10⁻⁶ (95% CI 66 to 95×10⁻⁶) for patrons visiting only smoking venues/sections and 802×10⁻⁶ (95% CI 658 to 936×10⁻⁶) for servers in smoking-permitted venues. An attributable-risk (exposed/non-exposed) approach estimated a similar LER of LCD, a LER of IHD death of about 10⁻² for non-smokers with average SHS exposure from all sources and a LER of asthma initiation of about 5% for servers with SHS exposure at work only. These risks correspond to 214 LCDs and 3001 IHD deaths among the general non-smoking population and 1420 new asthma cases among non-smoking servers in the US each year due to SHS exposure in restaurants and bars alone. Conclusions Health risks for patrons and servers from SHS exposure in restaurants and bars alone are well above the acceptable level. Restaurants and bars should be a priority in governments' efforts to create smoke-free environments and should not be exempt from smoking bans. PMID:23407112
PELE web server: atomistic study of biomolecular systems at your fingertips.
Madadkar-Sobhani, Armin; Guallar, Victor
2013-07-01
PELE, Protein Energy Landscape Exploration, our novel technology based on protein structure prediction algorithms and Monte Carlo sampling, is capable of modelling all-atom protein-ligand dynamical interactions in an efficient and fast manner, with a computational cost two orders of magnitude lower than that of traditional molecular dynamics techniques. PELE's heuristic approach generates trial moves based on protein and ligand perturbations followed by side chain sampling and global/local minimization. The collection of accepted steps forms a stochastic trajectory. Furthermore, several processors may be run in parallel towards a collective goal or to define several independent trajectories; the whole procedure has been parallelized using the Message Passing Interface. Here, we introduce the PELE web server, designed to make the whole process of running simulations easier and more practical by minimizing input file demands, providing a user-friendly interface and producing abstract outputs (e.g. interactive graphs and tables). The web server has been implemented in C++ using Wt (http://www.webtoolkit.eu) and MySQL (http://www.mysql.com). The PELE web server, accessible at http://pele.bsc.es, is free and open to all users with no login requirement.
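PELE's actual move generation (ligand perturbation, side-chain sampling, global/local minimization over an all-atom force field) is far more involved than can be shown here; the sketch below only illustrates the Metropolis-style accept/reject step that turns trial moves into a stochastic trajectory. The toy energy function, step size and temperature constant are placeholders, not PELE's implementation.

```python
# Illustrative Metropolis accept/reject loop; the energy model, move generator
# and temperature are placeholders, not PELE's actual algorithm.
import math
import random

KT = 0.6  # roughly kT in kcal/mol near room temperature; an assumed constant

def energy(x: float) -> float:
    """Toy 1-D double-well energy standing in for an all-atom force field."""
    return (x - 1.0) ** 2 * (x + 1.0) ** 2

def metropolis(steps: int = 1000, step_size: float = 0.3):
    x, e = 0.0, energy(0.0)
    trajectory = [x]
    for _ in range(steps):
        trial = x + random.uniform(-step_size, step_size)  # stands in for perturbation + minimization
        e_trial = energy(trial)
        if e_trial <= e or random.random() < math.exp(-(e_trial - e) / KT):
            x, e = trial, e_trial          # accepted step joins the stochastic trajectory
        trajectory.append(x)
    return trajectory

traj = metropolis()
print(f"{len(traj)} states, final x = {traj[-1]:.2f}")
```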
NASA Astrophysics Data System (ADS)
Satheendran, S.; John, C. M.; Fasalul, F. K.; Aanisa, K. M.
2014-11-01
Web geoservices are the natural evolution of Geographic Information Systems into a distributed environment accessed through a simple browser. They enable organizations to share domain-specific, rich and dynamic spatial information over the web. The present study attempted to design and develop a web-enabled GIS application for the School of Environmental Sciences, Mahatma Gandhi University, Kottayam, Kerala, India to publish various geographical databases to the public through its website. The development of this project is based upon open source tools and techniques, and the resulting portal site is platform independent. The web GIS framework 'GeoMoose' is utilized. Apache is used as the web server and the UMN MapServer is used as the map server for this project. The portal provides various customised tools to query the geographical database in different ways and to search for facilities in the geographical area such as banks, attractive places, hospitals and hotels. The portal site was tested with the geographical databases of two projects of the School: 1) the Tourism Information System for the Malabar region of Kerala State, consisting of the five northern districts, and 2) the geoenvironmental appraisal of the Athirappilly Hydroelectric Project, covering the entire Chalakkudy river basin.
Quality of service policy control in virtual private networks
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru
2004-04-01
This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination and grade of service, and advertises these prices to its VPN customers (users). As different QoS technologies can produce different levels of QoS, there are correspondingly different traffic classification rules and priority rules. The Internet service provider (ISP) may need to build complex mechanisms separately for each node. In order to reduce the burden of network configuration, we need to design policy control technologies. We consider mainly the directory server, policy server, policy manager and policy enforcers. The policy decision point (PDP) decides its control actions according to policy rules. In the network, the policy enforcement point (PEP) determines the network unit it controls. For IntServ and DiffServ, we adopt different policy control methods as follows: (1) In IntServ, traffic uses the Resource Reservation Protocol (RSVP) to guarantee network resources. (2) In DiffServ, the policy server controls the DiffServ code points and per-hop behavior (PHB), and its PDP distributes information to each network node. The policy server functions as follows: information searching, decision making, decision delivery and auto-configuration. In order to prove the effectiveness of QoS policy control, we perform corresponding simulations.
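As an illustration of the PDP/PEP split described above, the sketch below shows a policy decision point evaluating simple classification rules and handing a DiffServ code point (DSCP) to an enforcement point. The rule format, flow fields and chosen code points are assumptions for the example only, not the paper's policy model.

```python
# Sketch of a policy decision point mapping traffic classes to DiffServ code
# points; rule structure and DSCP values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    port: int

# (predicate, DSCP) pairs evaluated in order by the PDP.
POLICY_RULES = [
    (lambda f: f.port == 5060, 46),             # signalling/voice -> EF
    (lambda f: f.dst.startswith("10.1."), 26),  # premium customer subnet -> AF31
]
DEFAULT_DSCP = 0                                # best effort

def pdp_decide(flow: Flow) -> int:
    for predicate, dscp in POLICY_RULES:
        if predicate(flow):
            return dscp
    return DEFAULT_DSCP

def pep_enforce(flow: Flow) -> None:
    dscp = pdp_decide(flow)   # in practice the decision is delivered via a policy protocol
    print(f"{flow.src} -> {flow.dst}:{flow.port} marked with DSCP {dscp}")

pep_enforce(Flow("10.2.0.5", "10.1.0.9", 8080))
pep_enforce(Flow("10.2.0.5", "10.3.0.9", 5060))
```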
Hardware Assisted Stealthy Diversity (CHECKMATE)
2013-09-01
applicable across multiple architectures. Figure 29 shows an example of an attack against an interpreted environment with a Java executable, in which a user executes "/usr/bin/wget..."; the illustrated architectures include ARM, PPC, x86 and the Java VM. The test environment comprised Server 1 (administration), Server 2 (database, MySQL), Server 3 (web server, Mongoose), Server 4 (file server, SSH) and Server 5 (email server).
The Four Levels of Web Site Development Expertise.
ERIC Educational Resources Information Center
Ingram, Albert L.
2000-01-01
Discusses the design of Web pages and sites and proposes a four-level model of Web development expertise that can serve as a curriculum overview or as a plan for an individual's professional development. Highlights include page design, media use, client-side processing, server-side processing, and site structure. (LRW)
Conducting and Supporting a Goal-Based Scenario Learning Environment.
ERIC Educational Resources Information Center
Montgomery, Joel; And Others
1994-01-01
Discussion of goal-based scenario (GBS) learning environments focuses on a training module designed to prepare consultants with new skills in managing clients, designing user-friendly graphical computer interfaces, and working in a client/server computing environment. Transforming the environment from teaching focused to learning focused is…
The Air Force Academy Instructor Workstation (IWS): I. Design and Implementation.
ERIC Educational Resources Information Center
Gist, Thomas E.; And Others
1989-01-01
Discusses the design and implementation of a computer-controlled instructor workstation (IWS), including a videodisc player, that was developed at the Air Force Academy. System capabilities for lesson presentation, administrative functions, an authoring system, and a file server for courseware maintenance are explained. (seven references) (LRW)
Designing a Virtual-Reality-Based, Gamelike Math Learning Environment
ERIC Educational Resources Information Center
Xu, Xinhao; Ke, Fengfeng
2016-01-01
This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one each for storage and transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to the central database server are carried out over a secured internet connection. The information stored in the central database server is shown on the web page, which users can view over the internet. A dedicated and secured web and database server (https) is used to provide information security.
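The abstract states that data are encrypted with 256-bit AES before being sent to the remote server; a minimal sketch of that step using the Python cryptography package's AES-GCM primitive is shown below. The XML layout, key handling and record contents are illustrative assumptions, not the deployed system's implementation.

```python
# Sketch: AES-256 encryption of an XML tag-read record before transmission.
# Key management, the XML schema and the record contents are assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice provisioned out of band

def encrypt_record(xml_record: str, key: bytes) -> bytes:
    """Return nonce + ciphertext for one RFID read event serialized as XML."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, xml_record.encode("utf-8"), None)

def decrypt_record(blob: bytes, key: bytes) -> str:
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None).decode("utf-8")

record = "<read><tag>E200341201</tag><portal>P-07</portal><ts>2010-05-01T12:00:00</ts></read>"
blob = encrypt_record(record, key)
assert decrypt_record(blob, key) == record
print(f"encrypted {len(record)} bytes of XML into {len(blob)} bytes")
```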
Server-Side JavaScript Debugging: Viewing the Contents of an Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-04-21
JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modifications not only of the values of object components, but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can be difficult to keep track of the object's structure and values throughout program execution. What's needed is a simple way to view the current state of an object at any point during execution. A debug function is included in the Netscape server-side JavaScript environment. The function outputs the value(s) of the expression given as its argument in the JavaScript Application Manager's debug window [SSJS].
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj
2015-11-01
In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for the Telecare Medical Information System, in which the patient and the medical server exchange medical information over the Internet. We have found that most of these schemes are not usable for practical applications due to known security weaknesses. It is also worth noting that an unrestricted number of patients across the globe log in to the single medical server; therefore, the computation and maintenance overhead would be high and the server may fail to provide services. In this article, we have designed a medical system architecture and a standard mutual authentication scheme for a single medical server, in which the patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then analyzed the security of the scheme and its resilience to attacks. Moreover, we formally validated the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrated that the proposed scheme has a lower communication cost than the existing schemes in the literature. In addition, the computation cost of the proposed scheme is nearly equal to that of the existing schemes. The proposed scheme is not only resilient to different security attacks, but also provides efficient login, mutual authentication, session key agreement, verification and password update phases, along with password recovery.
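The full protocol (registration, login, mutual authentication, key agreement, password update and recovery) is not reproduced in the abstract. The sketch below only illustrates the generic challenge-response pattern such schemes build on, where both sides prove knowledge of a shared secret and derive a session key; it is a generic illustration with assumed message formats, not the authors' protocol.

```python
# Generic challenge-response mutual authentication with session key derivation.
# This is an illustrative pattern only, not the scheme proposed in the paper.
import hashlib
import hmac
import secrets

def prf(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

shared_secret = secrets.token_bytes(32)      # established during registration

# --- patient side -----------------------------------------------------------
patient_nonce = secrets.token_bytes(16)
patient_proof = prf(shared_secret, b"patient", patient_nonce)

# --- medical server side: verify the patient, answer with its own proof ------
assert hmac.compare_digest(patient_proof, prf(shared_secret, b"patient", patient_nonce))
server_nonce = secrets.token_bytes(16)
server_proof = prf(shared_secret, b"server", server_nonce, patient_nonce)

# --- patient verifies the server; both sides derive the same session key -----
assert hmac.compare_digest(server_proof, prf(shared_secret, b"server", server_nonce, patient_nonce))
session_key = prf(shared_secret, b"session", patient_nonce, server_nonce)
print("mutual authentication succeeded, session key:", session_key.hex()[:16], "...")
```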
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S
2015-11-01
In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from smart card stolen attack, meaning that an attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that the protocol is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as identity trace attack, new smart card issue attack, patient impersonation attack and medical server impersonation attack. In order to fix the mentioned security pitfalls, including the smart card stolen attack, this paper proposes an efficient remote mutual authentication protocol using smart cards. We have then simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, a rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the smart card stolen attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities, and observe that the proposed scheme is comparatively better than the related existing schemes.
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution; however, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work the Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and JSC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
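The trade-off described above (client-server versus publish-and-subscribe) can be made concrete with a minimal in-process publish-and-subscribe sketch: publishers push updates to a topic and every subscriber is notified directly, without polling a central server. The topic names and callback style are illustrative only and unrelated to the NDDS or BridgeVIEW APIs.

```python
# Minimal in-process publish-and-subscribe sketch illustrating direct
# node-to-node data delivery; topic names and values are illustrative only.
from collections import defaultdict
from typing import Callable, Dict, List

class Bus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, value) -> None:
        for callback in self._subscribers[topic]:
            callback(value)          # delivered directly to every subscriber

bus = Bus()
bus.subscribe("incinerator/temperature", lambda v: print("controller sees", v, "C"))
bus.subscribe("incinerator/temperature", lambda v: v > 900 and print("ALARM:", v, "C"))
bus.publish("incinerator/temperature", 870)
bus.publish("incinerator/temperature", 925)
```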
BEAM web server: a tool for structural RNA motif discovery.
Pietrosanto, Marco; Adinolfi, Marta; Casula, Riccardo; Ausiello, Gabriele; Ferrè, Fabrizio; Helmer-Citterich, Manuela
2018-03-15
RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often represented by thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of RNAs in input with a motif discovery procedure that is limited only by current secondary structure prediction accuracies. The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs associated with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding, which transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling folding and encoding of RNA sequences, giving users a choice of the preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified, the logo of each motif, its significance, a graphic representation and information about its position in the RNA molecules sharing it. The web server is freely available at http://beam.uniroma2.it/ and is implemented in NodeJS and Python, with all major browsers supported. marco.pietrosanto@uniroma2.it. Supplementary data are available at Bioinformatics online.
SFESA: a web server for pairwise alignment refinement by secondary structure shifts.
Tong, Jing; Pei, Jimin; Grishin, Nick V
2015-09-03
Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
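SFESA's actual scoring (a combination of profile-profile sequence score and template-derived contact energy) is not reproduced here; the sketch below only illustrates the core idea of trying local shifts of one alignment block against the template and keeping the best-scoring variant, with a simple identity-based substitution score standing in for the real scoring function. Sequences, block boundaries and the shift window are illustrative.

```python
# Sketch of evaluating local shifts of one alignment block; SFESA's combined
# profile-profile + contact-energy score is replaced by a toy identity score.
def block_score(query_block: str, template_block: str) -> int:
    """Placeholder score: +1 per identical residue pair, -1 otherwise."""
    return sum(1 if q == t else -1
               for q, t in zip(query_block, template_block) if q != "-" and t != "-")

def best_shift(query: str, template: str, start: int, length: int, max_shift: int = 3):
    """Try shifting the query block left/right against a fixed template block
    and return (shift, score) for the best-scoring variant."""
    template_block = template[start:start + length]
    best = (0, block_score(query[start:start + length], template_block))
    for shift in range(-max_shift, max_shift + 1):
        lo = start + shift
        if lo < 0 or lo + length > len(query):
            continue
        score = block_score(query[lo:lo + length], template_block)
        if score > best[1]:
            best = (shift, score)
    return best

query    = "MKVLAAGDERTLVKWLN"
template = "MKVLA-AGDERTLVKWL"
print(best_shift(query, template, start=6, length=8))
```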
An ontology-based telemedicine tasks management system architecture.
Nageba, Ebrahim; Fayn, Jocelyne; Rubel, Paul
2008-01-01
The recent developments in ambient intelligence and ubiquitous computing offer new opportunities for the design of advanced telemedicine systems providing high-quality services, anywhere, anytime. In this paper we present an approach for building an ontology-based, task-driven telemedicine system. The architecture is composed of a task management server, a communication server and a knowledge base enabling decision making that takes account of different telemedical concepts such as actors, resources, services and the Electronic Health Record. The final objective is to provide intelligent management of the different types of available human, material and communication resources.
TRSkit: A Simple Digital Library Toolkit
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Esler, Sandra L.
1997-01-01
This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is the person who must continuously and synchronously distribute anywhere from 100 to 100,000s of information units and does not have extensive resources to devote to the problem.
Intelligent Virtual Station (IVS)
NASA Technical Reports Server (NTRS)
2002-01-01
The Intelligent Virtual Station (IVS) is enabling the integration of design, training, and operations capabilities into an intelligent virtual station for the International Space Station (ISS). A viewgraph of the IVS Remote Server is presented.
MODster: Namespaces and Redirection for Earth Science Data
NASA Astrophysics Data System (ADS)
Frew, J.; Metzger, D.; Slaughter, P.
2005-12-01
MODster is a distributed, decentralized inventory server for Earth science data granules (standard units of data content and structure.) MODster connects data granule users (people who know which specific granule they want, but who don't know who has it or how to get it) with data granule providers (people or institutions that keep granules accessible online.) * If you're a provider, you can tell MODster which granules you have and where they live (i.e., their URLs.) * If you're a user, you can ask MODster for a granule, and it will transparently redirect your request to whomever has it. The key to making this work is a standard granule namespace. A granule namespace is a naming convention that associates particular names with particular granules, regardless of where those granules live. Different Earth science data products have their own granule namespaces. For example, in the MODIS granule namespace, the granule name "MOD43A2.A1998365.h5.v8.001.1999001090020.hdf" always refers to version 1 of the 5th horizontal and 8th vertical tile of the Level 3 16-day Bi-directional Reflectance Distribution Function product, acquired by the MODIS Terra sensor on 31 December 1998 and generated on 01 January 1999 at 9:00:20 AM. A MODster URL is simply a standard way of referring to a data product namespace and one of its granules. MODster URLs have the general form "http://server/namespace/granule" where "granule" is a granule name that conforms to a granule namespace, "namespace" is a MODster namespace, which is the name of a granule namespace whose conventions are known to MODster, and "server" is a MODster server, which is an HTTP server that can redirect namespace/granule requests to granule providers. A MODster URL with no granule component gets a description of the MODster namespace, its authority (the persons or institutions responsible for documenting and maintaining the naming convention), and also any services for that MODster namespace that the MODster server supports. Our current MODster implementation allows granule providers to explicitly register their granules, and can also crawl provider sites looking for granules whose names match specific rules or regular expressions.
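A MODster server essentially resolves "http://server/namespace/granule" paths to provider URLs with an HTTP redirect. The sketch below shows that behaviour with the Python standard library; the registry contents, provider URL and port are illustrative, not the actual MODster service.

```python
# Minimal sketch of namespace/granule redirection; the registry entries,
# provider URL and port are illustrative, not the actual MODster service.
from http.server import BaseHTTPRequestHandler, HTTPServer

# namespace -> granule -> provider URL, as registered by data providers.
REGISTRY = {
    "MODIS": {
        "MOD43A2.A1998365.h5.v8.001.1999001090020.hdf":
            "https://provider.example.org/archive/MOD43A2.A1998365.h5.v8.001.1999001090020.hdf",
    }
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/", 1)
        namespace = parts[0]
        granule = parts[1] if len(parts) > 1 else None
        target = REGISTRY.get(namespace, {}).get(granule) if granule else None
        if target:
            self.send_response(302)            # transparent redirect to the provider
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "unknown namespace or granule")

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```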
Network characteristics for server selection in online games
NASA Astrophysics Data System (ADS)
Claypool, Mark
2008-01-01
Online gameplay is impacted by the network characteristics of the players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents an analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out across the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. The analysis finds that most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
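The findings motivate a group server selection rule that bounds both the worst latency any member experiences and the latency spread (unfairness) between members. The sketch below illustrates one such rule over hypothetical latency measurements; the thresholds, server names and data are assumptions, not values from the paper.

```python
# Sketch: choose a game server for a group by bounding both the worst latency
# and the latency spread between players; thresholds and data are illustrative.
# latencies[server] = list of round-trip times (ms), one per group member.
latencies = {
    "server-a": [35, 60, 45],
    "server-b": [20, 150, 30],
    "server-c": [80, 85, 90],
}

MAX_LATENCY_MS = 100    # playability bound (first-person games would need less)
MAX_SPREAD_MS = 50      # fairness bound between group members

def eligible(rtts):
    return max(rtts) <= MAX_LATENCY_MS and (max(rtts) - min(rtts)) <= MAX_SPREAD_MS

candidates = {name: rtts for name, rtts in latencies.items() if eligible(rtts)}
best = min(candidates, key=lambda name: max(candidates[name]), default=None)
print("eligible:", sorted(candidates), "-> selected:", best)
```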
CID-miRNA: A web server for prediction of novel miRNA precursors in human genome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyagi, Sonika; Vaz, Candida; Gupta, Vipin
2008-08-08
microRNAs (miRNA) are a class of non-protein coding functional RNAs that are thought to regulate expression of target genes by direct interaction with mRNAs. miRNAs have been identified through both experimental and computational methods in a variety of eukaryotic organisms. Though these approaches have been partially successful, there is a need to develop more tools for detection of these RNAs, as they are also thought to be present in abundance in many genomes. In this report we describe a tool and a web server, named CID-miRNA, for identification of miRNA precursors in a given DNA sequence, utilising secondary structure-based filtering systems and an algorithm based on a stochastic context-free grammar trained on human miRNAs. CID-miRNA analyses a given sequence using a web interface for the presence of putative miRNA precursors, and the generated output lists all the potential regions that can form miRNA-like structures. It can also scan large genomic sequences for the presence of potential miRNA precursors in its stand-alone form. The web server can be accessed at http://mirna.jnu.ac.in/cidmirna/
Home media server content management
NASA Astrophysics Data System (ADS)
Tokmakoff, Andrew A.; van Vliet, Harry
2001-07-01
With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of the convergence of the triad communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy-to-use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline exploratory user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.
COMAN: a web server for comprehensive metatranscriptomics analysis.
Ni, Yueqiong; Li, Jun; Panagiotou, Gianni
2016-08-11
Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to facilitate researchers with less expertise in bioinformatics in answering microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.
Dhanyalakshmi, K H; Naika, Mahantesha B N; Sajeevan, R S; Mathew, Oommen K; Shafi, K Mohamed; Sowdhamini, Ramanathan; N Nataraja, Karaba
2016-01-01
Modern sequencing technologies are generating large volumes of information at the transcriptome and genome level. Translation of this information into biological meaning lags far behind, as a result of which a significant portion of the proteins discovered remain proteins of unknown function (PUFs). Attempts to uncover the functional significance of PUFs are limited by the lack of easy and high-throughput functional annotation tools. Here, we report an approach to assign putative functions to PUFs identified in the transcriptome of mulberry, a perennial tree commonly cultivated as the host of the silkworm. We utilized the mulberry PUFs generated from leaf tissues exposed to drought stress at the whole-plant level. A sequence- and structure-based computational analysis predicted the probable function of the PUFs. For rapid and easy annotation of PUFs, we developed an automated pipeline by integrating diverse bioinformatics tools, designated as the PUFs Annotation Server (PUFAS), which also provides a web service API (Application Programming Interface) for large-scale analysis up to a genome. The expression analysis of three selected PUFs annotated by the pipeline revealed abiotic stress responsiveness of the genes, and hence their potential role in stress acclimation pathways. The automated pipeline developed here could be extended to assign functions to PUFs from any organism. The PUFAS web server is available at http://caps.ncbs.res.in/pufas/ and the web service is accessible at http://capservices.ncbs.res.in/help/pufas.
Sealife: a semantic grid browser for the life sciences applied to the study of infectious diseases.
Schroeder, Michael; Burger, Albert; Kostkova, Patty; Stevens, Robert; Habermann, Bianca; Dieng-Kuntz, Rose
2006-01-01
The objective of Sealife is the conception and realisation of a semantic Grid browser for the life sciences, which will link the existing Web to the currently emerging eScience infrastructure. The SeaLife Browser will allow users to automatically link a host of Web servers and Web/Grid services to the Web content they are visiting. This will be accomplished using eScience's growing number of Web/Grid services and its XML-based standards and ontologies. The browser will identify terms in the pages being browsed through the background knowledge held in ontologies. Through the use of Semantic Hyperlinks, which link identified ontology terms to servers and services, the SeaLife Browser will offer a new dimension of context-based information integration. In this paper, we give an overview of the different components of the browser and their interplay. The SeaLife Browser will be demonstrated within three application scenarios in evidence-based medicine, literature & patent mining, and molecular biology, all relating to the study of infectious diseases. The three applications vertically integrate the molecule/cell, the tissue/organ and the patient/population level by covering the analysis of high-throughput screening data for endocytosis (the molecular entry pathway into the cell), the expression of proteins in the spatial context of tissue and organs, and a high-level library on infectious diseases designed for clinicians and their patients. For more information see http://www.biote.ctu-dresden.de/sealife.
A Conditions Data Management System for HEP Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laycock, P. J.; Dykstra, D.; Formica, A.
The conditions data infrastructures for both ATLAS and CMS have to deal with the management of several terabytes of data. Distributed computing access to these data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management with the aim of using this design for data-taking in Run 3. In the meantime other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity, there is particular interest in simplifying the payload storage. The conditions data management model is implemented in a small set of relational database tables. A prototype access toolkit consisting of an intermediate web server has been implemented, using standard technologies available in the Java community. Access is provided through a set of REST services for which the API has been described in a generic way using standard OpenAPI specifications, implemented in Swagger. Such a solution allows the automatic generation of client code and server stubs and further allows the backend technology to be changed transparently. An important advantage of using a REST API for conditions access is the possibility of caching identical URLs, addressing one of the biggest challenges that large distributed computing solutions impose on conditions data access, avoiding direct DB access by means of standard web proxy solutions.
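The concrete REST paths of the prototype are not listed in the abstract; the sketch below only shows the general pattern of fetching conditions payloads through cacheable URLs, with a local dictionary standing in for the web proxy cache the text describes. The URL layout, field names and server are assumptions, not the prototype's actual API.

```python
# Sketch of conditions data access through cacheable REST URLs; the URL layout,
# field names and server are assumptions, not the prototype's actual API.
import requests

BASE_URL = "https://conditions.example.org/api"   # hypothetical endpoint
_cache: dict = {}                                 # stands in for a web proxy cache

def get_json(url: str) -> dict:
    """Identical URLs are served from the cache instead of hitting the database."""
    if url not in _cache:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        _cache[url] = response.json()
    return _cache[url]

def payload_for(tag: str, run: int) -> dict:
    # Resolve the interval of validity for this run, then fetch the payload by hash.
    iov = get_json(f"{BASE_URL}/tags/{tag}/iovs?run={run}")
    return get_json(f"{BASE_URL}/payloads/{iov['payload_hash']}")

# payload = payload_for("beamspot_v1", run=312345)
```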
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers that perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.
Advanced Engineering Environment FY09/10 pilot project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamph, Jane Ann; Kiba, Grant W.; Pomplun, Alan R.
2010-06-01
The Advanced Engineering Environment (AEE) project identifies emerging engineering environment tools and assesses their value to Sandia National Laboratories and our partners in the Nuclear Security Enterprise (NSE) by testing them in our design environment. This project accomplished several pilot activities, including: the preliminary definition of an engineering bill of materials (BOM) based product structure in the Windchill PDMLink 9.0 application; an evaluation of the Mentor Graphics Data Management System (DMS) application for electrical computer-aided design (ECAD) library administration; and implementation and documentation of a Windchill 9.1 application upgrade. The project also supported the migration of legacy data from existing corporate product lifecycle management systems into new classified and unclassified Windchill PDMLink 9.0 systems. The project included two infrastructure modernization efforts: the replacement of two aging AEE development servers with reliable platforms for ongoing AEE project work; and the replacement of four critical application and license servers that support design and engineering work at the Sandia National Laboratories/California site.
76 FR 62431 - Notice of Issuance of Final Determination Concerning Certain Ethernet Switches
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... importing 7 Series Ethernet switches assembled in China. The switches are designed to interconnect servers... country of origin of Arista's 7048, 7050, 7100, 7124, and 7500 series (``7 Series'') local area network..., packaged, and prepared for shipping. Arista's EOS TM (Extensible Operating System) software is designed to...
NASA Technical Reports Server (NTRS)
Dhaliwal, Swarn S.
1997-01-01
An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM, which was designed to be used in standalone applications. Before TcmJava editors could be used as part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware layer (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit) compatible Web browser. The editor establishes a connection with a server by using ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of one or more CORBA objects, depending upon whether the data is to be made persistent on a single server or on multiple servers. The CORBA object providing the persistent data server is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.
MAGMA: analysis of two-channel microarrays made easy.
Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph
2007-07-01
The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
2009-01-01
Oracle 9i, 10g MySQL MS SQL Server MS SQL Server Operating System Supported Windows 2003 Server Windows 2000 Server (32 bit...WebStar (Mac OS X) SunOne Internet Information Services (IIS) Database Server Supported MS SQL Server MS SQL Server Oracle 9i, 10g...challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular
Measurement of Energy Performances for General-Structured Servers
NASA Astrophysics Data System (ADS)
Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong
2017-11-01
Energy consumption of servers in data centers is increasing rapidly with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, including voluntary labelling programs and mandatory energy performance standards, have been adopted or are being prepared in the US, the EU and China. However, the energy performance of servers and the testing methods for servers are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various components of servers on their energy performance are also analyzed. Based on a set of normalized workloads, the authors propose a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the energy performance testing methods for servers. The findings of the tests are discussed in the paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahern, S D
2003-06-10
We describe Merlot, a system for delivery of digital imagery over high speed networks. We describe various use cases, the client/server interaction, and the image and network codecs. We also describe some possible applications using Merlot and future work.
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, Joern; Linev, Sergey
2015-12-01
The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented with HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
Fault-tolerant back-up archive using an ASP model for disaster recovery
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Huang, H. K.; Cao, Fei; Documet, Luis; Sarti, Dennis A.
2002-05-01
A single point of failure in PACS during a disaster scenario is the main archive storage and server. When a major disaster occurs, it is possible to lose an entire hospital's PACS data. Few current PACS archives feature disaster recovery, and the designs are limited at best. Their drawbacks include the frequency with which the back-up is physically moved to an offsite facility, the operational costs associated with maintaining the back-up, the ease of use of performing the back-up consistently and efficiently, and the ease of use of performing PACS image data recovery. This paper describes a novel approach towards a fault-tolerant solution for disaster recovery of short-term PACS image data using an Application Service Provider (ASP) model for service. The ASP back-up archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost. A back-up archive server and RAID storage device are implemented offsite from the main PACS archive location. In the example of this particular hospital, it was determined that at least two months' worth of PACS image exams were needed for back-up. Clinical data from the hospital PACS are sent to this ASP storage server in parallel to the exams being archived in the main server. A disaster scenario was simulated and the PACS exams were sent from the offsite ASP storage server back to the hospital PACS. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity, such as Internet2, will be investigated. A disaster scenario was initiated, and the disaster recovery process using the ASP back-up archive server was successful in repopulating the clinical PACS within a short period of time. The ASP back-up archive was able to recover two months of PACS image data for comparison studies with no complex operational procedures. Furthermore, no image data loss was encountered during the recovery.
NASA Astrophysics Data System (ADS)
Suchacka, Grazyna
2005-02-01
The paper concerns a new research area, Quality of Web Service (QoWS). The need for QoWS is motivated by a still-growing number of Internet users, by the steady development and diversification of Web services, and especially by the popularization of e-commerce applications. The goal of the paper is a critical analysis of the literature concerning scheduling algorithms for e-commerce Web servers. The paper characterizes factors affecting the load of Web servers and discusses ways of improving their efficiency. Crucial QoWS requirements of the business Web server are identified: serving requests before their individual deadlines, supporting user session integrity, supporting different classes of users and minimizing the number of rejected requests. It is argued that meeting these requirements, and implementing them in an admission control (AC) and scheduling algorithm for the business Web server, is crucial to the functioning of e-commerce Web sites and the revenue generated by them. The paper presents the results of the literature analysis and discusses algorithms that implement these important QoWS requirements. The analysis showed that very few algorithms take the above-mentioned factors into consideration and that there is a need to design an algorithm implementing them.
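The surveyed algorithms differ in their details, which the abstract does not enumerate; the sketch below only illustrates the basic pattern suggested by the identified QoWS requirements: admit requests up to an estimated capacity and order them by session class and deadline, so that buying sessions are preferred and rejections fall on less critical requests. The classes, capacity model and request costs are assumptions, not any specific published algorithm.

```python
# Illustrative deadline-aware admission control with session classes; the
# classes, capacity model and request costs are assumptions.
CAPACITY = 1.0                                   # fraction of server capacity available
CLASS_PRIORITY = {"buying": 0, "browsing": 1}    # lower value = more important class

class Request:
    def __init__(self, rid, session_class, deadline, cost):
        self.rid, self.session_class = rid, session_class
        self.deadline, self.cost = deadline, cost    # cost = estimated capacity share

def admit_and_order(requests):
    """Admit requests in (class priority, earliest deadline) order until the
    estimated load would exceed capacity; the rest are rejected."""
    admitted, load = [], 0.0
    ordered = sorted(requests, key=lambda r: (CLASS_PRIORITY[r.session_class], r.deadline))
    for r in ordered:
        if load + r.cost <= CAPACITY:
            load += r.cost
            admitted.append(r.rid)
        # else: rejected -- ideally a browsing request rather than a buying one
    return admitted

reqs = [Request("r1", "browsing", 0.9, 0.5), Request("r2", "buying", 0.5, 0.4),
        Request("r3", "buying", 0.3, 0.4), Request("r4", "browsing", 0.2, 0.3)]
print(admit_and_order(reqs))    # ['r3', 'r2'] -- buying sessions admitted first
```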
Smart Cards and remote entrusting
NASA Astrophysics Data System (ADS)
Aussel, Jean-Daniel; D'Annoville, Jerome; Castillo, Laurent; Durand, Stephane; Fabre, Thierry; Lu, Karen; Ali, Asad
Smart cards are widely used to provide security in end-to-end communication involving servers and a variety of terminals, including mobile handsets and payment terminals. Sometimes, end-to-end server-to-smart-card security is not applicable, and smart cards must communicate directly with an application executing on a terminal, such as a personal computer, without communicating with a server. In this case, the smart card must somehow trust the terminal application before performing a secure operation it was designed for. This paper presents a novel method to remotely trust a terminal application from the smart card. For terminals such as personal computers, this method is based on an advanced secure device connected through USB and consisting of a smart card bundled with flash memory. This device, or USB dongle, can be used in the context of remote entrusting to secure portable applications conveyed in the dongle flash memory. White-box cryptography is used to set up the secure channel, and a mechanism based on a thumbprint is described to provide external authentication when session keys need to be renewed. Although not as secure as end-to-end server-to-smart-card security, remote entrusting with smart cards is easy to deploy for mass-market applications and can provide a reasonable level of security.
NASA Astrophysics Data System (ADS)
Boenisch, Holger; Froitzheim, Konrad
1999-12-01
The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between various receivers depending on their network access, service provider, and nationality, the QoS is also variable in time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams 'to order' and 'just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed such that commonly used components for media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to a resource-optimal presentation that is especially useful for the presentation of complex multimedia documents on a variety of output devices.
Alignment-Annotator web server: rendering and annotating sequence alignments.
Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas
2014-07-01
Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed on the server side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified with the graphical user interface. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied, and annotations can be added. Annotations can be made manually or imported (BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform, including Windows, Mac OS X, Linux, Android and iOS, in any standard web browser. Importantly, neither plugins nor Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Bower, J.C.; Burnett, R.A.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Downing, T.R.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS software package. This document also contains information on the following: software installation for the FEMIS data servers, communication server, mail server, and the emergency management workstations; distribution media loading and FEMIS installation validation and troubleshooting; and system management of FEMIS users, login privileges, and usage. The system administration utilities (tools), available in the FEMIS client software, are described for user accounts and site profile. This document also describes the installation and use of system and database administration utilities that will assist in keeping the FEMIS system running in an operational environment. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via telecommunications links.
A Browser-Server-Based Tele-audiology System That Supports Multiple Hearing Test Modalities.
Yao, Jianchu Jason; Yao, Daoyuan; Givens, Gregg
2015-09-01
Millions of global citizens suffering from hearing disorders have limited or no access to much-needed hearing healthcare. Although tele-audiology presents a solution to alleviate this problem, existing remote hearing diagnosis systems support only pure-tone tests, leaving speech and other test procedures unsupported due to the lack of software and hardware to enable the communication required between audiologists and their remote patients. This article presents a comprehensive remote hearing test system that integrates the two most needed hearing test procedures: a pure-tone audiogram and a speech test. This enhanced system is composed of a Web application server, an embedded smart Internet-Bluetooth(®) (Bluetooth SIG, Kirkland, WA) gateway (or console device), and a Bluetooth-enabled audiometer. Several graphical user interfaces and a relational database are hosted on the application server. The console device has been designed to support the tests and auxiliary communication between the local site and the remote site. The study was conducted at an audiology laboratory. Pure-tone audiogram and speech test results from volunteers tested with this tele-audiology system are comparable with results from the traditional face-to-face approach. This browser-server-based comprehensive tele-audiology system offers a flexible platform to expand hearing services to traditionally underserved groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, L.K.; Bower, J.C.; Burnett, R.A.
1999-06-29
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
SCOPE: a web server for practical de novo motif discovery.
Carlson, Jonathan M; Chakravarty, Arijit; DeZiel, Charles E; Gross, Robert H
2007-07-01
SCOPE is a novel parameter-free method for the de novo identification of potential regulatory motifs in sets of coordinately regulated genes. The SCOPE algorithm combines the output of three component algorithms, each designed to identify a particular class of motifs. Using an ensemble learning approach, SCOPE identifies the best candidate motifs from its component algorithms. In tests on experimentally determined datasets, SCOPE identified motifs with a significantly higher level of accuracy than a number of other web-based motif finders run with their default parameters. Because SCOPE has no adjustable parameters, the web server has an intuitive interface, requiring only a set of gene names or FASTA sequences and a choice of species. The most significant motifs found by SCOPE are displayed graphically on the main results page with a table containing summary statistics for each motif. Detailed motif information, including the sequence logo, PWM, consensus sequence and specific matching sites can be viewed through a single click on a motif. SCOPE's efficient, parameter-free search strategy has enabled the development of a web server that is readily accessible to the practising biologist while providing results that compare favorably with those of other motif finders. The SCOPE web server is at
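The ensemble step can be pictured with a short sketch (Python; the data structures and the idea of simply pooling scored candidates are placeholders, since SCOPE's actual component scoring is not reproduced here): each component finder contributes scored motif candidates, and the ensemble keeps the globally best-scoring ones.

# Hypothetical ensemble pooling of motif candidates from component algorithms.
def pool_motifs(component_results, top_n=5):
    """component_results: dict finder_name -> list of (motif, score) tuples."""
    pooled = [(score, motif, finder)
              for finder, hits in component_results.items()
              for motif, score in hits]
    pooled.sort(reverse=True)                      # highest scores first
    return [(motif, finder, score) for score, motif, finder in pooled[:top_n]]

# pool_motifs({"word_counter": [("TGACTC", 12.3)],
#              "mismatch_finder": [("TGAnTC", 10.8), ("CCAAT", 7.1)]})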
Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan
2016-01-01
Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. Alongside easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose Internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in Internet-based wireless control of a BCI based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables Internet-based wireless control of electrical home appliances through BCIs.
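A minimal sketch of such an embedded control web server is shown below (Python standard library only; the appliance names and URL convention are invented for illustration, and a real device would drive relays or a home-automation bus rather than an in-memory dictionary). The BCI application or a caregiver's browser would issue the HTTP requests.

# Hypothetical embedded control server: GET /toggle/<appliance> flips the state
# of a named appliance and reports the result as plain text.
from http.server import BaseHTTPRequestHandler, HTTPServer

APPLIANCES = {"lamp": False, "tv": False, "fan": False}

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "toggle" and parts[1] in APPLIANCES:
            APPLIANCES[parts[1]] = not APPLIANCES[parts[1]]
            status = 200
            body = f"{parts[1]} is now {'on' if APPLIANCES[parts[1]] else 'off'}"
        else:
            status, body = 404, "unknown appliance or command"
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()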
Characteristics and Energy Use of Volume Servers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchs, H.; Shehabi, A.; Ganeshalingam, M.
Servers' field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations, and likely the associated power-scaling trends, are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.
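The final modeling step can be illustrated with a small sketch (Python; all counts and wattages below are placeholders, not the paper's survey data): a stock-weighted average idle power is formed from per-segment installed bases and then used in a rough, idle-only annual-energy estimate.

# Hypothetical stock-weighted idle power and an idle-only energy upper bound.
def weighted_idle_power(segments):
    """segments: list of (installed_base_count, mean_idle_watts)."""
    total_servers = sum(count for count, _ in segments)
    return sum(count * watts for count, watts in segments) / total_servers

stock = [(8_000_000, 60.0),   # e.g. 1-socket volume servers (made-up numbers)
         (4_000_000, 95.0)]   # e.g. 2-socket volume servers (made-up numbers)
idle_watts = weighted_idle_power(stock)                        # about 71.7 W here
idle_only_twh = idle_watts * sum(c for c, _ in stock) * 8760 / 1e12
# print(f"{idle_watts:.1f} W average idle, {idle_only_twh:.1f} TWh/yr if always idle")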
NASA Astrophysics Data System (ADS)
Novak, Daniel M.; Biamonti, Davide; Gross, Jeremy; Milnes, Martin
2013-08-01
An innovative and visually appealing tool is presented for efficient all-vs-all conjunction analysis on a large catalogue of objects. The conjunction detection uses a nearest-neighbour search algorithm based on spatial binning and identification of pairs of objects in adjacent bins. This results in the fastest all-vs-all filtering the authors are aware of. The tool is built on a server-client architecture, where the server broadcasts the conjunction data and ephemerides to the client, while the client supports the user interface through a modern browser, without plug-ins. In order to make the tool flexible and maintainable, Java software technologies were used on the server side, including Spring, Camel, ActiveMQ and CometD. The user interface and visualisation are based on the latest web technologies: HTML5, WebGL, THREE.js. Importance has been given to the ergonomics and visual appeal of the software; in fact, certain design concepts have been borrowed from the gaming industry.
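A condensed sketch of the binning filter described above follows (Python; the cubic grid, the cell size, and the use of positions at a single common epoch are simplifying assumptions): objects are hashed into grid cells, and only pairs that share a cell or sit in adjacent cells become candidates for a detailed conjunction check.

# Hypothetical all-vs-all prefilter via spatial binning: only same-cell or
# adjacent-cell pairs survive, so most of the N^2 pairs are never examined.
from collections import defaultdict
from itertools import product

def conjunction_candidates(positions, cell_size_km):
    """positions: dict object_id -> (x, y, z) in km at a common epoch."""
    grid = defaultdict(list)
    for oid, (x, y, z) in positions.items():
        cell = (int(x // cell_size_km), int(y // cell_size_km), int(z // cell_size_km))
        grid[cell].append(oid)
    pairs = set()
    for cell, members in grid.items():
        for offset in product((-1, 0, 1), repeat=3):
            neighbour = tuple(c + o for c, o in zip(cell, offset))
            for a in members:
                for b in grid.get(neighbour, []):
                    if a < b:                     # report each candidate pair once
                        pairs.add((a, b))
    return pairs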
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.
Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron
2009-06-01
BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing using Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license along with additional documentation and a tutorial from (http://bioinf.nuigalway.ie).
Interactive Machine Learning at Scale with CHISSL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Grace, Emily A.; Volkova, Svitlana
We demonstrate CHISSL, a scalable client-server system for real-time interactive machine learning. Our system is capable of incorporating user feedback incrementally and immediately without a structured or pre-defined prediction task. Computation is partitioned between a lightweight web client and a heavyweight server. The server relies on representation learning and agglomerative clustering to learn a dendrogram, a hierarchical approximation of a representation space. The client uses only this dendrogram to incorporate user feedback into the model via transduction. Distances and predictions for each unlabeled instance are updated incrementally and deterministically, with O(n) space and time complexity. Our algorithm is implemented in a functional prototype, designed to be easy to use by non-experts. The prototype organizes the large amounts of data into recommendations. This allows the user to interact with actual instances by dragging and dropping to provide feedback in an intuitive manner. We applied CHISSL to several domains including cyber, social media, and geo-temporal analysis.
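The client-side transduction idea can be caricatured in a few lines (Python with NumPy; this is a heavy simplification, since the actual system propagates labels through a precomputed dendrogram rather than recomputing distances): each unlabeled instance takes the label of the nearest instance the user has labeled in the learned representation space.

# Simplified stand-in for dendrogram-based transduction: nearest labeled
# neighbour in the representation space decides each prediction.
import numpy as np

def transduce(X, user_labels):
    """X: (n, d) representation matrix; user_labels: dict row_index -> label."""
    labeled_idx = np.array(sorted(user_labels))
    predictions = {}
    for i in range(X.shape[0]):
        dists = np.linalg.norm(X[labeled_idx] - X[i], axis=1)
        nearest = int(labeled_idx[int(dists.argmin())])
        predictions[i] = user_labels[nearest]
    return predictions

# transduce(np.random.rand(100, 8), {3: "benign", 57: "suspicious"})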
Application of wireless networks-peer-to-peer information sharing
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Chaki, Suchismita; Kumar, AVN
2017-11-01
Peer-to-peer (P2P) communication and its applications have become a commonplace architecture in the wired network environment, but they have not yet been successfully adapted to the wireless environment. Unlike the traditional client-server framework, in a P2P framework each node can play the role of client as well as server simultaneously and exchange data or information with others. We aim to design an application which can adapt to wireless ad-hoc networks. Peer-to-peer communication can help people share their files (information, images, audio, video and so on) and communicate with each other without relying on a particular network infrastructure or being limited by data usage. Here, a central server helps the peers obtain information about the other peers in the network. Indeed, even without the Internet, devices can allow users to connect and communicate directly through short-range wireless protocols such as Wi-Fi.
VoIP attacks detection engine based on neural network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Slachta, Jiri
2015-05-01
Security is crucial for any system nowadays, especially communication systems. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP). It is an open protocol used by different kinds of applications, both open-source and proprietary. High penetration and its text-based nature have made SIP the number one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with attackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and subsequent evaluation can easily overwhelm the security administrator, typically in networks with a number of SIP servers, many users, and logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers detection of VoIP attacks through a distributed network of nodes. The gathered data are then analyzed by an aggregation server with an artificial neural network. The artificial neural network is a multilayer perceptron trained on a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are collected by a distributed network of detection nodes. Each node contains a honeypot application and a traffic monitoring mechanism. Aggregation of data from each node creates the input for the neural network. Automatic classification on a centralized server with a low false-positive rate reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic. It also maintains all detection nodes.
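A compact sketch of the classification stage is given below (Python with scikit-learn; the three traffic features and the toy training examples are invented, as the article's actual feature set is not reproduced here).

# Hypothetical SIP-traffic classifier: a multilayer perceptron trained on
# per-source feature vectors aggregated from the honeypot/monitoring nodes.
import numpy as np
from sklearn.neural_network import MLPClassifier

# features: [requests_per_minute, distinct_extensions_probed, auth_failures]
X_train = np.array([[5, 1, 0], [9, 2, 1], [12, 1, 0],
                    [400, 120, 80], [650, 300, 150], [520, 210, 95]], dtype=float)
y_train = ["benign", "benign", "benign", "scan", "bruteforce", "bruteforce"]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[350.0, 90.0, 60.0]]))   # expected to be flagged as an attack class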
Mobile Assisted Security in Wireless Sensor Networks
2015-08-03
server from Google's DNS, Chromecast and the content server does the 3-way TCP Handshake which is followed by Client Hello and Server Hello TLS messages...utilized TLS v1.2, except NTP servers and Google's DNS server. In TLS v1.2, after the handshake, client and server send Client Hello and Server Hello ...Messages in order. In Client Hello messages, the client offers a list of Cipher Suites that it supports. Each Cipher Suite defines the key exchange algorithm
Zheng, Ling-Ling; Xu, Wei-Lin; Liu, Shun; Sun, Wen-Ju; Li, Jun-Hao; Wu, Jie; Yang, Jian-Hua; Qu, Liang-Hu
2016-07-08
tRNA-derived small RNA fragments (tRFs) are one class of small non-coding RNAs derived from transfer RNAs (tRNAs). tRFs play important roles in cellular processes and are involved in multiple cancers. High-throughput small RNA (sRNA) sequencing experiments can detect all the cellularly expressed sRNAs, including tRFs. However, distinguishing genuine tRFs from RNA fragments generated by random degradation remains a major challenge. In this study, we developed an integrated web-based computing system, tRF2Cancer, to accurately identify tRFs from sRNA deep-sequencing data and evaluate their expression in multiple cancers. A binomial test was introduced to evaluate whether reads from a small RNA-seq data set represent tRFs or degraded fragments. A classification method was then used to annotate the types of tRFs based on their sites of origin in pre-tRNA or mature tRNA. We applied the pipeline to analyze 10,991 data sets from 32 types of cancers and identified thousands of expressed tRFs. A tool called 'tRFinCancer' was developed to help users inspect the expression of tRFs across different types of cancers. Another tool called 'tRFBrowser' shows both the sites of origin and the distribution of chemical modification sites in tRFs on their source tRNA. The tRF2Cancer web server is available at http://rna.sysu.edu.cn/tRFfinder/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
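The binomial test at the core of the pipeline can be sketched as follows (Python with SciPy; the uniform-start null model and the numbers in the example are illustrative assumptions, not the published parameterization).

# If reads were random degradation products, a given start position on a tRNA of
# length L would capture each read with probability about 1/L; an excess of reads
# sharing one start position is evidence for a genuine tRF.
from scipy.stats import binomtest

def trf_pvalue(reads_at_start, total_reads, trna_length):
    p_null = 1.0 / trna_length
    return binomtest(reads_at_start, total_reads, p_null,
                     alternative="greater").pvalue

# e.g. trf_pvalue(reads_at_start=40, total_reads=200, trna_length=75)
# gives a tiny p-value, so these reads are unlikely to be random degradation.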
DspaceOgre 3D Graphics Visualization Tool
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.
2011-01-01
This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
Gundersen, Gregory W; Jones, Matthew R; Rouillard, Andrew D; Kou, Yan; Monteiro, Caroline D; Feldmann, Axel S; Hu, Kevin S; Ma'ayan, Avi
2015-09-15
Identification of differentially expressed genes is an important step in extracting knowledge from gene expression profiling studies. The raw expression data from microarray and other high-throughput technologies are deposited into the Gene Expression Omnibus (GEO) and served as Simple Omnibus Format in Text (SOFT) files. However, extracting and analyzing differentially expressed genes from GEO requires significant computational skills. Here we introduce GEO2Enrichr, a browser extension for extracting differentially expressed gene sets from GEO and analyzing those sets with Enrichr, an independent gene set enrichment analysis tool containing over 70 000 annotated gene sets organized into 75 gene-set libraries. GEO2Enrichr adds JavaScript code to GEO web pages; this code scrapes user-selected accession numbers and metadata, and then, with one click, users can submit this information to a web-server application that downloads the SOFT files, parses, cleans and normalizes the data, identifies the differentially expressed genes, and then pipes the resulting gene lists to Enrichr for downstream functional analysis. GEO2Enrichr opens a new avenue for adding functionality to major bioinformatics resources such as GEO by integrating tools and resources without the need for a plug-in architecture. Importantly, GEO2Enrichr helps researchers to quickly explore hypotheses with little technical overhead, lowering the barrier of entry for biologists by automating the data processing steps needed for knowledge extraction from the major repository GEO. GEO2Enrichr is an open source tool, freely available for installation as browser extensions at the Chrome Web Store and Firefox Add-ons. Documentation and a browser-independent web application can be found at http://amp.pharm.mssm.edu/g2e/. avi.maayan@mssm.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
XMM-Newton Remote Interface to Science Analysis Software: First Public Version
NASA Astrophysics Data System (ADS)
Ibarra, A.; Gabriel, C.
2011-07-01
We present the first public beta release of the XMM-Newton Remote Interface to Science Analysis (RISA) software, available through the official XMM-Newton web pages. In a nutshell, RISA is a web-based application that encapsulates the XMM-Newton data analysis software. The client identifies observations and creates XMM-Newton workflows. The server processes the client request, creates job templates and sends the jobs to a computer. RISA has been designed to help both non-expert and professional XMM-Newton users. Thanks to the predefined threads, non-expert users can easily produce light curves and spectra. On the other hand, expert users can use the full parameter interface to tune their own analysis. In both cases, the VO-compliant client/server design frees users from having to install any specific software to analyze XMM-Newton data.
OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software
NASA Astrophysics Data System (ADS)
Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.
2006-12-01
OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. This new server's Back-End component uses the server infrastructure developed by HAO for the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New Front-End modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of advanced THREDDS features such as attribute specification and inheritance, and custom catalogs which segue into automatically generated catalogs, while providing a default behavior which requires almost no catalog configuration.
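The run-time "data handler" idea can be sketched in a language-neutral way (Python here; the function names and module layout are hypothetical, not the actual Server4 interface): a handler module only has to expose a small, fixed set of functions, and the Back-End loads and dispatches to it at run time.

# Hypothetical plug-in loading: every handler module must provide can_handle()
# and read(); the server discovers and dispatches to handlers at run time.
import importlib

def load_handler(module_name):
    mod = importlib.import_module(module_name)    # e.g. "handlers.netcdf_handler"
    for required in ("can_handle", "read"):
        if not hasattr(mod, required):
            raise TypeError(f"{module_name} does not satisfy the handler interface")
    return mod

def serve(handler_modules, dataset_path, constraint):
    for name in handler_modules:
        handler = load_handler(name)
        if handler.can_handle(dataset_path):
            return handler.read(dataset_path, constraint)   # DAP-style response
    raise ValueError(f"no registered handler can read {dataset_path}")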
An extensible and lightweight architecture for adaptive server applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorton, Ian; Liu, Yan; Trivedi, Nihar
2008-07-10
Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality at which to scale an image based on the load of the server and the network connection speed. The experimental evaluation demonstrates the performance gains possible through adaptive behavior and the low overhead introduced by ASF.
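The case study's adaptation decision can be pictured with a tiny policy function (Python; the thresholds and returned settings are invented): the control component consults the current server load and client link speed, while the image-scaling business logic itself stays untouched.

# Hypothetical adaptation policy kept separate from the business logic.
def choose_rendering(cpu_load, link_kbps):
    if cpu_load > 0.8 or link_kbps < 256:
        return {"scale": 0.25, "jpeg_quality": 40}   # shed load, save bandwidth
    if cpu_load > 0.5 or link_kbps < 1024:
        return {"scale": 0.5, "jpeg_quality": 60}
    return {"scale": 1.0, "jpeg_quality": 85}        # full quality when lightly loaded

# choose_rendering(cpu_load=0.9, link_kbps=2000) -> {'scale': 0.25, 'jpeg_quality': 40}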
eCX: A Secure Infrastructure for E-Course Delivery.
ERIC Educational Resources Information Center
Yau, Joe C. K; Hui, Lucas C. K.; Cheung, Bruce; Yiu, S. M.
2003-01-01
Presents a mechanism, the Secure e-Course eXchange (eCX) designed to protect learning material from unauthorized dissemination, and shows how this mechanism can be integrated in the operation model of online learning course providers. The design of eCX is flexible to fit two operating models, the Institutional Server Model and the Corporate Server…
Energy Efficiency in Small Server Rooms: Field Surveys and Findings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh
Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures such as raising cooling set points and better airflow management, to more involved but cost-effective measures including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements, and IT and cooling efficiency, should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation, and in the implementation of energy efficiency measures in small server rooms.
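For reference, the PUE figures quoted above are total facility energy divided by IT equipment energy; a one-function sketch (Python, with made-up meter readings) shows the arithmetic.

# PUE = (IT energy + cooling + power-distribution losses + lighting) / IT energy
def pue(it_kwh, cooling_kwh, distribution_loss_kwh, lighting_kwh):
    total_kwh = it_kwh + cooling_kwh + distribution_loss_kwh + lighting_kwh
    return total_kwh / it_kwh

# pue(it_kwh=1000, cooling_kwh=700, distribution_loss_kwh=150, lighting_kwh=50)
# -> 1.9, i.e. 0.9 kWh of overhead for every kWh delivered to the servers.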
A genome-wide 20 K citrus microarray for gene expression analysis
Martinez-Godoy, M Angeles; Mauri, Nuria; Juarez, Jose; Marques, M Carmen; Santiago, Julia; Forment, Javier; Gadea, Jose
2008-01-01
Background Understanding of genetic elements that contribute to key aspects of citrus biology will impact future improvements in this economically important crop. Global gene expression analysis demands microarray platforms with high genome coverage. In recent years, genome-wide EST collections have been generated in citrus, opening the possibility of creating new tools for functional genomics in this crop plant. Results We have designed and constructed a publicly available genome-wide cDNA microarray that includes 21,081 putative unigenes of citrus. As a functional companion to the microarray, a web-browsable database [1] was created and populated with information about the unigenes represented in the microarray, including cDNA libraries, isolated clones, raw and processed nucleotide and protein sequences, and the results of all structural and functional annotation of the unigenes, such as general description, BLAST hits, putative Arabidopsis orthologs, microsatellites, putative SNPs, GO classification and PFAM domains. We have performed a Gene Ontology comparison with the full set of Arabidopsis proteins to estimate the genome coverage of the microarray. We have also performed microarray hybridizations to check its usability. Conclusion This new cDNA microarray replaces the first 7K microarray generated two years ago and allows gene expression analysis at a more global scale. We have followed a rational design to minimize cross-hybridization while maintaining its utility for different citrus species. Furthermore, we also provide access to a website with full structural and functional annotation of the unigenes represented in the microarray, along with the ability to use this site to directly perform gene expression analysis using standard tools at different publicly available servers. Finally, we show how this microarray offers a good representation of the citrus genome and present the usefulness of this genomic tool for global studies in citrus by using it to catalogue genes expressed in citrus globular embryos. PMID:18598343
Predictor - Predictive Reaction Design via Informatics, Computation and Theories of Reactivity
2017-10-10
into more complex and valuable molecules, but are limited by: 1. The extensive time it takes to design and optimize a synthesis 2. Multi-step...system. As it is fully compatible with the industry-standard SQL, designing a server-based system at a later time will be trivial. Producing a JAVA front...Report: PREDICTOR - Predictive REaction Design via Informatics, Computation and Theories of Reactivity. The goal of this program was to create a cyber
Web Application Design Using Server-Side JavaScript
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-02-01
This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.
NASA Astrophysics Data System (ADS)
Stepanov, Sergey
2013-03-01
X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be deployed as a software library or a data fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
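The "blocking model developed for telephone systems" is typically the Erlang-B formula; assuming that model, the sketch below (Python) computes the probability that a stream request is rejected and illustrates the economies of scale the paper describes, treating each concurrent stream as one occupied "trunk".

# Erlang-B blocking probability for a server (or partition) that can carry
# n_streams concurrent streams under an offered load of a Erlangs
# (a = request arrival rate x mean stream duration), via the standard recursion.
def erlang_b(n_streams, offered_erlangs):
    b = 1.0
    for k in range(1, n_streams + 1):
        b = (offered_erlangs * b) / (k + offered_erlangs * b)
    return b

# Economies of scale: one pooled server blocks less often than two partitions
# with the same total capacity and load, e.g. erlang_b(200, 180) < erlang_b(100, 90).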
Development of mobile platform integrated with existing electronic medical records.
Kim, YoungAh; Kim, Sung Soo; Kang, Simon; Kim, Kyungduk; Kim, Jun
2014-07-01
This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions.
Development of Mobile Platform Integrated with Existing Electronic Medical Records
Kim, YoungAh; Kang, Simon; Kim, Kyungduk; Kim, Jun
2014-01-01
Objectives This paper describes a mobile Electronic Medical Record (EMR) platform designed to manage and utilize the existing EMR and mobile application with optimized resources. Methods We structured the mEMR to reuse services of retrieval and storage in mobile app environments that have already proven to have no problem working with EMRs. A new mobile architecture-based mobile solution was developed in four steps: the construction of a server and its architecture; screen layout and storyboard making; screen user interface design and development; and a pilot test and step-by-step deployment. This mobile architecture consists of two parts, the server-side area and the client-side area. In the server-side area, it performs the roles of service management for EMR and documents and for information exchange. Furthermore, it performs menu allocation depending on user permission and automatic clinical document architecture document conversion. Results Currently, Severance Hospital operates an iOS-compatible mobile solution based on this mobile architecture and provides stable service without additional resources, dealing with dynamic changes of EMR templates. Conclusions The proposed mobile solution should go hand in hand with the existing EMR system, and it can be a cost-effective solution if a quality EMR system is operated steadily with this solution. Thus, we expect this example to be shared with hospitals that currently plan to deploy mobile solutions. PMID:25152837
Li, Jun; Roebuck, Paul; Grünewald, Stefan; Liang, Han
2012-07-01
An important task in biomedical research is identifying biomarkers that correlate with patient clinical data, and these biomarkers then provide a critical foundation for the diagnosis and treatment of disease. Conventionally, such an analysis is based on individual genes, but the results are often noisy and difficult to interpret. Using a biological network as the searching platform, network-based biomarkers are expected to be more robust and provide deep insights into the molecular mechanisms of disease. We have developed a novel bioinformatics web server for identifying network-based biomarkers that most correlate with patient survival data, SurvNet. The web server takes three input files: one biological network file, representing a gene regulatory or protein interaction network; one molecular profiling file, containing any type of gene- or protein-centred high-throughput biological data (e.g. microarray expression data or DNA methylation data); and one patient survival data file (e.g. patients' progression-free survival data). Given user-defined parameters, SurvNet will automatically search for subnetworks that most correlate with the observed patient survival data. As the output, SurvNet will generate a list of network biomarkers and display them through a user-friendly interface. SurvNet can be accessed at http://bioinformatics.mdanderson.org/main/SurvNet.
EXP-PAC: providing comparative analysis and storage of next generation gene expression data.
Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe
2012-07-01
Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store this data is becoming a key issue. In response we have developed EXP-PAC, a web-based software package for storage, management and analysis of gene expression and sequence data. Unique to this package is SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data, and analysis of gene expression data across experiments and species. This package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available which can be hosted on a Windows, Linux or Mac APACHE server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
Zafer, Maryam; Liu, Shiyuan; Katz, Craig L
2018-04-28
Harmful alcohol use encompasses a spectrum of habits, including heavy episodic drinking (HED) which increases the risk of acute alcohol-related harms. The prevalence of HED in Saint Vincent and the Grenadines (SVG) is 5.7% among the overall population aged 15 years and older and 10.2% among drinkers. Responsible Beverage Service interventions train alcohol servers to limit levels of intoxication attained by customers and decrease acute alcohol-related harms. The objectives of this study were to determine bartenders' and rum shopkeepers' knowledge of and attitudes toward problem drinking and willingness to participate in server training. Researchers used convenience and purposive sampling to recruit 30 participants from Barraouile, Kingstown, and Calliaqua to participate in semi-structured interviews designed to explore study objectives. Results and conclusions were derived from grounded theory analysis. Heavy episodic drinking is common but not stigmatized. Heavy drinking is considered a "problem" if the customer attains a level of disinhibition causing drunken and disruptive or injurious behavior. Bartenders and rum shopkeepers reported intervening with visibly intoxicated patrons and encouraging cessation of continued alcohol consumption. Participants cited economic incentives, prevention of alcohol-related harms, and personal morals as motivators to prevent drunkenness. Respondents acknowledged that encouraging responsible drinking was a legitimate part of their role and were favorable to server training. However, there were mixed opinions about the intervention's perceived efficacy given absent community-wide standards on preventing intoxication and limitations of existing alcohol policy. Given respondents' motivation and lack of standardized alcohol server training in SVG, mandated server training can be an effective strategy when promoted as one piece of a multi-component alcohol policy.
CINTEX: International Interoperability Extensions to EOSDIS
NASA Technical Reports Server (NTRS)
Graves, Sara J.
1997-01-01
A large part of the research under this cooperative agreement involved working with representatives of the DLR, NASDA, EDC, and NOAA-SAA data centers to propose a set of enhancements and additions to the EOSDIS Version 0 Information Management System (V0 IMS) Client/Server Message Protocol. Helen Conover of ITSL led this effort to provide for an additional geographic search specification (WRS Path/Row), data set- and data center-specific search criteria, search by granule ID, specification of data granule subsetting requests, data set-based ordering, and the addition of URLs to result messages. The V0 IMS Server Cookbook is an evolving document, providing resources and information to data centers setting up a V0 IMS Server. Under this Cooperative Agreement, Helen Conover revised, reorganized, and expanded this document, and converted it to HTML. Ms. Conover has also worked extensively with the IRE RAS data center, CPSSI, in Russia. She served as the primary IMS contact for IRE-CPSSI and as IRE-CPSSI's liaison to other members of IMS and Web Gateway (WG) development teams. Her documentation of IMS problems in the IRE environment (Sun servers and low network bandwidth) led to a general restructuring of the V0 IMS Client message polling system, to the benefit of all IMS participants. In addition to the IMS server software and documentation, which are generally available to CINTEX sites, Ms. Conover also provided database design documentation and consulting, order tracking software, and hands-on testing and debug assistance to IRE. In the final pre-operational phase of IRE-CPSSI development, she also supplied information on configuration management, including ideas and processes in place at the Global Hydrology Resource Center (GHRC), an EOSDIS data center operated by ITSL.
Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa
2010-12-13
In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.
Home medical monitoring network based on embedded technology
NASA Astrophysics Data System (ADS)
Liu, Guozhong; Deng, Wenyi; Yan, Bixi; Lv, Naiguang
2006-11-01
A remote medical monitoring network for long-term monitoring of physiological variables would be helpful for the recovery of patients, as people are monitored under more comfortable conditions. Furthermore, long-term monitoring would be beneficial for investigating slowly developing deterioration in a subject's wellness status and providing medical treatment as soon as possible. The home monitor runs on an embedded microcomputer, the Rabbit3000, and interfaces with different medical monitoring modules through serial ports. The network, based on an asymmetric digital subscriber line (ADSL) or local area network (LAN), is established, and a client-server model, in which each embedded home medical monitor is a client and the monitoring center is the server, is applied to the system design. The client is able to provide its information to the server when the client's request for connection to the server is permitted. The monitoring center focuses on the management of the communications, the acquisition of medical data, and the visualization and analysis of the data, etc. A diagnostic model of sleep apnea syndrome is built based on ECG, heart rate, respiration wave, blood pressure, oxygen saturation, and air temperature of the mouth or nasal cavity, so sleep status can be analyzed from physiological data acquired while people sleep. The remote medical monitoring network based on embedded micro-internetworking technology has the advantages of lower price, convenience and feasibility, which have been demonstrated with the prototype.
Yu, Jinchao; Vavrusa, Marek; Andreani, Jessica; Rey, Julien; Tufféry, Pierre; Guerois, Raphaël
2016-01-01
The structural modeling of protein–protein interactions is key in understanding how cell machineries cross-talk with each other. Molecular docking simulations provide efficient means to explore how two unbound protein structures interact. InterEvDock is a server for protein docking based on a free rigid-body docking strategy. A systematic rigid-body docking search is performed using the FRODOCK program and the resulting models are re-scored with InterEvScore and SOAP-PP statistical potentials. The InterEvScore potential was specifically designed to integrate co-evolutionary information in the docking process. InterEvDock server is thus particularly well suited in case homologous sequences are available for both binding partners. The server returns 10 structures of the most likely consensus models together with 10 predicted residues most likely involved in the interface. In 91% of all complexes tested in the benchmark, at least one residue out of the 10 predicted is involved in the interface, providing useful guidelines for mutagenesis. InterEvDock is able to identify a correct model among the top10 models for 49% of the rigid-body cases with evolutionary information, making it a unique and efficient tool to explore structural interactomes under an evolutionary perspective. The InterEvDock web interface is available at http://bioserv.rpbs.univ-paris-diderot.fr/services/InterEvDock/. PMID:27131368
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, R. K.
2007-04-04
A Perl module designed to read and parse the voluminous set of event or accounting log files produced by a Portable Batch System (PBS) server. This module can filter on date-time and/or record type. The data can be returned in a variety of formats.
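As a hedged illustration only (the record layout shown here, a semicolon-separated date-time, record type, job id and key=value attributes, is an assumption about typical PBS accounting logs, and the function names are invented rather than part of the Perl module described above), a minimal Python sketch of the same filtering idea:

# Minimal sketch: parse PBS-style accounting records and filter them.
# Assumed layout: "mm/dd/yyyy hh:mm:ss;TYPE;job_id;key=value key=value ..."
from datetime import datetime

def parse_record(line):
    when, rtype, jobid, attrs = line.rstrip("\n").split(";", 3)
    fields = dict(kv.split("=", 1) for kv in attrs.split() if "=" in kv)
    return {"time": datetime.strptime(when, "%m/%d/%Y %H:%M:%S"),
            "type": rtype, "job": jobid, **fields}

def filter_records(path, rtype=None, start=None, end=None):
    with open(path) as fh:
        for line in fh:
            rec = parse_record(line)
            if rtype and rec["type"] != rtype:
                continue
            if start and rec["time"] < start:
                continue
            if end and rec["time"] > end:
                continue
            yield rec

# Example: print the job id of every job-end ("E") record in one log file.
# for rec in filter_records("20230501", rtype="E"):
#     print(rec["job"], rec.get("user"))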
Instruction manual for operating the Sensys System for temporary traffic counts
DOT National Transportation Integrated Search
2010-01-01
This instruction manual provides information and the procedures for using the Sensys System, which was initially designed to operate in a server controlled network, for temporary traffic counts. The instructions will allow the user to fully understan...
Saadi, Mahdiye; Karkhah, Ahmad; Nouri, Hamid Reza
2017-07-01
Current investigations have demonstrated that a multi-epitope peptide vaccine targeting multiple antigens could be considered an ideal approach for prevention and treatment of brucellosis. According to the latest findings, the most effective immunogenic antigens of Brucella for inducing immune responses include Omp31, BP26, BLS, DnaK and L7-L12. Therefore, in the present study, an in silico approach was used to design a novel multi-epitope vaccine to elicit a desirable immune response against brucellosis. First, five novel T-cell epitopes were selected from Omp31, BP26, BLS, DnaK and L7-L12 proteins using different servers. In addition, helper epitopes selected from Tetanus toxin fragment C (TTFrC) were applied to induce CD4+ helper T lymphocyte (HTL) responses. Selected epitopes were fused together by GPGPG linkers to facilitate the immune processing and epitope presentation. Moreover, cholera toxin B (CTB) was linked to the N-terminus of the vaccine construct as an adjuvant by using an EAAAK linker. A multi-epitope vaccine was designed based on the predicted epitopes and was 377 amino acid residues in length. Then, the physico-chemical properties, secondary and tertiary structures, stability, intrinsic protein disorder, solubility and allergenicity of this multi-epitope vaccine were assessed using immunoinformatics tools and servers. Based on the obtained results, a soluble, non-allergenic protein with a molecular weight of 40.59 kDa was constructed. Expasy ProtParam classified this chimeric protein as a stable protein, and 89.8% of the residues of the constructed vaccine were located in favored regions of the Ramachandran plot. Furthermore, this multi-epitope peptide vaccine was able to strongly induce T cell and B-cell mediated immune responses. In conclusion, immunoinformatics analysis indicated that this multi-epitope peptide vaccine can be effectively expressed and potentially be used for prophylactic or therapeutic uses against brucellosis. Copyright © 2017 Elsevier B.V. All rights reserved.
AMON: Transition to real-time operations
NASA Astrophysics Data System (ADS)
Cowen, D. F.; Keivani, A.; Tešić, G.
2016-04-01
The Astrophysical Multimessenger Observatory Network (AMON) will link the world's leading high-energy neutrino, cosmic-ray, gamma-ray and gravitational wave observatories by performing real-time coincidence searches for multimessenger sources from the observatories' subthreshold data streams. The resulting coincidences will be distributed to interested parties in the form of electronic alerts for real-time follow-up observation. We will present the science case, design elements, current and projected partner observatories, status of the AMON project, and an initial AMON-enabled analysis. The prototype of the AMON server has been online since August 2014, processing archival data. Currently, we are deploying new high-uptime servers and will be ready to start issuing alerts as early as winter 2015/16.
Web-Accessible Scientific Workflow System for Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roelof Versteeg; Roelof Versteeg; Trevor Rowe
2006-03-01
We describe the design and implementation of a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server side data management and information visualization through flexible browser based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third party applications which are invoked by the back-end using webservices. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
Web-based segmentation and display of three-dimensional radiologic image data.
Silverstein, J; Rubenstein, J; Millman, A; Panko, W
1998-01-01
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
NASA Astrophysics Data System (ADS)
Rezvani, Mohammad Hossein; Analoui, Morteza
2010-11-01
We have designed a competitive economical mechanism for application level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as producers of the economy. Also, the end-users can be viewed as consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds. So, all allocations will be Pareto optimal in the sense that the social welfare of the users is maximized.
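The abstract does not give the actual pricing rule, so the following is only a hedged sketch of the general idea of price regulation toward equilibrium: a simple tatonnement loop in Python with made-up demand and supply functions, raising a service's price while demand exceeds supply.

# Illustrative tatonnement: adjust a service price in proportion to excess demand.
def demand(price):          # toy consumer response (assumption, not from the paper)
    return max(0.0, 10.0 - 2.0 * price)

def supply(price):          # toy producer response (assumption, not from the paper)
    return 3.0 * price

price, step = 1.0, 0.05
for _ in range(200):
    excess = demand(price) - supply(price)
    price += step * excess  # raise the price when demand exceeds supply
    if abs(excess) < 1e-6:
        break
print(round(price, 3))      # converges to the market-clearing price, 2.0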
A web-based solution for 3D medical image visualization
NASA Astrophysics Data System (ADS)
Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo
2015-03-01
In this presentation, we present a web-based 3D medical image visualization solution which enables interactive large medical image data processing and visualization over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to traditional local visualization solution, our solution doesn't require the users to install extra software or download the whole volume dataset from PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.
Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R
2001-06-01
To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end-goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open source DTF software that will include function and interface design decisions from community participation on the website forums.
Development of an E-mail Application Seemit and its Utilization in an Information Literacy Course
NASA Astrophysics Data System (ADS)
Kita, Toshihiro; Miyazaki, Makoto; Nakano, Hiroshi; Sugitani, Kenichi; Akiyama, Hidenori
We have developed a simple e-mail application named Seemit which is designed for use in information literacy courses. It has the necessary and sufficient functionality of an e-mail application and has been developed to make it easy to learn the basic operations and mechanisms of e-mail transfer. It is equipped with a function to automatically configure the user's SMTP/POP servers, e-mail address, etc. The process of transferring e-mail via SMTP and POP can be demonstrated step by step, showing the actual messages passed during the client-server interaction. We have utilized Seemit in a university-wide information literacy course which enrolls about 1800 students.
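The hostnames, port numbers and credentials below are placeholders, and the snippet is not Seemit itself; it is a small Python sketch of the kind of step-by-step SMTP send and POP3 retrieve dialogue the abstract says the application demonstrates.

# Illustrative SMTP send and POP3 retrieve (hosts and credentials are placeholders).
import smtplib, poplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "student@example.ac.jp", "teacher@example.ac.jp", "test"
msg.set_content("Hello from the information literacy course.")

with smtplib.SMTP("smtp.example.ac.jp", 587) as smtp:
    smtp.set_debuglevel(1)           # print each SMTP command and server response
    smtp.starttls()
    smtp.login("student", "password")
    smtp.send_message(msg)

pop = poplib.POP3("pop.example.ac.jp", 110)
pop.user("student")
pop.pass_("password")
count, _ = pop.stat()                # number of messages and total mailbox size
if count:
    for line in pop.retr(count)[1]:  # raw lines of the newest message
        print(line.decode(errors="replace"))
pop.quit()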
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured by using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients in an Intranet network and transformed via eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on commonly available browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo
2016-01-01
Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe the MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. PMID:27154272
RPPAML/RIMS: A metadata format and an information management system for reverse phase protein arrays
Stanislaus, Romesh; Carey, Mark; Deus, Helena F; Coombes, Kevin; Hennessy, Bryan T; Mills, Gordon B; Almeida, Jonas S
2008-01-01
Background Reverse Phase Protein Arrays (RPPA) are convenient assay platforms to investigate the presence of biomarkers in tissue lysates. As with other high-throughput technologies, substantial amounts of analytical data are generated. Over 1000 samples may be printed on a single nitrocellulose slide. Up to 100 different proteins may be assessed using immunoperoxidase or immunofluorescence techniques in order to determine relative amounts of protein expression in the samples of interest. Results In this report an RPPA Information Management System (RIMS) is described and made available with open source software. In order to implement the proposed system, we propose a metadata format known as reverse phase protein array markup language (RPPAML). RPPAML would enable researchers to describe, document and disseminate RPPA data. The complexity of the data structure needed to describe the results and the graphic tools necessary to visualize them require a software deployment distributed between a client and a server application. This was achieved without sacrificing interoperability between individual deployments through the use of an open source semantic database, S3DB. This data service backbone is available to multiple client side applications that can also access other server side deployments. The RIMS platform was designed to interoperate with other data analysis and data visualization tools such as Cytoscape. Conclusion The proposed RPPAML data format hopes to standardize RPPA data. Standardization of data would result in diverse client applications being able to operate on the same set of data. Additionally, having data in a standard format would enable data dissemination and data analysis. PMID:19102773
Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia
2016-09-09
Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpour of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent needs for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed real-time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.
Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James
2016-01-01
Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.
Design of smart home gateway based on Wi-Fi and ZigBee
NASA Astrophysics Data System (ADS)
Li, Yang
2018-04-01
With the increasing demands of the home lifestyle, traditional smart home products have been unable to meet users' needs. Aiming at the complex wiring, high cost and difficult operation of traditional smart home systems, this paper designs a home gateway for a smart home system based on Wi-Fi and ZigBee. This paper first gives a smart home system architecture based on a cloud server, Wi-Fi and ZigBee. This architecture enables users to access the smart home system remotely from the Internet through the cloud server or locally through Wi-Fi at home. It also offers the flexibility and low cost of ZigBee wireless networking for home equipment. This paper analyzes the functional requirements of the home gateway and designs a modular hardware architecture based on the RT5350 wireless gateway module and the CC2530 ZigBee coordinator module. It also designs the software of the home gateway, including the gateway master program and the ZigBee coordinator program. Finally, the smart home system and home gateway are tested in two kinds of network environments, the internal network and the external network. The test results show that the designed home gateway meets the requirements, supports remote and local access, supports multiple users, supports information security technology, and can report equipment status information in a timely manner.
Plaisier, Christopher L; Bare, J Christopher; Baliga, Nitin S
2011-07-01
Transcriptome profiling studies have produced staggering numbers of gene co-expression signatures for a variety of biological systems. A significant fraction of these signatures will be partially or fully explained by miRNA-mediated targeted transcript degradation. miRvestigator takes as input lists of co-expressed genes from Caenorhabditis elegans, Drosophila melanogaster, G. gallus, Homo sapiens, Mus musculus or Rattus norvegicus and identifies the specific miRNAs that are likely to bind to 3' un-translated region (UTR) sequences to mediate the observed co-regulation. The novelty of our approach is the miRvestigator hidden Markov model (HMM) algorithm which systematically computes a similarity P-value for each unique miRNA seed sequence from the miRNA database miRBase to an overrepresented sequence motif identified within the 3'-UTR of the query genes. We have made this miRNA discovery tool accessible to the community by integrating our HMM algorithm with a proven algorithm for de novo discovery of miRNA seed sequences and wrapping these algorithms into a user-friendly interface. Additionally, the miRvestigator web server also produces a list of putative miRNA binding sites within 3'-UTRs of the query transcripts to facilitate the design of validation experiments. The miRvestigator is freely available at http://mirvestigator.systemsbiology.net.
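The HMM scoring is specific to miRvestigator and is not reproduced here; as a heavily simplified, hedged sketch of the underlying comparison, the Python snippet below scores a 3'-UTR motif against the expected binding site of a miRNA seed (positions 2-8), using an invented miRNA name.

# Simplified seed matching: compare a 3'-UTR motif with the reverse complement
# of a miRNA seed (bases 2-8). This is not the miRvestigator HMM P-value.
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):                       # expected UTR site for a miRNA seed
    seed = mirna[1:8]                       # positions 2-8 of the mature miRNA
    return "".join(COMP[b] for b in reversed(seed))

def similarity(motif, site):                # fraction of identical positions
    n = min(len(motif), len(site))
    return sum(a == b for a, b in zip(motif[:n], site[:n])) / n

mirnas = {"miR-x (illustrative)": "UGAGGUAGUAGGUUGUAUAGUU"}
motif = "CUACCUC"                           # motif found in the query 3'-UTRs (illustrative)
for name, seq in mirnas.items():
    print(name, round(similarity(motif, seed_site(seq)), 2))   # 1.0 = perfect seed match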
MyShake - A smartphone app to detect earthquake
NASA Astrophysics Data System (ADS)
Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.
2015-12-01
We designed an Android app that harnesses the accelerometers in personal smartphones to record earthquake-shaking data for research, hazard information and warnings. The app has the function to distinguish earthquake shaking from daily human activities based on the different patterns behind the movements. It can also be triggered by the traditional earthquake early warning (EEW) system to record for a certain amount of time to collect earthquake data. When the app is triggered by earthquake-like movements, it sends the trigger information, containing the time and location of the trigger, back to our server; at the same time, it stores the waveform data on the local phone first and uploads it to our server later. Trigger information from multiple phones will be processed in real time on the server to find coherent signals and confirm earthquakes. Therefore, the app provides the basis to form a smartphone seismic network that can detect earthquakes and even provide warnings. A planned public roll-out of MyShake could collect millions of seismic recordings for large earthquakes in many regions around the world.
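The endpoint URLs and message fields below are hypothetical and not MyShake's actual interface; the Python sketch only illustrates the two-step flow described above: a small trigger message is sent immediately, and the locally stored waveform is uploaded later.

# Illustrative two-step reporting: small trigger message now, waveform upload later.
import json, time, urllib.request

def post_json(url, payload):
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req, timeout=10).status

trigger = {"device_id": "phone-123", "time": time.time(),
           "lat": 37.87, "lon": -122.26}              # trigger time and location
post_json("https://example.org/myshake/trigger", trigger)

# ... the waveform stays on the phone and is uploaded when convenient ...
waveform = {"device_id": "phone-123", "samples": [0.01, 0.02, -0.01]}
post_json("https://example.org/myshake/waveform", waveform)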
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.
Video personalization for usage environment
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.
2002-07-01
A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
Tang, Hua; Chen, Wei; Lin, Hao
2016-04-01
Immunoglobulins, also called antibodies, are a group of cell surface proteins which are produced by the immune system in response to the presence of a foreign substance (called antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in the postgenomic age, it is highly desirable to develop computational methods to identify immunoglobulins in a timely manner. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition into which nine physicochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web-server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web-server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
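The nine properties and parameter settings used by IGPred are not restated in the abstract, so the sketch below only illustrates, under assumed values (a single hydropathy scale, unnormalized, lambda = 3), how a Chou-type pseudo amino acid composition vector of 20 composition terms plus lambda sequence-order terms can be computed in Python.

# Sketch of a pseudo amino acid composition vector: 20 composition terms plus
# `lam` sequence-order correlation terms from one assumed hydropathy scale.
AA = "ACDEFGHIKLMNPQRSTVWY"
HYDRO = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9, 3.8,
                      1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2, -0.9, -1.3]))

def pseaac(seq, lam=3, w=0.05):
    comp = [seq.count(a) / len(seq) for a in AA]
    theta = [sum((HYDRO[seq[i]] - HYDRO[seq[i + k]]) ** 2 for i in range(len(seq) - k))
             / (len(seq) - k) for k in range(1, lam + 1)]
    denom = 1.0 + w * sum(theta)
    return [c / denom for c in comp] + [w * t / denom for t in theta]

print(len(pseaac("MKTIIALSYIFCLVFA")), "features")   # 20 + lam = 23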
IRaPPA: Information retrieval based integration of biophysical models for protein assembly selection
Moal, Iain H.; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A.; Fernández-Recio, Juan
2018-01-01
Motivation In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the design of drugs that prevent or mimic the interaction. Results Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native appearing in the top 10 solutions for up to 50% of complexes benchmarked, and up to 70% in the top 100. Availability IRaPPA has been implemented in the SwarmDock server (http://bmm.crick.ac.uk/~SwarmDock/), pyDock server (http://life.bsc.es/pid/pydockrescoring/) and ZDOCK server (http://zdock.umassmed.edu/), with code available on request. PMID:28200016
SeMPI: a genome-based secondary metabolite prediction and identification web server.
Zierep, Paul F; Padilla, Natàlia; Yonchev, Dimitar G; Telukunta, Kiran K; Klementz, Dennis; Günther, Stefan
2017-07-03
The secondary metabolism of bacteria, fungi and plants yields a vast number of bioactive substances. The constantly increasing amount of published genomic data provides the opportunity for an efficient identification of gene clusters by genome mining. Conversely, for many natural products with resolved structures, the encoding gene clusters have not been identified yet. Even though genome mining tools have become significantly more efficient in the identification of biosynthetic gene clusters, structural elucidation of the actual secondary metabolite is still challenging, especially due to as yet unpredictable post-modifications. Here, we introduce SeMPI, a web server providing a prediction and identification pipeline for natural products synthesized by modular type I polyketide synthases (PKS). In order to limit the possible structures of PKS products and to include putative tailoring reactions, a structural comparison with annotated natural products was introduced. Furthermore, a benchmark was designed based on 40 gene clusters with annotated PKS products. The web server of the pipeline (SeMPI) is freely available at: http://www.pharmaceutical-bioinformatics.de/sempi. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Choosing a CD-ROM Network Solution.
ERIC Educational Resources Information Center
Doering, David
1996-01-01
Discusses issues to consider in selecting a CD-ROM network solution, including throughput (speed of data delivery), security, access, servers, key features, training, jukebox support, documentation, and licenses. Reviews software products offered by Novell, Around Technology, Micro Design, Smart Storage, Microtest, Meridian, CD-Connection,…
Analysis of practical backoff protocols for contention resolution with multiple servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; MacKenzie, P.D.
Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. Our result is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
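The following Python sketch is not the authors' analysis; it is a rough simulation, under assumed parameters, of the setting described above: each client retries after a random delay drawn from a window that grows as a superlinear polynomial of the number of collisions its pending request has suffered.

# Rough simulation of polynomial backoff on one shared channel (one server).
# The arrival rate, exponent and waiting-time measure are illustrative assumptions.
import random

def simulate(clients=8, rate=0.05, steps=20000, exponent=1.5, seed=1):
    random.seed(seed)
    arrival = [None] * clients       # step at which the pending request was generated
    retry = [None] * clients         # step at which the client will (re)transmit
    collisions = [0] * clients       # collisions suffered by the pending request
    served = total_wait = 0
    for t in range(steps):
        for c in range(clients):
            if arrival[c] is None and random.random() < rate:
                arrival[c] = retry[c] = t            # new request, try immediately
        ready = [c for c in range(clients) if retry[c] is not None and retry[c] <= t]
        if len(ready) == 1:                          # lone transmission succeeds
            c = ready[0]
            served += 1
            total_wait += t - arrival[c]
            arrival[c] = retry[c] = None
            collisions[c] = 0
        elif len(ready) > 1:                         # collision: polynomial backoff
            for c in ready:
                collisions[c] += 1
                retry[c] = t + 1 + random.randint(0, int(collisions[c] ** exponent))
    return served, total_wait / max(served, 1)

print(simulate())   # (requests served, average waiting time in steps)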
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. The goal of NAVER is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of 3D virtual space and external modules. External modules are various input or output devices and applications on remote hosts. From the system point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. While the Device Server contains external modules requiring event-based communication for their integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.
Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure
Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.
2008-02-12
A method for maintaining full performance of a file system in the presence of a failure is provided. The file system having N storage devices, where N is an integer greater than zero and N primary file servers where each file server is operatively connected to a corresponding storage device for accessing files therein. The file system further having a secondary file server operatively connected to at least one of the N storage devices. The method including: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary so as to prevent a loss in performance and to provide each storage device with an operating file server.
Carroll, Adam J; Badger, Murray R; Harvey Millar, A
2010-07-14
Standardization of analytical approaches and reporting methods via community-wide collaboration can work synergistically with web-tool development to result in rapid community-driven expansion of online data repositories suitable for data mining and meta-analysis. In metabolomics, the inter-laboratory reproducibility of gas-chromatography/mass-spectrometry (GC/MS) makes it an obvious target for such development. While a number of web-tools offer access to datasets and/or tools for raw data processing and statistical analysis, none of these systems are currently set up to act as a public repository by easily accepting, processing and presenting publicly submitted GC/MS metabolomics datasets for public re-analysis. Here, we present MetabolomeExpress, a new File Transfer Protocol (FTP) server and web-tool for the online storage, processing, visualisation and statistical re-analysis of publicly submitted GC/MS metabolomics datasets. Users may search a quality-controlled database of metabolite response statistics from publicly submitted datasets by a number of parameters (eg. metabolite, species, organ/biofluid etc.). Users may also perform meta-analysis comparisons of multiple independent experiments or re-analyse public primary datasets via user-friendly tools for t-test, principal components analysis, hierarchical cluster analysis and correlation analysis. They may interact with chromatograms, mass spectra and peak detection results via an integrated raw data viewer. Researchers who register for a free account may upload (via FTP) their own data to the server for online processing via a novel raw data processing pipeline. MetabolomeExpress https://www.metabolome-express.org provides a new opportunity for the general metabolomics community to transparently present online the raw and processed GC/MS data underlying their metabolomics publications. Transparent sharing of these data will allow researchers to assess data quality and draw their own insights from published metabolomics datasets.
Experience with Adaptive Security Policies.
1998-03-01
Excerpt (section headings and text fragments recovered from the report): 3.1 Introduction; 3.2 Logical groupings of audited permission checks; 3.3 Auditing of system servers via microkernel snooping; 3.4 ... performed by servers other than the microkernel. Since altering each server to audit events would complicate the integration of new servers, a modification to the microkernel was implemented to allow the microkernel to audit the requests made of other servers. Both methods for enhancing audit ...
ECFS: A decentralized, distributed and fault-tolerant FUSE filesystem for the LHCb online farm
NASA Astrophysics Data System (ADS)
Rybczynski, Tomasz; Bonaccorsi, Enrico; Neufeld, Niko
2014-06-01
The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events", a large farm of x86-servers (~2000 nodes) has been put in place. These servers boot from and run from NFS; however, they use their local disks to temporarily store data which cannot be processed in real time ("data-deferring"). These events are subsequently processed when there are no live data coming in. The effective CPU power is thus greatly increased. This gain in CPU power depends critically on the availability of the local disks. For cost and power reasons, mirroring (RAID-1) is not used, leading to considerable operational headaches with failing disks and disk errors or server failures induced by faulty disks. To mitigate these problems and increase the reliability of the LHCb farm, while at the same time keeping cost and power consumption low, an extensive study of existing highly available and distributed file systems has been carried out. While many distributed file systems provide reliability by "file replication", none of the evaluated ones supports erasure algorithms. A decentralised, distributed and fault-tolerant "write once read many" file system has been designed and implemented as a proof of concept, with fault tolerance without the use of expensive - in terms of disk space - file replication techniques and a unified namespace as its main goals. This paper describes the design and the implementation of the Erasure Codes File System (ECFS) and presents the specialised FUSE interface for Linux. Depending on the encoding algorithm, ECFS will use a certain number of target directories as a backend to store the segments that compose the encoded data. When target directories are mounted via NFS/autofs, ECFS will act as a file system over network/block-level RAID over multiple servers.
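As a back-of-the-envelope illustration only (the actual ECFS encoding parameters are not stated above), the sketch below compares the raw-storage overhead and the number of tolerated target failures for plain replication versus a generic (k, m) erasure code, which is the trade-off motivating the design.

# Storage overhead of replication vs. a generic (k, m) erasure code:
# k data segments plus m parity segments tolerate the loss of any m targets.
def erasure_overhead(k, m):
    return (k + m) / k              # raw bytes stored per byte of user data

def replication_overhead(copies):
    return float(copies)            # n copies tolerate n - 1 lost targets

print("2x replication :", replication_overhead(2), "-> tolerates 1 failure")
print("(4, 2) erasure  :", erasure_overhead(4, 2), "-> tolerates 2 failures")
print("(8, 2) erasure  :", erasure_overhead(8, 2), "-> tolerates 2 failures")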
Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto
2014-04-01
Lectins (Ls) play an important role in many diseases such as different types of cancer, parasitic infections and other diseases. Interestingly, the Protein Data Bank (PDB) contains +3000 protein 3D structures with unknown function. Thus, we can in principle, discover new Ls mining non-annotated structures from PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We have performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model, which is able to discriminate 3D structure of Ls from other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7 % (for Ls), Specificity=87.6 % (non-active proteins), and Accuracy=92.5 % (for all proteins), considering altogether both the training and external prediction series. In mode 2, users can carry out an automatic retrieval of protein structures from PDB. We illustrated the use of this server, in operation mode 1, performing a data mining of PDB. We predicted Ls scores for +2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna
2015-04-01
The European Union ESPAS, Japanese IUGONET and GFZ ISDC data servers have been developed for the ingestion, archiving and distribution of geo and space science domain data. The main parts of the data managed by these data servers are related to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of and access to data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the semantic Web approach. Besides the mashup of domain and terminological ontologies, the options for connecting data managed by relational databases using D2R server and SPARQL technology will especially be addressed. A successful realization of the data server mashup will have a positive impact not only on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
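As a hedged sketch of the kind of lookup such a mashup enables, the Python snippet below queries a SPARQL endpoint with the SPARQLWrapper package; the endpoint URL and the vocabulary used in the query are placeholders, not the actual ESPAS, IUGONET or ISDC schemas.

# Illustrative SPARQL lookup against a (placeholder) endpoint, e.g. one exposed by D2R.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/d2r/sparql")   # placeholder endpoint
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?dataset ?title WHERE {
        ?dataset dct:title ?title .
        FILTER CONTAINS(LCASE(?title), "geomagnetic")
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])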
Honey Bee Colonies Remote Monitoring System.
Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús
2016-12-29
Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, the honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' work conditions and to obtain more reliable data. We designed a wireless sensor network to meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data in each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time.
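A minimal Python sketch of the store-and-forward behaviour described above; the endpoint, file name and reading fields are assumptions, not WBee's actual implementation. Readings that cannot be delivered to the next level are kept locally and flushed later.

# Minimal store-and-forward sketch: buffer readings locally when the upload to the
# next level (local or cloud server) fails, and flush the buffer once it succeeds.
import json, time, urllib.request

BUFFER = "wbee_buffer.jsonl"                    # local fallback storage

def try_upload(reading, url="https://example.org/wbee/ingest"):
    try:
        req = urllib.request.Request(url, data=json.dumps(reading).encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)
        return True
    except OSError:
        return False

def report(reading):
    if not try_upload(reading):                 # keep the data if the link is down
        with open(BUFFER, "a") as fh:
            fh.write(json.dumps(reading) + "\n")

def flush_buffer():
    try:
        with open(BUFFER) as fh:
            pending = [json.loads(line) for line in fh]
    except FileNotFoundError:
        return
    if pending and all(try_upload(r) for r in pending):
        open(BUFFER, "w").close()               # clear the buffer once delivered

report({"hive": 1, "t": time.time(), "temp_c": 34.2, "rh": 61.0, "weight_kg": 38.5})
flush_buffer()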
Discovering causal signaling pathways through gene-expression patterns
Parikh, Jignesh R.; Klinger, Bertram; Xia, Yu; Marto, Jarrod A.; Blüthgen, Nils
2010-01-01
High-throughput gene-expression studies result in lists of differentially expressed genes. Most current meta-analyses of these gene lists include searching for significant membership of the translated proteins in various signaling pathways. However, such membership enrichment algorithms do not provide insight into which pathways caused the genes to be differentially expressed in the first place. Here, we present an intuitive approach for discovering upstream signaling pathways responsible for regulating these differentially expressed genes. We identify consistently regulated signature genes specific for signal transduction pathways from a panel of single-pathway perturbation experiments. An algorithm that detects overrepresentation of these signature genes in a gene group of interest is used to infer the signaling pathway responsible for regulation. We expose our novel resource and algorithm through a web server called SPEED: Signaling Pathway Enrichment using Experimental Data sets. SPEED can be freely accessed at http://speed.sys-bio.net/. PMID:20494976
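The exact SPEED statistics are not restated here; the Python sketch below shows a generic overrepresentation test of pathway signature genes within a query gene list, using the hypergeometric tail probability from SciPy and invented gene names.

# Generic signature-gene overrepresentation test (hypergeometric tail probability).
from scipy.stats import hypergeom

def enrichment_p(query_genes, signature_genes, background_size):
    query, signature = set(query_genes), set(signature_genes)
    overlap = len(query & signature)
    # P(X >= overlap) when drawing len(query) genes from the background at random
    return hypergeom.sf(overlap - 1, background_size, len(signature), len(query))

signature = {"G1", "G2", "G3", "G4", "G5"}          # illustrative pathway signature genes
query = {"G1", "G2", "G3", "G9", "G10", "G11"}      # illustrative differentially expressed genes
print(enrichment_p(query, signature, background_size=20000))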
Triple-server blind quantum computation using entanglement swapping
NASA Astrophysics Data System (ADS)
Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua
2014-04-01
Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol where the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other and the client is almost classical since one does not require any quantum computational power, quantum memory, and the ability to prepare any quantum states and only needs to be capable of getting access to quantum channels.
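For readers unfamiliar with the primitive the protocol relies on, the following standard identity (not a statement of the protocol itself) shows why a Bell measurement on particles 2 and 3 of two independent Bell pairs leaves particles 1 and 4, which never interacted, in a Bell state:

\[
|\Phi^{+}\rangle_{12}\otimes|\Phi^{+}\rangle_{34}
  = \tfrac{1}{2}\bigl(
      |\Phi^{+}\rangle_{14}|\Phi^{+}\rangle_{23}
    + |\Phi^{-}\rangle_{14}|\Phi^{-}\rangle_{23}
    + |\Psi^{+}\rangle_{14}|\Psi^{+}\rangle_{23}
    + |\Psi^{-}\rangle_{14}|\Psi^{-}\rangle_{23}\bigr),
\]

so the outcome of the Bell measurement on (2, 3) determines, up to a known local correction, which Bell state is shared between (1, 4).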
NASA Technical Reports Server (NTRS)
Lyle, Stacey D.
2009-01-01
A software package has been designed to determine whether the rover(s) is/are within a set of boundaries or a specific area before granting access to critical geospatial information, using GPS signal structures as a means to authenticate mobile devices into a network wirelessly and in real time. The advantage is that the system only admits devices located within the designated geospatial boundaries or areas into the server.
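The abstract does not specify how the boundary test is implemented; one common way to realize such a geofence check is a ray-casting point-in-polygon test on the GPS fix, sketched below in Python with made-up coordinates (the coordinates are treated as planar, which is an approximation acceptable for small areas).

# Ray-casting test: is a GPS fix inside the authorized boundary polygon?
def inside(lat, lon, polygon):
    """polygon: list of (lat, lon) vertices in order."""
    hit = False
    n = len(polygon)
    for i in range(n):
        (y1, x1), (y2, x2) = polygon[i], polygon[(i + 1) % n]
        crosses = (y1 > lat) != (y2 > lat)
        if crosses and lon < (x2 - x1) * (lat - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

boundary = [(27.55, -97.88), (27.55, -97.86), (27.53, -97.86), (27.53, -97.88)]
print(inside(27.54, -97.87, boundary))   # True: grant access to the geospatial data
print(inside(27.60, -97.87, boundary))   # False: deny access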
Cross-Layer Resilience Exploration
2015-03-31
complex server-class systems) and any arbitrary fault model (permanent, transient, multi-bit, etc.). System Design Analysis: Using flip-flop-level fault injection, we rank the vulnerability of each flip-flop in the processor in terms of its likelihood to propagate faults [3]. This allows the ... hardened flip-flops, which are flip-flops designed to uphold the bit representation of their output circuit even under particle strikes [1, 6, 10]
Code of Federal Regulations, 2011 CFR
2011-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
Code of Federal Regulations, 2012 CFR
2012-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
Code of Federal Regulations, 2010 CFR
2010-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... generating digital signatures. Current information systems, Web servers, applications and workstation operating systems were designed to process, and use SHA-1 generated signatures. National Institute of... cryptographic keys, and more robust algorithms by December 2013. Government systems may begin to encounter...
Do You Know where Your Data Are?
ERIC Educational Resources Information Center
Bennett, Cedric
2006-01-01
Many of the information security appliances, devices, and techniques currently in use are designed to keep unwanted users and Internet traffic away from important information assets by denying unauthorized access to servers, databases, networks, storage media, and other underlying technology resources. These approaches employ firewalls, intrusion…
Concept of operations for the use of connected vehicle data in road weather applications.
DOT National Transportation Integrated Search
2006-01-30
The Computer Aided Dispatch (CAD) computer system went into live operation January 2002. System design involved creating a distributed network, which involved setting up a central main server at the Idaho State Police (ISP) headquarters located in Me...
A Multiserver Biometric Authentication Scheme for TMIS using Elliptic Curve Cryptography.
Chaudhry, Shehzad Ashraf; Khan, Muhammad Tawab; Khan, Muhammad Khurram; Shon, Taeshik
2016-11-01
Recently, several authentication schemes have been proposed for telecare medicine information systems (TMIS). Many of these schemes have been shown to be vulnerable to known attacks. Furthermore, numerous such schemes cannot be used in real-time scenarios because they assume a single authentication server for the whole globe. Very recently, Amin et al. (J. Med. Syst. 39(11):180, 2015) designed an authentication scheme for secure communication between a patient and a medical practitioner using a trusted central medical server. They claimed that their scheme satisfies all security requirements and emphasized its efficiency. However, the analysis in this article proves that the scheme designed by Amin et al. is vulnerable to stolen smart card and stolen verifier attacks. Furthermore, their scheme has scalability issues along with inefficient password change and password recovery phases. We then propose an improved scheme. The proposed scheme is more practical, secure and lightweight than Amin et al.'s scheme. The security of the proposed scheme is proved using the popular automated tool ProVerif.
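Neither Amin et al.'s scheme nor the improved scheme is reproduced here; the snippet below only illustrates the elliptic curve primitives (ECDSA signing and verification with the Python cryptography package) on which such TMIS authentication schemes are typically built.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # Generic ECC primitives of the kind multiserver TMIS schemes build on;
    # this is not the protocol analyzed in the paper.
    server_key = ec.generate_private_key(ec.SECP256R1())
    public_key = server_key.public_key()

    message = b"patient-id=42|timestamp=1700000000"   # hypothetical payload
    signature = server_key.sign(message, ec.ECDSA(hashes.SHA256()))

    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature valid")
    except InvalidSignature:
        print("signature rejected")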
Zhang, Melvyn W B; Ho, Roger C M
2017-01-01
Smartphones and their accompanying applications are now widely utilized in healthcare interventions. Before such tools are deployed, proof-of-concept feasibility studies and randomized trials are typically conducted to establish that they are efficacious. In the field of psychiatry, most current interventions compare smartphone-based intervention against conventional care, and there remains a paucity of research evaluating different forms of intervention delivered through a single smartphone application. In the field of nutrition, recent pioneering research has demonstrated how a multi-phasic randomized controlled trial can be conducted using a single smartphone application. Despite the innovativeness of that conceptualization, there remains a paucity of technical information about the underlying server design that would support a multi-phasic interventional trial. The aim of the current technical note is therefore to share insights into an innovative server design that enables the delivery of multi-phasic trials.
WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web
NASA Astrophysics Data System (ADS)
Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio
1999-03-01
We present here the key design concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web, and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI-Bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that it is built with "off the shelf" products: it does not require a specifically designed program, since any Netscape browser, one of the most popular browsers available on the market, is sufficient. Another possible use of the system is to implement "guided automatic tours" through several pages, or internal news bulletins on Intranets: the company Web server can broadcast relevant information to all employees' browsers.
User-centric incentive design for participatory mobile phone sensing
NASA Astrophysics Data System (ADS)
Gao, Wei; Lu, Haoyang
2014-05-01
Mobile phone sensing is a critical underpinning of pervasive mobile computing, and is one of the key factors for improving people's quality of life in modern society via collective utilization of the on-board sensing capabilities of people's smartphones. The increasing demands for sensing services and ambient awareness in mobile environments highlight the necessity of active participation of individual mobile users in sensing tasks. User incentives for such participation have been continuously offered from an application-centric perspective, i.e., as payments from the sensing server, to compensate users' sensing costs. These payments, however, are manipulated to maximize the benefits of the sensing server, ignoring the runtime flexibility and benefits of participating users. This paper presents a novel framework of user-centric incentive design, and develops a universal sensing platform which translates heterogeneous sensing tasks to a generic sensing plan specifying the task-independent requirements of sensing performance. We use this sensing plan as input to reduce three categories of sensing costs, which together cover the possible sources hindering users' participation in sensing.
NASA Astrophysics Data System (ADS)
Ismail, Zurina; Shokor, Shahrul Suhaimi AB
2016-03-01
Rapid changes in the Malaysian lifestyle have driven overwhelming growth in the service operation industry. In that context, this paper provides an approach to improve waiting line system (WLS) practices in Malaysian fast food chains. The study compares a single-server single-phase (SSSP) configuration with a single-server multi-phase (SSMP) configuration, using Markovian queuing (MQ) models for the analysis. The new system improves the current WLS and strengthens organizational performance. The new WLS was designed and tested in a real case scenario; to develop and implement the new design, the analysis focuses on the average number of customers in the system (ANC), the average time customers spend waiting in line (ACS), and the average time customers spend waiting and being served (ABS). We introduce the new WLS design and discuss its theoretical benefits and potential issues, which should be of use to other researchers.
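The ANC, ACS and ABS measures correspond to the standard quantities L, Wq and W of Markovian queueing models. As a hedged illustration (the arrival and service rates below are invented, and the paper's SSMP model is not reproduced), the following sketch computes them for a single-server M/M/1 queue.

    def mm1_metrics(lam, mu):
        """Steady-state metrics of an M/M/1 queue (requires lam < mu).
        Returns ANC (customers in system), ACS (mean wait in line),
        ABS (mean time waiting plus being served)."""
        if lam >= mu:
            raise ValueError("utilization must be below 1")
        rho = lam / mu
        anc = rho / (1 - rho)          # L  = average number in system
        abs_time = 1 / (mu - lam)      # W  = time in system (wait + service)
        acs = abs_time - 1 / mu        # Wq = time waiting in line
        return anc, acs, abs_time

    # Hypothetical counter: 50 customers/hour arrive, 60/hour can be served.
    print(mm1_metrics(lam=50, mu=60))  # approx (5.0, 0.0833 h, 0.1 h)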
DEMS - a second generation diabetes electronic management system.
Gorman, C A; Zimmerman, B R; Smith, S A; Dinneen, S F; Knudsen, J B; Holm, D; Jorgensen, B; Bjornsen, S; Planet, K; Hanson, P; Rizza, R A
2000-06-01
Diabetes electronic management system (DEMS) is a component-based client/server application, written in Visual C++ and Visual Basic, with the database server running Sybase System 11. DEMS is built entirely with a combination of dynamic link libraries (DLLs) and ActiveX components - the only exception is the DEMS.exe. DEMS is a chronic disease management system for patients with diabetes. It is used at the point of care by all members of the diabetes team including physicians, nurses, dieticians, clinical assistants and educators. The system is designed for maximum clinical efficiency and facilitates appropriately supervised delegation of care. Dispersed clinical sites may be supervised from a central location. The system is designed for ease of navigation; immediate provision of many types of automatically generated reports; quality audits; aids to compliance with good care guidelines; and alerts, advisories, prompts, and warnings that guide the care provider. The system now contains data on over 34000 patients and is in daily use at multiple sites.
Tele-healthcare for diabetes management: A low cost automatic approach.
Benaissa, M; Malik, B; Kanakis, A; Wright, N P
2012-01-01
In this paper, a telemedicine system for better management and care of diabetic patients is presented. The system is an end-to-end solution that relies on the integration of a front end (patient unit) and a back-end web server. A key feature of the system is its very low-cost, automated approach. The front end is capable of reading glucose measurements from any glucose meter and sending them automatically via existing networks to the back-end server. The back end is designed and developed as an n-tier web client architecture based on the model-view-controller design pattern, using open-source technology as a cost-effective solution. The back end supports the healthcare provider with data analysis, data visualization and decision support, and allows them to send feedback and therapeutic advice to patients from anywhere using a browser-enabled device. The system will be evaluated during trials conducted in collaboration with a local hospital in a phased manner.
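The paper does not publish its API; purely as an illustration of the automated front-end-to-server upload path, the sketch below posts a glucose reading to a hypothetical REST endpoint using the Python requests library. The URL, field names and push_reading helper are all assumptions.

    import json
    import requests  # third-party HTTP client

    # Hypothetical endpoint and payload; the paper does not specify its API.
    SERVER_URL = "https://example-telecare.org/api/glucose"

    def push_reading(patient_id, mmol_per_l, taken_at):
        """Forward one glucose reading from the patient unit to the back-end server."""
        payload = {"patient": patient_id, "glucose": mmol_per_l, "timestamp": taken_at}
        resp = requests.post(SERVER_URL, data=json.dumps(payload),
                             headers={"Content-Type": "application/json"}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # e.g. an acknowledgement or server-side advice

    # push_reading("patient-007", 6.2, "2012-05-01T08:30:00Z")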
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
Design of Instant Messaging System of Multi-language E-commerce Platform
NASA Astrophysics Data System (ADS)
Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru
2017-09-01
This paper investigates the message subsystem of an instant messaging system based on a multi-language e-commerce platform, with the goal of designing an instant messaging system for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. To provide an attractive and friendly front-end interface for the message system while reducing development cost, the mature jQuery framework is adopted. At the back end, the high-performance Tomcat server processes user requests, a MySQL database persistently stores user data, and an Oracle database serves as a message buffer for system optimization. Moreover, AJAX is used so that the client actively pulls the newest data from the server at specified intervals. In practical application, the system shows strong reliability, good extensibility, short response time, high throughput and high user concurrency.
WeBIAS: a web server for publishing bioinformatics applications.
Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan
2015-11-02
One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and quite often neglected. When publishing bioinformatic applications, such attitude puts additional burden on the reviewers who have to cope with poorly designed interfaces in order to assess quality of presented methods, as well as impairs actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution to make command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under GNU Affero General Public License. It has been developed and tested on GNU/Linux compatible platforms covering a vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy also on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
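WeBIAS's own portal and scheduler code are not reproduced here; the following minimal sketch only illustrates the general idea of exposing a command-line program through a web form, using Flask and a hypothetical analyze_tool executable. WeBIAS itself queues jobs through backend schedulers rather than running them inline.

    import subprocess
    from flask import Flask, request  # lightweight web framework

    app = Flask(__name__)

    @app.route("/run", methods=["POST"])
    def run_tool():
        """Expose a command-line program through a web form field named 'sequence'.
        Illustrative pattern only, not the WeBIAS implementation."""
        sequence = request.form.get("sequence", "")
        # 'analyze_tool' is a hypothetical command-line program.
        result = subprocess.run(["analyze_tool", "--seq", sequence],
                                capture_output=True, text=True, timeout=60)
        return {"stdout": result.stdout, "returncode": result.returncode}

    if __name__ == "__main__":
        app.run(port=8080)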
A Python object-oriented framework for the CMS alignment and calibration data
NASA Astrophysics Data System (ADS)
Dawes, Joshua H.; CMS Collaboration
2017-10-01
The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata when using Oracle and SQLite databases, that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting to the client-side from the server-side and optimizes the data transfer to the server. The architecture of the new service has been developed exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
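The CMS tool's actual schema and API are not shown in the abstract; the sketch below illustrates the general pattern of a homogeneous object-oriented query layer that works identically over SQLite and Oracle, here using SQLAlchemy with a hypothetical Tag table.

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Tag(Base):
        """Hypothetical Conditions Metadata table; the real CMS schema differs."""
        __tablename__ = "tag"
        name = Column(String, primary_key=True)
        time_type = Column(String)

    def open_session(connection_string):
        # The same query code works whether the engine points at SQLite or Oracle,
        # which is the kind of homogeneous access layer described above.
        engine = create_engine(connection_string)
        Base.metadata.create_all(engine)  # no-op if the schema already exists
        return sessionmaker(bind=engine)()

    session = open_session("sqlite:///conditions.db")   # or "oracle+cx_oracle://..."
    for tag in session.query(Tag).filter(Tag.time_type == "Run").limit(5):
        print(tag.name)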
Land User and Land Cover Maps of Europe: a Webgis Platform
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Fahl, F. C.; Minghini, M.; Molinari, M. E.
2016-06-01
This paper presents the methods and implementation processes of a WebGIS platform designed to publish the available land use and land cover maps of Europe at continental scale. The system is built completely on open source infrastructure and open standards. The proposed architecture is based on a server-client model having GeoServer as the map server, Leaflet as the client-side mapping library and the Bootstrap framework at the core of the front-end user interface. The web user interface is designed to have typical features of a desktop GIS (e.g. activate/deactivate layers and order layers by drag and drop actions) and to show specific information on the activated layers (e.g. legend and simplified metadata). Users have the possibility to change the base map from a given list of map providers (e.g. OpenStreetMap and Microsoft Bing) and to control the opacity of each layer to facilitate the comparison with both other land cover layers and the underlying base map. In addition, users can add to the platform any custom layer available through a Web Map Service (WMS) and activate the visualization of photos from popular photo sharing services. This last functionality is provided in order to have a visual assessment of the available land coverages based on other user-generated contents available on the Internet. It is supposed to be a first step towards a calibration/validation service that will be made available in the future.
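Independently of the browser client described above, layers published through a Web Map Service can also be consumed programmatically; the sketch below uses the OWSLib client library with a hypothetical endpoint URL and layer name, not the platform's actual GeoServer configuration.

    from owslib.wms import WebMapService  # OWSLib WMS client

    # Hypothetical WMS endpoint; the platform's actual GeoServer URL is not given here.
    wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")

    # List the advertised layers, then fetch one land-cover tile as PNG.
    print(list(wms.contents))
    img = wms.getmap(layers=["landcover:corine"], styles=[""],
                     srs="EPSG:4326", bbox=(-10.0, 35.0, 30.0, 60.0),
                     size=(600, 400), format="image/png", transparent=True)
    with open("landcover.png", "wb") as f:
        f.write(img.read())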
UAF: a generic OPC unified architecture framework
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2012-09-01
As an emerging Service Oriented Architecture (SOA) specifically designed for industrial automation and process control, the OPC Unified Architecture specification should be regarded as an attractive candidate for controlling scientific instrumentation. Even though an industry-backed standard such as OPC UA can offer substantial added value to these projects, its inherent complexity poses an important obstacle for adopting the technology. Building OPC UA applications requires considerable effort, even when taking advantage of a COTS Software Development Kit (SDK). The OPC Unified Architecture Framework (UAF) attempts to reduce this burden by introducing an abstraction layer between the SDK and the application code in order to achieve a better separation of the technical and the functional concerns. True to its industrial origin, the primary requirement of the framework is to maintain interoperability by staying close to the standard specifications, and by expecting the minimum compliance from other OPC UA servers and clients. UAF can therefore be regarded as a software framework to quickly and comfortably develop and deploy OPC UA-based applications, while remaining compatible with third-party OPC UA-compliant toolkits, servers (such as PLCs) and clients (such as SCADA software). In the first phase, as covered by this paper, only the client side of UAF has been tackled, in order to transparently handle discovery, session management, subscriptions, monitored items, etc. We describe the design principles and internal architecture of our open-source software project, the first results of the framework running at the Mercator Telescope, and we give a preview of the planned server-side implementation.
An Optimization of the Basic School Military Occupational Skill Assignment Process
2003-06-01
Corps Intranet (NMCI) supports it. We evaluated the use of Microsoft's SQL Server, but dismissed this after learning that TBS did not possess a SQL... Server license or a qualified SQL Server administrator. SQL Server would have provided additional security measures not available in MS... administrator. Although not as powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system. The training
Electronic document distribution: Design of the anonymous FTP Langley Technical Report Server
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Gottlich, Gretchen L.
1994-01-01
An experimental electronic dissemination project, the Langley Technical Report Server (LTRS), has been undertaken to determine the feasibility of delivering Langley technical reports directly to the desktops of researchers worldwide. During the first six months, over 4700 accesses occurred and over 2400 technical reports were distributed. This usage indicates the high level of interest that researchers have in performing literature searches and retrieving technical reports at their desktops. The initial system was developed with existing resources and technology. The reports are stored as files on an inexpensive UNIX workstation and are accessible over the Internet. This project will serve as a foundation for ongoing projects at other NASA centers that will allow for greater access to NASA technical reports.
Self-Powered WSN for Distributed Data Center Monitoring
Brunelli, Davide; Passerone, Roberto; Rizzon, Luca; Rossi, Maurizio; Sartori, Davide
2016-01-01
Monitoring environmental parameters in data centers is nowadays gathering increasing attention from industry, due to the need for high energy efficiency of cloud services. We present the design and characterization of an energy-neutral embedded wireless system, prototyped to perpetually monitor environmental parameters in servers and racks. It is powered by an energy harvesting module based on thermoelectric generators, which converts the heat dissipated by the servers. Starting from the empirical characterization of the energy harvester, we present a power conditioning circuit optimized for the specific application. The whole system has been enhanced with several sensors. An ultra-low-power micro-controller stacked over the energy harvester provides efficient power management. Performance has been assessed and compared with the analytical model for validation. PMID:26729135
A Web Server for MACCS Magnetometer Data
NASA Technical Reports Server (NTRS)
Engebretson, Mark J.
1998-01-01
NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. The advances in cryptography have provided solutions to many of the network security requirements. Public Key Infrastructure (PKI) is the foundation of the cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure by using SimProcess and MatLab Software. The simulation is from top level all the way down to the computation needed for encryption, decryption, digital signature, and secure web server. The application of secure web server could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.
Modeling And Simulation Of Multimedia Communication Networks
NASA Astrophysics Data System (ADS)
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
A mobile information management system used in textile enterprises
NASA Astrophysics Data System (ADS)
Huang, C.-R.; Yu, W.-D.
2008-02-01
The mobile information management system (MIMS) for textile enterprises is based on Microsoft Visual Studio .NET 2003, Microsoft SQL Server 2000, the C++ language, and wireless application protocol (WAP) and wireless markup language (WML) technology. The portable MIMS is composed of a three-layer structure, i.e. a showing (presentation) layer, an operating layer and a data visiting (access) layer, corresponding to the port-link module, the processing module and the database module respectively. By using the MIMS, not only does information exchange become more convenient and easier, but compatibility between a large information capacity and a micro cell phone, together with functional expansibility in operation and design, can also be realized by means of built-in units. The developed MIMS is suitable for use in textile enterprises.
NASA Astrophysics Data System (ADS)
Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this paper, we consider an M^X/(a,b)/1 queueing system with server breakdown without interruption, multiple vacations, setup times and N-policy. After a batch of service, if the size of the queue is ξ (< a), then the server immediately takes a vacation. Upon returning from a vacation, if the queue length is less than N, the server takes another vacation; this process continues until the server finds at least N customers in the queue. After a vacation, if the server finds at least N customers waiting for service, the server needs a setup time to start the service. After a batch of service, if the number of waiting customers in the queue is ξ (≥ a), then the server serves a batch of min(ξ, b) customers, where b ≥ a. We derive the probability generating function of the queue length at an arbitrary time epoch. Further, we obtain some important performance measures.
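The analytical derivation of the probability generating function is not reproduced here; the short sketch below only encodes the service/vacation decision rule stated above (thresholds a, b and the N-policy), with purely illustrative parameter values.

    def next_action(queue_len, a, b, N, just_finished_vacation):
        """Decision rule of the M^X/(a,b)/1 policy sketched above (control logic only,
        no analysis): bulk-service thresholds a <= b and an N-policy with vacations."""
        if just_finished_vacation:
            if queue_len < N:
                return ("vacation", 0)               # take another vacation
            return ("setup_then_serve", min(queue_len, b))
        # decision taken right after a service completion
        if queue_len < a:
            return ("vacation", 0)
        return ("serve", min(queue_len, b))

    # Example with a=3, b=5, N=4: after a vacation with 2 waiting -> another vacation.
    print(next_action(2, a=3, b=5, N=4, just_finished_vacation=True))
    print(next_action(7, a=3, b=5, N=4, just_finished_vacation=False))  # serve 5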
Secure entanglement distillation for double-server blind quantum computation.
Morimae, Tomoyuki; Fujii, Keisuke
2013-07-12
Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
By collecting and analyzing laryngeal cancer related genes and miRNAs, we built a comprehensive laryngeal cancer related gene database. It differs from current biological information databases, which have complex and clumsy structures, by focusing on the theme of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL for database design and PHP for web development, a comprehensive database for laryngeal cancer related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer related genes, 243 proteins and 26 miRNAs, together with detailed information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet for browsing and retrieval of the information, and it is maintained and updated regularly. The database for laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
Wu, Shao-Min; Liu, Hsuan; Huang, Po-Jung; Chang, Ian Yi-Feng; Lee, Chi-Ching; Yang, Chia-Yu; Tsai, Wen-Sy; Tan, Bertrand Chin-Ming
2018-01-01
Despite their lack of protein-coding potential, long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) have emerged as key determinants in gene regulation, acting to fine-tune transcriptional and signaling output. These noncoding RNA transcripts are known to affect expression of messenger RNAs (mRNAs) via epigenetic and post-transcriptional regulation. Given their widespread target spectrum, as well as extensive modes of action, a complete understanding of their biological relevance will depend on integrative analyses of systems data at various levels. While a handful of publicly available databases have been reported, existing tools do not fully capture, from a network perspective, the functional implications of lncRNAs or circRNAs of interest. Through an integrated and streamlined design, circlncRNAnet aims to broaden the understanding of ncRNA candidates by testing in silico several hypotheses of ncRNA-based functions, on the basis of large-scale RNA-seq data. This web server is implemented with several features that represent advances in the bioinformatics of ncRNAs: (1) a flexible framework that accepts and processes user-defined next-generation sequencing-based expression data; (2) multiple analytic modules that assign and productively assess the regulatory networks of user-selected ncRNAs by cross-referencing extensively curated databases; (3) an all-purpose, information-rich workflow design that is tailored to all types of ncRNAs. Outputs on expression profiles, co-expression networks and pathways, and molecular interactomes, are dynamically and interactively displayed according to user-defined criteria. In short, users may apply circlncRNAnet to obtain, in real time, multiple lines of functionally relevant information on circRNAs/lncRNAs of their interest. In summary, circlncRNAnet provides a "one-stop" resource for in-depth analyses of ncRNA biology. circlncRNAnet is freely available at http://app.cgu.edu.tw/circlnc/. © The Authors 2017. Published by Oxford University Press.
SciServer Compute brings Analysis to Big Data in the Cloud
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara
2016-06-01
SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.
The EarthServer Federation: State, Role, and Contribution to GEOSS
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Baumann, Peter
2016-04-01
The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. Its service interface being rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from open-source OpenLayers and QGIS over open-source NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technology, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO sr.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecast (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.
jSPyDB, an open source database-independent tool for data management
NASA Astrophysics Data System (ADS)
Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo
2011-12-01
Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open-source, they provide interfaces only with a specific kind of database, they are platform-dependent and very CPU and memory consuming. jSPyDB is a free web-based tool written using Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and to configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a backend server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their customized views for a better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not provide users the possibility to directly execute any SQL statement.
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
Validating metal binding sites in macromolecule structures using the CheckMyMetal web server
Zheng, Heping; Chordia, Mahendra D.; Cooper, David R.; Chruszcz, Maksymilian; Müller, Peter; Sheldrick, George M.
2015-01-01
Metals play vital roles in both the mechanism and architecture of biological macromolecules. Yet structures of metal-containing macromolecules where metals are misidentified and/or suboptimally modeled are abundant in the Protein Data Bank (PDB). This shows the need for a diagnostic tool to identify and correct such modeling problems with metal binding environments. The "CheckMyMetal" (CMM) web server (http://csgid.org/csgid/metal_sites/) is a sophisticated, user-friendly web-based method to evaluate metal binding sites in macromolecular structures with respect to 7350 metal binding sites observed in a benchmark dataset of 2304 high resolution crystal structures. The protocol outlines how the CMM server can be used to detect geometric and other irregularities in the structures of metal binding sites and alert researchers to potential errors in metal assignment. The protocol also gives practical guidelines for correcting problematic sites by modifying the metal binding environment and/or redefining metal identity in the PDB file. Several examples where this has led to meaningful results are described in the anticipated results section. CMM was designed for a broad audience—biomedical researchers studying metal-containing proteins and nucleic acids—but is equally well suited for structural biologists to validate new structures during modeling or refinement. The CMM server takes the coordinates of a metal-containing macromolecule structure in the PDB format as input and responds within a few seconds for a typical protein structure modeled with a few hundred amino acids. PMID:24356774
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
Wide Area Information Servers: An Executive Information System for Unstructured Files.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes the Wide Area Information Servers (WAIS) system, an integrated information retrieval system for corporate end users. Discussion covers general characteristics of the system, search techniques, protocol development, user interfaces, servers, selective dissemination of information, nontextual data, access to other servers, and description…
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
Asynchronous data change notification between database server and accelerator controls system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, W.; Morris, J.; Nemesure, S.
2011-10-10
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
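The RHIC-AGS implementations rely on Oracle/MS SQL triggers together with EPICS, CDEV or ADO reflection servers; as a self-contained stand-in, the sketch below uses SQLite to show the same trigger-plus-polling-reflector pattern. Table names and the reflect_changes helper are illustrative, not part of the described systems.

    import sqlite3

    # Illustration only: a trigger records every change in a notification table,
    # which a reflector process polls and forwards to its clients.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
        CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT,
                                 name TEXT, value REAL);
        CREATE TRIGGER settings_dcn AFTER UPDATE ON settings
        BEGIN
            INSERT INTO change_log (name, value) VALUES (NEW.name, NEW.value);
        END;
    """)
    db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
    db.execute("UPDATE settings SET value = 1.25 WHERE name = 'magnet_current'")

    def reflect_changes(last_seen_id=0):
        """Stand-in for the data reflection server: pick up rows added since last poll."""
        rows = db.execute("SELECT id, name, value FROM change_log WHERE id > ?",
                          (last_seen_id,)).fetchall()
        for _id, name, value in rows:
            print(f"notify clients: {name} -> {value}")   # would be a SET/GET push
        return rows[-1][0] if rows else last_seen_id

    reflect_changes()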
Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo
2016-07-08
Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe the MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
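MRPrimerW's full constraint set, homology testing and ranking are not reproduced here; the sketch below only illustrates the flavor of basic single-primer filters (length, GC content, Wallace-rule Tm) that such pipelines apply before genome-wide homology tests. Thresholds and the example primer are invented.

    def wallace_tm(primer):
        """Rough melting temperature by the Wallace rule: Tm = 2(A+T) + 4(G+C)."""
        p = primer.upper()
        return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

    def passes_basic_filters(primer, length=(18, 25), gc=(40.0, 60.0), tm=(55.0, 65.0)):
        """Toy single-primer constraints; MRPrimerW applies far more, including
        genome-wide homology tests that this sketch does not attempt."""
        if not length[0] <= len(primer) <= length[1]:
            return False
        gc_pct = 100.0 * sum(primer.upper().count(b) for b in "GC") / len(primer)
        return gc[0] <= gc_pct <= gc[1] and tm[0] <= wallace_tm(primer) <= tm[1]

    print(passes_basic_filters("ATGCGTACCTGAGGATCCAT"))  # toy 20-mer, passes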
Design and Development of a Network-Based Electronic Library.
ERIC Educational Resources Information Center
Larson, Ray R.
1994-01-01
Describes collaboration between the University of California at Berkeley and four other universities to develop interoperable servers containing each participant's Computer Science Technical Reports and to make them available over the Internet using standard protocols. The proposed library architecture, approaches to indexing and retrieval, and…
Stream On: Video Servers in the Real World.
ERIC Educational Resources Information Center
Tristram, Claire
1995-01-01
Despite plans for corporate training networks, digital ad-insertion systems, hotel video-on-demand, and interactive television, only small scale video networks presently work. Four case studies examine the design and implementation decisions for different markets: corporate; advertising; hotel; and commercial video via cable, satellite or…
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
MATREX Leads the Way in Implementing New DOD VV&A Documentation Standards
2007-05-24
[Residue of a DoD acquisition life-cycle chart (Pre-Systems Acquisition, Critical Design Review, LRIP/IOT&E, FRP Decision Review, FOC, Operations & Support/Sustainment) omitted.] Communications Human Performance Model; C3GRID – Command & Control, Computer GRID; CES – Communications Effects Server; CMS2 – Comprehensive
[Design and implementation of field questionnaire survey system of taeniasis/cysticercosis].
Huan-Zhang, Li; Jing-Bo, Xue; Men-Bao, Qian; Xin-Zhong, Zang; Shang, Xia; Qiang, Wang; Ying-Dan, Chen; Shi-Zhu, Li
2018-04-17
A taeniasis/cysticercosis information management system was designed to achieve dynamic monitoring of the taeniasis/cysticercosis epidemic situation and to improve the level of intelligent management of disease information. The system comprises a three-layer structure (application layer, technical core layer, and data storage layer) and implements data transmission and remote communication in a Browser/Server architecture. The system is expected to facilitate the collection of disease data. Additionally, it may provide standardized data for the convenience of data analysis.
StaRProtein, A Web Server for Prediction of the Stability of Repeat Proteins
Xu, Yongtao; Zhou, Xu; Huang, Meilan
2015-01-01
Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins to further explore their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins respectively, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in (1) discriminating correct vs. incorrect models and (2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and therefore would serve as a novel tool for quickly prioritizing the designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins. PMID:25807112
SMART Careplan System for Continuum of Care
Kim, Young Ah; Jang, Seon Young; Ahn, Meejung; Kim, Kyung Duck
2015-01-01
Objectives This paper describes the integrated Careplan system, designed to manage and utilize the existing Electronic Medical Record (EMR) system; the system also defines key items for interdisciplinary communication and continuity of patient care. Methods We structured the Careplan system to provide effective interdisciplinary communication for healthcare services. The design of the Careplan system architecture proceeded in four steps: defining target datasets; construction of conceptual framework and architecture; screen layout and storyboard creation; screen user interface (UI) design and development, and pilot test and step-by-step deployment. This Careplan system architecture consists of two parts, a server-side and client-side area. On the server-side, it performs the roles of data retrieval and storage from target EMRs. Furthermore, it performs the role of sending push notifications to the client depending on the careplan series. Also, the Careplan system provides various convenient modules to easily enter an individual careplan. Results Currently, Severance Hospital operates the Careplan system and provides a stable service dealing with dynamic changes (e.g., domestic medical certification, the Joint Commission International guideline) of EMR. Conclusions The Careplan system should go hand in hand with key items for strengthening interdisciplinary communication and information sharing within the EMR environment. A well-designed Careplan system can enhance user satisfaction and completed performance. PMID:25705559
SMART Careplan System for Continuum of Care.
Kim, Young Ah; Jang, Seon Young; Ahn, Meejung; Kim, Kyung Duck; Kim, Sung Soo
2015-01-01
This paper describes the integrated Careplan system, designed to manage and utilize the existing Electronic Medical Record (EMR) system; the system also defines key items for interdisciplinary communication and continuity of patient care. We structured the Careplan system to provide effective interdisciplinary communication for healthcare services. The design of the Careplan system architecture proceeded in four steps: defining target datasets; construction of conceptual framework and architecture; screen layout and storyboard creation; screen user interface (UI) design and development, and pilot test and step-by-step deployment. This Careplan system architecture consists of two parts, a server-side and client-side area. On the server-side, it performs the roles of data retrieval and storage from target EMRs. Furthermore, it performs the role of sending push notifications to the client depending on the careplan series. Also, the Careplan system provides various convenient modules to easily enter an individual careplan. Currently, Severance Hospital operates the Careplan system and provides a stable service dealing with dynamic changes (e.g., domestic medical certification, the Joint Commission International guideline) of EMR. The Careplan system should go hand in hand with key items for strengthening interdisciplinary communication and information sharing within the EMR environment. A well-designed Careplan system can enhance user satisfaction and completed performance.
RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules.
Garcia-Martin, Juan Antonio; Dotu, Ivan; Clote, Peter
2015-07-01
Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches for inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. The web server, source code and Linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
CRISPR-FOCUS: A web server for designing focused CRISPR screening experiments.
Cao, Qingyi; Ma, Jian; Chen, Chen-Hao; Xu, Han; Chen, Zhi; Li, Wei; Liu, X Shirley
2017-01-01
The recently developed CRISPR screen technology, based on the CRISPR/Cas9 genome editing system, enables genome-wide interrogation of gene functions in an efficient and cost-effective manner. Although many computational algorithms and web servers have been developed to design single-guide RNAs (sgRNAs) with high specificity and efficiency, algorithms specifically designed for conducting CRISPR screens are still lacking. Here we present CRISPR-FOCUS, a web-based platform to search and prioritize sgRNAs for CRISPR screen experiments. With official gene symbols or RefSeq IDs as the only mandatory input, CRISPR-FOCUS filters and prioritizes sgRNAs based on multiple criteria, including efficiency, specificity, sequence conservation, isoform structure, as well as genomic variations including Single Nucleotide Polymorphisms and cancer somatic mutations. CRISPR-FOCUS also provides pre-defined positive and negative control sgRNAs, as well as other necessary sequences in the construct (e.g., U6 promoters to drive sgRNA transcription and RNA scaffolds of the CRISPR/Cas9). These features allow users to synthesize oligonucleotides directly based on the output of CRISPR-FOCUS. Overall, CRISPR-FOCUS provides a rational and high-throughput approach for sgRNA library design that enables users to efficiently conduct a focused screen experiment targeting up to thousands of genes. (CRISPR-FOCUS is freely available at http://cistrome.org/crispr-focus/).
Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang
2016-01-01
Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique in regards to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
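The two smoothing stages described above can be illustrated with a minimal sketch: a scalar Kalman filter applied to noisy range measurements, followed by a moving weighted average over the resulting position fixes. This is an illustration of the general technique, not the authors' implementation; the noise variances and weights are assumptions.

```python
# Sketch: smooth TDOA range measurements with a scalar Kalman filter,
# then smooth position fixes with a moving weighted average.

class ScalarKalman:
    """One-dimensional constant-value Kalman filter for a range measurement."""
    def __init__(self, process_var=1e-3, meas_var=0.25):
        self.x = None          # filtered range estimate
        self.p = 1.0           # estimate variance
        self.q = process_var   # process noise variance (assumed)
        self.r = meas_var      # measurement noise variance (assumed)

    def update(self, z):
        if self.x is None:     # initialize on first measurement
            self.x = z
            return self.x
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct
        self.p *= (1.0 - k)
        return self.x

def moving_weighted_average(points, weights=(0.5, 0.3, 0.2)):
    """Smooth a sequence of (x, y) fixes; the newest sample gets the largest weight."""
    smoothed = []
    for i in range(len(points)):
        window = points[max(0, i - len(weights) + 1): i + 1][::-1]  # newest first
        w = weights[:len(window)]
        norm = sum(w)
        sx = sum(wi * p[0] for wi, p in zip(w, window)) / norm
        sy = sum(wi * p[1] for wi, p in zip(w, window)) / norm
        smoothed.append((sx, sy))
    return smoothed

if __name__ == "__main__":
    kf = ScalarKalman()
    raw_ranges = [10.2, 10.8, 9.7, 10.4, 10.1]
    print([round(kf.update(z), 2) for z in raw_ranges])
    fixes = [(0.0, 0.0), (1.1, 0.9), (2.2, 2.1), (2.9, 3.2)]
    print(moving_weighted_average(fixes))
```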
Opus: A Coordination Language for Multidisciplinary Applications
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John
1997-01-01
Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
GRAMM-X public web server for protein–protein docking
Tovchigrechko, Andrey; Vakser, Ilya A.
2006-01-01
Protein docking software GRAMM-X and its web interface extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016
2016-06-08
... server environment. While the college's two Cisco blade servers are located in separate buildings, these units now work as one unit. Critical databases and software packages are ...
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
... that work well together. Simulation Server Details: We ran the simulations on a Dell PowerEdge M520 blade server running Ubuntu Linux 14.04. To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server. ... The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on ...
Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report
2007-02-05
Created a new SQL Server database for the "PC Configuration" web application; added roles for security, closed 4235, and posted the application to production. Wrote and ran SQL Server scripts to migrate production databases to the new server. Created backup jobs for the new SQL Server databases. Continued ... the second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about ...
Lawrence, Daphne
2009-03-01
Blade servers and virtualization can reduce infrastructure, maintenance, heating, electric, cooling and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization--especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.
Design and development of an IoT-based web application for an intelligent remote SCADA system
NASA Astrophysics Data System (ADS)
Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long
2018-03-01
This paper presents a design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data is sent to the client using the TCP/IP protocol, and the responsive interface supports mobile devices with different screen sizes. Users can issue instructions immediately without being present to check the conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function by using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains analog input/output and basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, thereby achieving efficient power management.
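As a rough illustration of the data path described above (sensor readings travelling to a monitoring endpoint over TCP/IP), the sketch below uses a plain Python socket pair standing in for the ASP.NET/IIS stack of the paper. The host, port, and field names are assumptions made for the example only.

```python
# Sketch: a sensor client pushing one JSON reading to a monitoring endpoint over TCP/IP.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9009  # illustrative endpoint

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def accept_one_reading():
    # Stand-in for the server-side data acquisition endpoint.
    conn, _ = srv.accept()
    with conn:
        payload = json.loads(conn.recv(4096).decode())
        print("monitoring endpoint received:", payload)

threading.Thread(target=accept_one_reading, daemon=True).start()

reading = {"sensor": "power-meter-01", "voltage_V": 219.7, "current_A": 3.42}
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(json.dumps(reading).encode())

time.sleep(0.2)  # give the handler thread time to print before the script exits
srv.close()
```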
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sands, P.D.
1998-08-01
Classified designs usually include lesser classified (including unclassified) components. An engineer working on such a design needs access to the various sub-designs at lower classification levels. For simplicity, the problem is presented with only two levels: high and low. If the low-classification component designs are stored in the high network, they become inaccessible to persons working on a low network. In order to keep the networks separate, the component designs may be duplicated in all networks, resulting in a synchronization problem. Alternatively, they may be stored in the low network and brought into the high network when needed. The latter solution results in the use of sneaker-net (copying the files from the low system to a tape and carrying the tape to a high system) or a file transfer guard. This paper shows how an FTP Guard was constructed and implemented without degrading the security of the underlying B3 platform. The paper then shows how the guard can be extended to an FTP proxy server or an HTTP proxy server. The extension is accomplished by allowing the high-side user to select among items that already exist on the low-side. No high-side data can be directly compromised by the extension, but a mechanism must be developed to handle the low-bandwidth covert channel that would be introduced by the application.
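A conceptual sketch of the "select only what already exists on the low side" rule is given below. It is not the B3 guard implementation, only an illustration of the policy: the only high-to-low signal is the name of a pre-existing low-side item, which is why the paper notes a residual low-bandwidth covert channel. All names and contents are illustrative.

```python
# Sketch: a pull-only guard that releases low-side items to the high side,
# rejecting any request for an item not already published on the low side.

LOW_SIDE_CATALOG = {
    "valve_assembly_rev3.step": b"<low-classification CAD data>",
    "gasket_spec.pdf": b"<low-classification spec>",
}

def high_side_request(item_name: str) -> bytes:
    """Serve a low-side item to the high network; reject anything not cataloged."""
    if item_name not in LOW_SIDE_CATALOG:
        # The request itself is the only downward signal, so it is restricted to
        # selecting pre-existing names (the covert channel that must still be audited).
        raise PermissionError("item not published on the low side")
    return LOW_SIDE_CATALOG[item_name]

print(high_side_request("gasket_spec.pdf"))
```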
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to the Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.
Load Balancing in Distributed Web Caching: A Novel Clustering Approach
NASA Astrophysics Data System (ADS)
Tiwari, R.; Kumar, K.; Khan, G.
2010-11-01
The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to half of requests; the remaining requests are still sent to the original remote origin servers. In this paper we develop an algorithm for a Distributed Web Cache that incorporates cooperation among the proxy servers of one cluster. The algorithm combines Distributed Web Cache concepts with a static hierarchy of geographically based clusters of level-one proxy servers and a dynamic mechanism for redirecting requests when one cluster becomes congested. Congestion and scalability problems are handled by the clustering concept used in our approach. This results in a higher cache hit ratio and lower latency for requested pages. The algorithm also guarantees data consistency between the original server objects and the proxy cache objects.
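The cluster-with-overflow idea can be sketched in a few lines: requests are hashed to a proxy inside the client's home cluster, and when the home cluster's average load crosses a threshold the request spills over to the least-loaded neighbouring cluster. This is an illustration of the general approach under assumed cluster names and thresholds, not the authors' exact algorithm.

```python
# Sketch: geographic clusters of cooperating proxy caches with dynamic overflow.
import hashlib

CLUSTERS = {
    "europe":  ["eu-proxy-1", "eu-proxy-2", "eu-proxy-3"],
    "america": ["us-proxy-1", "us-proxy-2"],
    "asia":    ["as-proxy-1", "as-proxy-2", "as-proxy-3"],
}
LOAD = {name: 0 for proxies in CLUSTERS.values() for name in proxies}
CONGESTION_THRESHOLD = 2  # average requests per proxy; illustrative value

def cluster_load(cluster):
    proxies = CLUSTERS[cluster]
    return sum(LOAD[p] for p in proxies) / len(proxies)

def pick_proxy(url, home_cluster):
    cluster = home_cluster
    if cluster_load(cluster) >= CONGESTION_THRESHOLD:   # home cluster congested
        cluster = min(CLUSTERS, key=cluster_load)       # overflow to least-loaded cluster
    proxies = CLUSTERS[cluster]
    idx = int(hashlib.sha1(url.encode()).hexdigest(), 16) % len(proxies)
    proxy = proxies[idx]
    LOAD[proxy] += 1
    return proxy

for i in range(12):
    print(pick_proxy(f"http://example.org/page{i}", "europe"))
```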
On the optimal use of a slow server in two-stage queueing systems
NASA Astrophysics Data System (ADS)
Papachristos, Ioannis; Pandelis, Dimitrios G.
2017-07-01
We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs assuming that two servers cannot collaborate to work on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the one downstream should not idle, and the same is true for the one upstream when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module, also in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
Nakrani, Sunil; Tovey, Craig
2007-12-01
An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
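The forager-allocation analogy can be made concrete with a toy reallocation loop: each period, the revenue earned per allocated server plays the role of the "dance" advertising a flower patch, and a small fraction of servers switches service in proportion to that profitability. This is a simplified illustration of the bio-inspired idea, not the paper's algorithm; all numbers are assumed.

```python
# Sketch: honeybee-inspired dynamic allocation of servers to hosted services.
import random

random.seed(1)
SERVICES = ["shop", "news", "search"]
revenue_per_request = {"shop": 0.05, "news": 0.01, "search": 0.02}  # hosting fees (assumed)
allocation = {srv: random.choice(SERVICES) for srv in range(10)}    # 10 servers

def one_period(demand):
    # Revenue earned this period is analogous to nectar influx per patch.
    earned = {s: demand[s] * revenue_per_request[s] for s in SERVICES}
    counts = {s: max(1, sum(1 for a in allocation.values() if a == s)) for s in SERVICES}
    profitability = {s: earned[s] / counts[s] for s in SERVICES}     # the "dance"
    total = sum(profitability.values())
    for srv in allocation:
        if random.random() < 0.3:            # only some servers reconsider (reallocation cost)
            r, acc = random.uniform(0, total), 0.0
            for s in SERVICES:
                acc += profitability[s]
                if r <= acc:
                    allocation[srv] = s
                    break

for day in range(5):
    one_period({"shop": 400, "news": 50, "search": 150})
    print(sorted(allocation.values()))
```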
Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil
2012-06-15
A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of the object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and results are given back to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... Commercial and Industrial Equipment: Proposed Determination of Computer Servers as a Covered Consumer Product... comments on the proposed determination that computer servers (servers) qualify as a covered product. DATES: The comment period for the proposed determination relating to servers published on July 12, 2013 (78...
ASPEN--A Web-Based Application for Managing Student Server Accounts
ERIC Educational Resources Information Center
Sandvig, J. Christopher
2004-01-01
The growth of the Internet has greatly increased the demand for server-side programming courses at colleges and universities. Students enrolled in such courses must be provided with server-based accounts that support the technologies that they are learning. The process of creating, managing and removing large numbers of student server accounts is…
The design of moral education website for college students based on ASP.NET
NASA Astrophysics Data System (ADS)
Sui, Chunling; Du, Ruiqing
2012-01-01
Moral education websites offer an available solution to the low transmission speed and small influence area of traditional moral education. The aim of this paper is to illustrate the design of one moral education website and the advantages of using it to support moral teaching. The rationale for a moral education website is discussed at the beginning of this paper, and the development tools are introduced. The system design is illustrated with the module design and database design, and how to access data in the SQL Server database is discussed in detail. Finally, a conclusion is drawn based on the discussions in this paper.
How to securely replicate services
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth
1992-01-01
A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
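The client-side acceptance rule described above can be sketched as a simple vote over replies, assuming the threshold k is known to the client: a response is accepted only once k distinct servers return it, so fewer than k corrupt servers cannot force a wrong answer. This is an illustration of the voting idea only, not the paper's full protocol (which also covers causality and key handling).

```python
# Sketch: accept a replicated service's response once k matching replies arrive.
from collections import Counter

def accept_response(replies, k):
    """replies: iterable of (server_id, response). Return the accepted response or None."""
    votes = Counter()
    voters = set()
    for server_id, response in replies:
        if server_id in voters:        # count each server at most once
            continue
        voters.add(server_id)
        votes[response] += 1
        if votes[response] >= k:       # k distinct servers agree
            return response
    return None                        # not enough agreement yet

replies = [("s1", "balance=42"), ("s2", "balance=99"),
           ("s3", "balance=42"), ("s4", "balance=42")]
print(accept_response(replies, k=3))   # -> 'balance=42'
```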
Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.
Lee, Chengming; Chen, Rongshun
2015-05-20
Recently, saving the cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization in the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumption by the fan speed control. PIDNN with a time domain criterion is used to tune all online and optimized PID gains. The proposed controller was validated through experiments of step response when the server operated from the low to high power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight temperature response overshoot in the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed during low fan power consumption.
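A minimal sketch of the two ingredients named above is given below: a discrete PID loop driving fan speed from the temperature error, and a third-order polynomial fan-power curve of the kind fitted in the study. The gains, polynomial coefficients, and toy thermal response are assumptions for illustration, not the paper's tuned values, and the neural-network tuning step is omitted.

```python
# Sketch: discrete PID fan-speed control plus a cubic fan-power model.

def fan_power(speed_frac, a3=60.0, a2=-20.0, a1=8.0, a0=2.0):
    """Cubic fit of fan power (W) versus normalized fan speed in [0, 1] (coefficients assumed)."""
    return a3 * speed_frac**3 + a2 * speed_frac**2 + a1 * speed_frac + a0

class PID:
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = measured - setpoint               # positive when too hot
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(1.0, max(0.2, out))          # clamp to an allowable fan-speed range

pid = PID(kp=0.08, ki=0.01, kd=0.02)
temp = 75.0                                     # component temperature, deg C
for _ in range(10):
    speed = pid.step(setpoint=65.0, measured=temp)
    temp -= 2.0 * speed - 0.5                   # toy thermal response, for demonstration only
    print(f"speed={speed:.2f}  power={fan_power(speed):.1f} W  temp={temp:.1f} C")
```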
Performance of a distributed superscalar storage server
NASA Technical Reports Server (NTRS)
Finestead, Arlan; Yeager, Nancy
1993-01-01
The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high-speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, which would allow for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding Ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher-bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix Library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, hopefully the performance statistics stated in this paper will give end-users a realistic idea as to what performance they can expect in this type of setup.
Evolution of the Data Access Protocol in Response to Community Needs
NASA Astrophysics Data System (ADS)
Gallagher, J.; Caron, J. L.; Davis, E.; Fulker, D.; Heimbigner, D.; Holloway, D.; Howe, B.; Moe, S.; Potter, N.
2012-12-01
Under the aegis of the OPULS (OPeNDAP-Unidata Linked Servers) project, funded by NOAA, version 2 of OPeNDAP's Data Access Protocol (DAP2) is being updated to version 4. DAP4 is the first major upgrade in almost two decades and embodies three main areas of advancement. First, the data-model extensions developed by the OPULS team focus on three areas: better support for coverages, access to HDF5 files, and access to relational databases. DAP2 support for coverages (defined as sampled functions) was limited to simple rectangular coverages that work well for (some) model outputs and processed satellite data but that cannot represent trajectories or satellite swath data, for example. We have extended the coverage concept in DAP4 to remove these limitations. These changes are informed by work at Unidata on the Common Data Model and also by the OGC's abstract coverages specification. In a similar vein, we have extended DAP2's support for relations by including the concept of foreign keys, so that tables can be explicitly related to one another. Second, the web interfaces (web services) that provide access to data via DAP will be more clearly defined and will use other, orthogonal standards where appropriate. An important case is the XML interface, which provides a cleaner way to build other response media types such as JSON and RDF (for metadata) and to build support for Atom, thus simplifying the integration of DAP servers with tools that support OpenSearch. Input from the ESIP Federation and work performed with IOOS have informed our choices here. Last, DAP4-compliant servers will support richer data-processing capabilities than DAP2, enabling a wider array of server functions that manipulate data before returning values. Two projects are currently exploring what can be done even with DAP2's server-function model: the MIIC project at LaRC and OPULS itself (with work performed at the University of Washington). Both projects have demonstrated that server functions can be used to perform operations on large volumes of data and return results that are far smaller than would be required to achieve the same outcomes via client-side processing. We are using information from these efforts to inform the design of server functions in DAP4. Each of the three areas of DAP4 advancement is being guided by input from a number of community members, including an OPULS Advisory Committee.
ERIC Educational Resources Information Center
Gonzalez, Josue M.
1995-01-01
Describes the design and installation of an Internet gopher server to support classroom instruction and professional development projects in a graduate college of education. Topics include use by administrators, selecting the most appropriate technology, hardware and software selection, and informational resources of the gopher. (Author/LRW)
A Network Design Architecture for Distribution of Generic Scene Graphs
1999-09-01
76 FR 61717 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-05
... computer science based technology that may provide the capability of detecting untoward events such as... is comprised of a dedicated computer server that executes specially designed software with input data... computer assisted clinical ordering. J Biomed Inform. 2003 Feb-Apr;36(1-2):4-22. [PMID 14552843...
Aviation Environmental Design Tool (AEDT) : Version 2c service Pack 1 : installation guide.
DOT National Transportation Integrated Search
2016-12-01
This document provides detailed instructions on how to install and run AEDT 2c Service Pack 1 (SP1). It is important to follow the installation instructions in the order listed below, as Microsoft SQL Server 2008 R2 is a prerequisite for AEDT. Instal...
MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications
2007-05-23
Acronyms referenced include: CMS2 – Comprehensive Munitions & Sensor Server; CSAT – C4ISR Static Analysis Tool; C4ISR – Command & Control, Communications, Computers ...
MOOsburg: Multi-User Domain Support for a Community Network.
ERIC Educational Resources Information Center
Carroll, John M.; Rosson, Mary Beth; Isenhour, Philip L.; Van Metre, Christina; Schafer, Wendy A.; Ganoe, Craig H.
2001-01-01
Explains MOOsburg, a community-oriented MOO that models the geography of the town of Blacksburg, Virginia and is designed to be used by local residents. Highlights include the software architecture; client-server communication; spatial database; user interface; interaction; map-based navigation; application development; and future plans. (LRW)
Analyzing Web Server Logs to Improve a Site's Usage. The Systems Librarian
ERIC Educational Resources Information Center
Breeding, Marshall
2005-01-01
This column describes ways to streamline and optimize how a Web site works in order to improve both its usability and its visibility. The author explains how to analyze logs and other system data to measure the effectiveness of the Web site design and search engine.
NASA Astrophysics Data System (ADS)
Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.
2002-11-01
The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets him/her focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall, and to make it accessible to clients outside the firewall some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public; authentication of legitimate users is done utilizing built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.
LiveBench-1: continuous benchmarking of protein structure prediction servers.
Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L
2001-02-01
We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.
Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan
2008-01-01
Dietary habits developed in childhood last for a lifetime; in this sense, nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet, so a web-based nutrition education program is an effective tool for children's nutrition education. This site provides nutrition education material for children featuring characters that are personified nutrients. The 151 menus are stored in the site together with video scripts of the cooking process. The menus are classified by criteria based on age, menu type, and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: key words, menu type, and a "between" expression over nutrient values such as calories. The site was developed with the Windows 2003 Server operating system, the ZEUS 5 web server, JSP as the development language, and the Oracle 10g database management system. PMID:20126375
Planetary Data Systems (PDS) Imaging Node Atlas II
NASA Technical Reports Server (NTRS)
Stanboli, Alice; McAuley, James M.
2013-01-01
The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through greater than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, Java), a Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated metadata from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions of similar targets. If desired, the end user can also use a mission-specific view of the Atlas; the mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine that includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. This tool lets the end user query information about each image while ignoring data the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.
Ambroggio, Xavier I; Dommer, Jennifer; Gopalan, Vivek; Dunham, Eleca J; Taubenberger, Jeffery K; Hurt, Darrell E
2013-06-18
Influenza A viruses possess RNA genomes that mutate frequently in response to immune pressures. The mutations in the hemagglutinin genes are particularly significant, as the hemagglutinin proteins mediate attachment and fusion to host cells, thereby influencing viral pathogenicity and species specificity. Large-scale influenza A genome sequencing efforts have been ongoing to understand past epidemics and pandemics and anticipate future outbreaks. Sequencing efforts thus far have generated nearly 9,000 distinct hemagglutinin amino acid sequences. Comparative models for all publicly available influenza A hemagglutinin protein sequences (8,769 to date) were generated using the Rosetta modeling suite. The C-alpha root mean square deviations between a randomly chosen test set of models and their crystallographic templates were less than 2 Å, suggesting that the modeling protocols yielded high-quality results. The models were compiled into an online resource, the Hemagglutinin Structure Prediction (HASP) server. The HASP server was designed as a scientific tool for researchers to visualize hemagglutinin protein sequences of interest in a three-dimensional context. With a built-in molecular viewer, hemagglutinin models can be compared side-by-side and navigated by a corresponding sequence alignment. The models and alignments can be downloaded for offline use and further analysis. The modeling protocols used in the HASP server scale well for large amounts of sequences and will keep pace with expanded sequencing efforts. The conservative approach to modeling and the intuitive search and visualization interfaces allow researchers to quickly analyze hemagglutinin sequences of interest in the context of the most highly related experimental structures, and allow them to directly compare hemagglutinin sequences to each other simultaneously in their two- and three-dimensional contexts. The models and methodology have shown utility in current research efforts and the ongoing aim of the HASP server is to continue to accelerate influenza A research and have a positive impact on global public health.
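The quality metric quoted above, the C-alpha root mean square deviation between a model and its template, is easy to illustrate. The sketch below computes it for coordinates that are assumed to be already superimposed; a real comparison would first apply an optimal superposition (e.g., the Kabsch algorithm), which is omitted here, and the coordinates are made up for the example.

```python
# Sketch: C-alpha RMSD between a predicted model and its template (pre-superimposed).
import math

def ca_rmsd(coords_model, coords_template):
    assert len(coords_model) == len(coords_template)
    sq = sum((xm - xt) ** 2 + (ym - yt) ** 2 + (zm - zt) ** 2
             for (xm, ym, zm), (xt, yt, zt) in zip(coords_model, coords_template))
    return math.sqrt(sq / len(coords_model))

model    = [(0.0, 0.0, 0.0), (3.8, 0.1, 0.0), (7.5, 0.3, 0.2)]   # toy C-alpha coordinates
template = [(0.1, 0.0, 0.1), (3.7, 0.0, 0.1), (7.6, 0.2, 0.0)]
print(f"{ca_rmsd(model, template):.2f} A")
```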
2014-01-01
Background Osteopontin (Eta, secreted sialoprotein 1, opn) is secreted from different cell types, including cancer cells. Three splice variant forms, namely osteopontin-a, osteopontin-b, and osteopontin-c, have been identified. Strikingly, osteopontin-c is found to be elevated in almost all types of cancer cells; this motivated the sequence analysis and structure predictions reported here, which provide ample opportunities for prognostic, therapeutic, and preventive cancer research. Methods The osteopontin-c gene sequence was determined from a breast cancer sample and translated to a protein sequence. It was then analyzed using various software and web tools for binding pockets, docking, and druggability. Due to the lack of homologous templates, the tertiary structure was predicted using the ab initio method server I-TASSER and was evaluated after refinement using web tools. The refined structure was compared with the known bone sialoprotein electron microscopic structure and docked with CD44 for binding analysis, and binding pockets were identified for drug design. Results A signal sequence of about sixteen amino acid residues was identified using signal sequence prediction servers. Because no structures of similar proteins were available, the three-dimensional structure of osteopontin-c was predicted using the I-TASSER server. The predicted structure was refined with the help of the SUMMA server and validated using the SAVES server. Molecular dynamics analysis was carried out using the GROMACS software. The final model was built and used for docking with CD44. Druggable pockets were identified using pocket energies. Conclusions The tertiary structure of osteopontin-c was predicted successfully using the ab initio method, and the predictions showed that osteopontin-c is fibrous in nature, comparable to fibronectin. Docking studies showed significant similarities of the QSAET motif in the interaction between CD44 and osteopontins in the normal and splice variant forms, and binding pocket analyses revealed several pockets, paving the way to the identification of a druggable pocket. PMID:24401206
iCOSSY: An Online Tool for Context-Specific Subnetwork Discovery from Gene Expression Data
Saha, Ashis; Jeon, Minji; Tan, Aik Choon; Kang, Jaewoo
2015-01-01
Pathway analyses help reveal underlying molecular mechanisms of complex biological phenotypes. Biologists tend to perform multiple pathway analyses on the same dataset, as there is no single answer. It is often inefficient for them to implement and/or install all the algorithms by themselves. Online tools can help the community in this regard. Here we present an online gene expression analytical tool called iCOSSY which implements a novel pathway-based COntext-specific Subnetwork discoverY (COSSY) algorithm. iCOSSY also includes a few modifications of COSSY to increase its reliability and interpretability. Users can upload their gene expression datasets, and discover important subnetworks of closely interacting molecules to differentiate between two phenotypes (context). They can also interactively visualize the resulting subnetworks. iCOSSY is a web server that finds subnetworks that are differentially expressed in two phenotypes. Users can visualize the subnetworks to understand the biology of the difference. PMID:26147457
PINTA: a web server for network-based gene prioritization from expression data
Nitsch, Daniela; Tranchevent, Léon-Charles; Gonçalves, Joana P.; Vogt, Josef Korbinian; Madeira, Sara C.; Moreau, Yves
2011-01-01
PINTA (available at http://www.esat.kuleuven.be/pinta/; this web site is free and open to all users and there is no login requirement) is a web resource for the prioritization of candidate genes based on the differential expression of their neighborhood in a genome-wide protein–protein interaction network. Our strategy is meant for biological and medical researchers aiming at identifying novel disease genes using disease specific expression data. PINTA supports both candidate gene prioritization (starting from a user defined set of candidate genes) as well as genome-wide gene prioritization and is available for five species (human, mouse, rat, worm and yeast). As input data, PINTA only requires disease specific expression data, whereas various platforms (e.g. Affymetrix) are supported. As a result, PINTA computes a gene ranking and presents the results as a table that can easily be browsed and downloaded by the user. PMID:21602267
A Next-Generation Apparatus for Lithium Optical Lattice Experiments
NASA Astrophysics Data System (ADS)
Keshet, Aviv
Quantum simulation is emerging as an ambitious and active subfield of atomic physics. This thesis describes progress towards the goal of simulating condensed matter systems, in particular the physics of the Fermi-Hubbard model, using ultracold Lithium atoms in an optical lattice. A major goal of the quantum simulation program is to observe phase transitions of the Hubbard model, into Néel antiferromagnetic phases and d-wave superfluid phases. Phase transitions are generally accompanied by a change in an underlying correlation in a physical system. Such correlations may be most amenable to probing by looking at fluctuations in the system. Experimental techniques for probing density and magnetization fluctuations in a variety of atomic Fermi systems are developed. The suppression of density fluctuations (or atom "shot noise") in an ideal degenerate Fermi gas is observed by absorption imaging of time-of-flight expanded clouds. In-trap density and magnetization fluctuations are not easy to probe with absorption imaging, due to the extremely high attenuation. A method to probe these fluctuations based on speckle patterns, caused by fluctuations in the index of refraction for a detuned illumination beam, is developed and applied first to weakly interacting and then to strongly interacting in-trap gases. Fluctuation probes such as these will be a crucial tool in future quantum simulation of condensed matter systems. The quantum simulation experiments that we want to perform require a complex sequence of precisely timed computer-controlled events. A distributed GUI-based control system designed with such experiments in mind, the Cicero Word Generator, is described. The system makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature allows this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using an FPGA-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths. Experimental set-ups for producing, manipulating, and probing ultracold atomic gases can be quite complicated. To move forward with a quantum simulation program, it is necessary to have an apparatus that operates with a reliability that is not easily achieved in the face of this complexity. The design of a new apparatus is discussed. This Sodium-Lithium ultracold gas production machine has been engineered to incorporate as much experimental experience as possible to enhance its reliability. Particular attention has been paid to maximizing optical access and the utilization of this optical access, controlling the ambient temperature of the experiment, achieving a high vacuum, and simplifying subsystems where possible. The apparatus is now on the verge of producing degenerate gases, and should serve as a stable platform on which to perform future lattice quantum simulation experiments. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs, docs@mit.edu.)
Protection of Location Privacy Based on Distributed Collaborative Recommendations
Wang, Peng; Yang, Jing; Zhang, Jian-Pei
2016-01-01
In the existing centralized location services system structure, the server is easily attacked and becomes the communication bottleneck, which can cause the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy based on a distributed system. In this strategy, each node establishes profiles of its own location information. When requests for location services appear, the user can obtain the corresponding location services according to the recommendations derived from the neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user sends a service request to the server based on the construction of a k-anonymous data set with the centroid position of the neighbors. We designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the proposed strategy is capable of reducing the frequency of access to the location server, providing better location services, and better protecting the user's location privacy. PMID:27649308
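The fallback step, building a k-anonymous query point from the centroid of the user's own position and k-1 neighbours, can be sketched in a few lines. This is only an illustration of the centroid construction under made-up coordinates; the generalization and encryption layers of the paper are omitted.

```python
# Sketch: form a k-anonymous query point as the centroid of the user and k-1 neighbours.
def k_anonymous_query_point(own_pos, neighbour_positions, k):
    if len(neighbour_positions) < k - 1:
        raise ValueError("not enough neighbours to build a k-anonymous set")
    cloak = [own_pos] + neighbour_positions[:k - 1]   # cloaking set of k positions
    cx = sum(p[0] for p in cloak) / k
    cy = sum(p[1] for p in cloak) / k
    return (cx, cy)                                   # sent to the location server instead of own_pos

me = (39.9042, 116.4074)
neighbours = [(39.9050, 116.4060), (39.9035, 116.4090), (39.9048, 116.4081)]
print(k_anonymous_query_point(me, neighbours, k=4))
```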
Ambrosini, Giovanna; Dreos, René; Kumar, Sunil; Bucher, Philipp
2016-11-18
ChIP-seq and related high-throughput chromatin profiling assays generate ever increasing volumes of highly valuable biological data. To make sense out of it, biologists need versatile, efficient and user-friendly tools for access, visualization and integrative analysis of such data. Here we present the ChIP-Seq command line tools and web server, implementing basic algorithms for ChIP-seq data analysis starting with a read alignment file. The tools are optimized for memory-efficiency and speed, thus allowing for processing of large data volumes on inexpensive hardware. The web interface provides access to a large database of public data. The ChIP-Seq tools have a modular and interoperable design in that the output from one application can serve as input to another one. Complex and innovative tasks can thus be achieved by running several tools in a cascade. The various ChIP-Seq command line tools and web services either complement or compare favorably to related bioinformatics resources in terms of computational efficiency, ease of access to public data and interoperability with other web-based tools. The ChIP-Seq server is accessible at http://ccg.vital-it.ch/chipseq/ .
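The cascading, modular design described above, where each stage consumes the previous stage's output, can be illustrated with a toy three-step pipeline. The function names and parameters below are illustrative stand-ins, not the actual ChIP-Seq command line tools.

```python
# Sketch: a cascade of small, interoperable analysis steps (read binning ->
# enrichment calling -> reporting), mimicking the modular pipeline design.
def count_reads_per_bin(alignments, bin_size=200):
    counts = {}
    for chrom, pos in alignments:
        key = (chrom, pos // bin_size)
        counts[key] = counts.get(key, 0) + 1
    return counts

def call_enriched_bins(counts, threshold=3):
    # Naive enrichment call: any bin with at least `threshold` reads.
    return [bin_id for bin_id, c in counts.items() if c >= threshold]

def report(bins, bin_size=200):
    return [f"{chrom}:{i * bin_size}-{(i + 1) * bin_size}" for chrom, i in sorted(bins)]

alignments = [("chr1", 120), ("chr1", 180), ("chr1", 150), ("chr1", 190),
              ("chr2", 400), ("chr2", 950)]
print(report(call_enriched_bins(count_reads_per_bin(alignments))))
```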
Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi
2011-04-01
The active involvement of hospital laboratory in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategy of independent medical institutions is insufficient. To share the clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data to ReNICS server located at Akita University Hospital from their health care system's clinical computer applications over the internet. We established the system to automate the statistical processes, so that the participants can access the server to monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, our system also provides the documents server, microbiology and antimicrobiotic database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next generation network for quality improvement of infection control.
A collaborative platform for consensus sessions in pathology over Internet.
Zapletal, Eric; Le Bozec, Christel; Degoulet, Patrice; Jaulent, Marie-Christine
2003-01-01
The design of valid databases in pathology faces the problem of diagnostic disagreement between pathologists, and organizing consensus sessions between experts to reduce this variability is a difficult task. The TRIDEM platform addresses the issue of organizing consensus sessions in pathology over the Internet. In this paper, we present the basis for such a collaborative platform. On the one hand, the platform integrates the functionalities of the IDEM consensus module, which alleviates the consensus task by presenting pathologists with preliminary computed consensus through ergonomic interfaces (automatic step). On the other hand, a set of lightweight interaction tools, such as vocal annotations, is implemented to ease communication between experts as they discuss a case (interactive step). The architecture of the TRIDEM platform is based on a JavaServer Pages web server that communicates with the ObjectStore PSE/PRO database used for object storage. The HTML pages generated by the web server run Java applets to perform the different steps (automatic and interactive) of the consensus. The current limitation of the platform is that it only handles a synchronous process. Moreover, improvements such as re-writing the consensus workflow with a protocol such as BPML are already planned.
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and plot them on either Google Earth or Google Maps to examine spatial relations. It also makes sharing field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
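The XML export step mentioned above can be sketched with the standard library: a stored field observation is serialized into a small, machine-readable document. The field names in the example record are assumptions made for illustration, not the system's actual schema.

```python
# Sketch: turn a stored field observation into shareable XML.
import xml.etree.ElementTree as ET

def observation_to_xml(record):
    obs = ET.Element("observation", id=str(record["id"]))
    for key in ("site", "latitude", "longitude", "lithology", "notes"):
        child = ET.SubElement(obs, key)
        child.text = str(record[key])
    return ET.tostring(obs, encoding="unicode")

record = {"id": 17, "site": "Outcrop A", "latitude": 41.66, "longitude": -83.61,
          "lithology": "dolomite", "notes": "cross-bedding visible"}
print(observation_to_xml(record))
```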
Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi
2013-04-10
Sequencing of microbial genomes is important because microbes carry antibiotic and pathogenetic activities. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenetic or antibiotic genes are carried in genomic islands, so a quick genomic island (GI) prediction method is useful for ongoing sequencing genomes. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI-predicting module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). Draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which provides functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences, and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required; the trees can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use, is available to download precompiled, and is also open source. It can be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.
MISTIC2: comprehensive server to study coevolution in protein families.
Colell, Eloy A; Iserte, Javier A; Simonetti, Franco L; Marino-Buslje, Cristina
2018-06-14
Correlated mutations between residue pairs in evolutionarily related proteins arise from constraints needed to maintain a functional and stable protein. Identifying these inter-related positions narrows down the search for structurally or functionally important sites. MISTIC is a server designed to assist users to calculate covariation in protein families and provide them with an interactive tool to visualize the results. Here, we present MISTIC2, an update to the previous server, that allows users to calculate four covariation methods (MIp, mfDCA, plmDCA and gaussianDCA). The results visualization framework has been reworked for improved performance, compatibility and user experience. It includes a circos representation of the information contained in the alignment, an interactive covariation network, a 3D structure viewer and a sequence logo. Other components provide additional information such as residue annotations, a ROC curve for assessing contact prediction, data tables and different ways of filtering the data and exporting figures. Comparison of different methods is easily done and combining scores is also possible. A newly implemented web service allows users to access MISTIC2 programmatically using an API to calculate covariation and retrieve results. MISTIC2 is available at: https://mistic2.leloir.org.ar.
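The raw quantity underlying covariation methods such as MIp is the mutual information between two alignment columns; MIp then subtracts an average product correction, which is omitted in the toy sketch below, as are pseudocounts and gap handling. The alignment is made up for illustration.

```python
# Sketch: mutual information between two columns of a multiple sequence alignment.
import math
from collections import Counter

def column_mi(col_a, col_b):
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

alignment = ["ACDE", "ACDE", "GCHE", "GKHE", "AKDE"]
col1 = [seq[0] for seq in alignment]   # column 1 residues
col3 = [seq[2] for seq in alignment]   # column 3 residues
print(f"MI(col1, col3) = {column_mi(col1, col3):.3f} nats")
```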
Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J
2018-01-01
The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While Rosetta is freely available to academic users, external usage is limited by the need for expertise in the Unix command-line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
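As a conceptual sketch of the decompose/dispatch/combine pattern described above (the original work relied on request-broker and process-group middleware rather than plain threads), the following Python fragment uses hypothetical per-server query functions:

```python
# Conceptual sketch of the decompose/dispatch/combine pattern described above.
# The original work used request-broker and process-group middleware; here plain
# threads and a hypothetical per-server query function stand in for that layer.
from concurrent.futures import ThreadPoolExecutor

def decompose(request, servers):
    """Split one client request into one sub-query per server (illustrative)."""
    return [(server, request) for server in servers]

def dispatch(subtasks, query_server):
    """Send sub-queries to the servers in parallel and collect partial results."""
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        futures = [pool.submit(query_server, srv, req) for srv, req in subtasks]
        return [f.result() for f in futures]

def combine(partials):
    """Merge partial results into a single response for the client."""
    merged = []
    for rows in partials:
        merged.extend(rows)
    return merged

# Example usage with a stubbed-in query function:
def fake_query(server, request):
    return [f"{server}:{request}"]

response = combine(dispatch(decompose("SELECT *", ["db1", "db2", "db3"]), fake_query))
print(response)
```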
National Medical Terminology Server in Korea
NASA Astrophysics Data System (ADS)
Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee
Interoperable EHR (Electronic Health Record) systems necessitate at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary to tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.
CIVET: Continuous Integration, Verification, Enhancement, and Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alger, Brian; Gaston, Derek R.; Permann, Cody J
A Git server (GitHub, GitLab, Bitbucket) sends event notifications to the Civet server, either a "Pull Request" or a "Push" notification. Civet then checks its database to determine which tests need to be run and marks them as ready to run. Civet clients, running on dedicated machines, query the server for available jobs that are ready to run. When a client gets a job, it executes the scripts attached to the job and reports the output and exit status back to the server. When the client updates the server, the server also updates the Git server with the result of the job and updates the main web page.
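A hedged sketch of the client side of this flow is shown below in Python; the endpoint paths and payload fields are assumptions for illustration and not CIVET's actual API.

```python
# Sketch of the client/server flow described above: a client polls the server for
# ready jobs, runs the attached script, and reports output and exit status back.
# Endpoint paths and payload fields are assumptions, not CIVET's actual API.
import subprocess
import time
import requests

SERVER = "https://civet.example.org"   # hypothetical server URL

def poll_and_run():
    job = requests.get(f"{SERVER}/client/ready_jobs").json()   # ask for a ready job
    if not job:
        return
    proc = subprocess.run(job["script"], shell=True,
                          capture_output=True, text=True)
    requests.post(f"{SERVER}/client/job_result", json={
        "job_id": job["id"],
        "exit_status": proc.returncode,
        "output": proc.stdout + proc.stderr,
    })

while True:
    poll_and_run()
    time.sleep(30)    # poll interval; the real client's schedule may differ
```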
The UK Human Genome Mapping Project online computing service.
Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W
1992-04-01
This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of the online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability can be obtained by contacting the UK HGMP-RC directly.
Enhancing the Remote Variable Operations in NPSS/CCDK
NASA Technical Reports Server (NTRS)
Sang, Janche; Follen, Gregory; Kim, Chan; Lopez, Isaac; Townsend, Scott
2001-01-01
Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase code reusability. The remote variable scheme provided in NPSS/CCDK helps programmers easily migrate Fortran codes towards a client-server platform. This scheme gives the client the capability of accessing variables at the server site. In this paper, we review and enhance the remote variable scheme by using the operator overloading features of C++. The enhancement enables NPSS programmers to use remote variables in much the same way as traditional variables. The remote variable scheme adopts a lazy update approach and a prefetch method. The design strategies and implementation techniques are described in detail. Preliminary performance evaluation shows that the communication overhead can be greatly reduced.
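The scheme itself relies on C++ operator overloading; purely as a conceptual sketch of lazy update and prefetch, the Python fragment below models a remote-variable proxy, with the server transport calls as stand-ins rather than the NPSS/CCDK interfaces.

```python
# Conceptual sketch of a remote-variable proxy with lazy update and prefetch.
# The NPSS/CCDK scheme is implemented with C++ operator overloading; this Python
# proxy only illustrates the idea, and the transport calls are stand-ins.
class RemoteVariable:
    def __init__(self, name, server):
        self.name = name
        self.server = server       # object exposing fetch(names) / store(updates)
        self._cache = None
        self._dirty = False

    def get(self):
        if self._cache is None:                    # fetch only on first use
            self._cache = self.server.fetch([self.name])[self.name]
        return self._cache

    def set(self, value):
        self._cache = value
        self._dirty = True                         # lazy update: defer the send

    def flush(self):
        if self._dirty:                            # push deferred writes in one call
            self.server.store({self.name: self._cache})
            self._dirty = False

def prefetch(variables):
    """Fetch several remote variables in one round trip instead of one each."""
    by_server = {}
    for v in variables:
        by_server.setdefault(v.server, []).append(v)
    for server, group in by_server.items():
        values = server.fetch([v.name for v in group])
        for v in group:
            v._cache = values[v.name]
```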
NIAS-Server: Neighbors Influence of Amino acids and Secondary Structures in Proteins.
Borguesan, Bruno; Inostroza-Ponta, Mario; Dorn, Márcio
2017-03-01
The exponential growth in the number of experimentally determined three-dimensional protein structures provides new and relevant knowledge about the conformation of amino acids in proteins. Only a few probability densities of amino acid conformations are publicly available for use in structure validation and prediction methods. NIAS (Neighbors Influence of Amino acids and Secondary structures) is a web-based tool used to extract information about the conformational preferences of amino acid residues and secondary structures in experimentally determined protein templates. This information is useful, for example, to characterize folds and local motifs in proteins and molecular folding, and it can help in the solution of complex problems such as protein structure prediction and protein design. The NIAS-Server and supplementary data are available at http://sbcb.inf.ufrgs.br/nias.
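As a hedged illustration of how per-residue conformational preferences might be tabulated, the Python sketch below bins backbone (phi, psi) angles per amino acid; the bin size and input format are assumptions, not the NIAS pipeline.

```python
# Minimal sketch of tabulating per-residue conformational preferences from
# backbone (phi, psi) angles. The 20-degree bins and the input format are
# illustrative assumptions, not the NIAS pipeline itself.
from collections import defaultdict

def dihedral_histogram(records, bin_size=20):
    """records: iterable of (residue_name, phi_deg, psi_deg) from solved structures."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for res, phi, psi in records:
        bin_key = (int(phi // bin_size), int(psi // bin_size))
        counts[res][bin_key] += 1
        totals[res] += 1
    # Normalise to a probability density per residue type.
    return {res: {b: c / totals[res] for b, c in bins.items()}
            for res, bins in counts.items()}

# Toy usage with made-up angles:
prefs = dihedral_histogram([("ALA", -60.0, -45.0), ("ALA", -62.0, -40.0),
                            ("GLY", 80.0, 10.0)])
print(prefs["ALA"])
```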
Development of a graphical user interface for the global land information system (GLIS)
Alstad, Susan R.; Jackson, David A.
1993-01-01
The process of developing a Motif graphical user interface for the Global Land Information System (GLIS) involved incorporating user requirements, in-house visual and functional design requirements, and Open Software Foundation (OSF) Motif style guide standards. Motif user interface windows were developed, and the software supporting Motif window functions was written in the C programming language. The GLIS architecture was modified to support multiple servers and remote handlers running the X Window System by forming a network of servers and handlers connected by TCP/IP communications. In April 1993, prior to release, the GLIS graphical user interface and system architecture modifications were tested by developers and users located at the EROS Data Center and 11 beta test sites across the country.
An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets.
Hosseini, Parsa; Tremblay, Arianne; Matthews, Benjamin F; Alkharouf, Nadim W
2010-07-02
An Illumina flow cell with all eight lanes occupied produces well over a terabyte of images and gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance: one can very easily be flooded with a great volume of textual, unannotated data irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, supports INDEL detection, SNP identification, and allele calling. Extracting from such an analysis a measure of gene expression in the form of tag counts, and furthermore annotating the sequenced reads, is therefore of significant value. We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag counts while annotating sequenced reads with the genes' presumed functions, from any given CASAVA build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag counting and annotation. The end result is output containing the homology-based functional annotation and the corresponding gene expression measure, signifying how many times sequenced reads were found within the genomic ranges of functional annotations. TASE is a powerful tool that facilitates the annotation of a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deeply into a given CASAVA build and maximize information extraction from a sequencing dataset. TASE is specially designed to translate sequence data in a CASAVA build into functional annotations while producing corresponding gene expression measurements. Such analysis is executed in an ultrafast and highly efficient manner, whether it is a single-read or paired-end sequencing experiment. TASE is a user-friendly and freely available application, allowing rapid analysis and annotation of any given Illumina Solexa sequencing dataset with ease.
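TASE is implemented in Java with a SQL Server backend; the Python sketch below only illustrates the core tag-count idea of counting aligned reads that fall inside annotated gene ranges, with an invented data layout.

```python
# Conceptual sketch of tag counting: count aligned reads that fall inside the
# genomic range of each functional annotation. TASE itself is Java with a SQL
# Server backend; the data layout below is an illustrative assumption, and a
# real implementation would use an interval index or SQL join, not nested loops.
from collections import defaultdict

def tag_counts(reads, annotations):
    """reads: (chrom, position); annotations: (gene, chrom, start, end, function)."""
    counts = defaultdict(int)
    for chrom, pos in reads:
        for gene, a_chrom, start, end, function in annotations:
            if chrom == a_chrom and start <= pos <= end:
                counts[(gene, function)] += 1
    return counts

annotations = [("geneA", "chr1", 100, 500, "putative kinase"),
               ("geneB", "chr1", 900, 1400, "transcription factor")]
reads = [("chr1", 150), ("chr1", 450), ("chr1", 1000)]
print(tag_counts(reads, annotations))   # {('geneA', 'putative kinase'): 2, ...}
```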
miRanalyzer: a microRNA detection and analysis tool for next-generation sequencing experiments.
Hackenberg, Michael; Sturm, Martin; Langenberger, David; Falcón-Pérez, Juan Manuel; Aransay, Ana M
2009-07-01
Next-generation sequencing now allows the sequencing of small RNA molecules and the estimation of their expression levels. Consequently, there is high demand for bioinformatics tools that can cope with the several gigabytes of sequence data generated in each single deep-sequencing experiment. In this context, we developed miRanalyzer, a web server tool for the analysis of deep-sequencing experiments for small RNAs. The web server requires a simple input file containing a list of unique reads and their copy numbers (expression levels). Using these data, miRanalyzer (i) detects all known microRNA sequences annotated in miRBase, (ii) finds all perfect matches against other libraries of transcribed sequences and (iii) predicts new microRNAs. The prediction of new microRNAs is an especially important point, as there are many species with very few known microRNAs. We therefore implemented a highly accurate machine learning algorithm for the prediction of new microRNAs that reaches AUC values of 97.9% and recall values of up to 75% on unseen data. The web tool summarizes all the described steps on a single output page, which provides a comprehensive overview of the analysis and adds links to more detailed output pages for each analysis module. miRanalyzer is available at http://web.bioinformatics.cicbiogune.es/microRNA/.
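A minimal, hedged sketch of the first step, matching unique reads and their copy numbers against known mature miRNA sequences, follows in Python; the two-column input format and the tiny miRBase-like table are illustrative assumptions.

```python
# Minimal sketch of the first analysis step: matching unique reads and their copy
# numbers against known mature miRNA sequences. The input format and the tiny
# miRBase-like table are illustrative assumptions.
from collections import defaultdict

known_mirnas = {                      # mature sequence -> miRBase-style name
    "UGAGGUAGUAGGUUGUAUAGUU": "hsa-let-7a-5p",
    "UAAAGUGCUGACAGUGCAGAU":  "hsa-miR-106b-5p",
}

def match_known(read_counts):
    """read_counts: list of (unique_read_sequence, copy_number)."""
    expression = defaultdict(int)
    unmatched = []
    for seq, copies in read_counts:
        name = known_mirnas.get(seq.replace("T", "U"))   # accept DNA or RNA alphabet
        if name:
            expression[name] += copies                   # sum copy numbers per miRNA
        else:
            unmatched.append((seq, copies))              # candidates for new-miRNA prediction
    return dict(expression), unmatched

reads = [("TGAGGTAGTAGGTTGTATAGTT", 1500), ("ACGTACGTACGT", 3)]
print(match_known(reads))
```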