Public library computer training for older adults to access high-quality Internet health information
Xie, Bo; Bugg, Julie M.
2010-01-01
An innovative experiment to develop and evaluate a public library computer training program to teach older adults to access and use high-quality Internet health information involved a productive collaboration among public libraries, the National Institute on Aging and the National Library of Medicine of the National Institutes of Health (NIH), and a Library and Information Science (LIS) academic program at a state university. One hundred and thirty-one older adults aged 54–89 participated in the study between September 2007 and July 2008. Key findings include: a) participants had overwhelmingly positive perceptions of the training program; b) after learning about two NIH websites (http://nihseniorhealth.gov and http://medlineplus.gov) from the training, many participants started using these online resources to find high quality health and medical information and, further, to guide their decision-making regarding a health- or medically-related matter; and c) computer anxiety significantly decreased (p < .001) while computer interest and efficacy significantly increased (p = .001 and p < .001, respectively) from pre- to post-training, suggesting statistically significant improvements in computer attitudes between pre- and post-training. The findings have implications for public libraries, LIS academic programs, and other organizations interested in providing similar programs in their communities. PMID:20161649
NASA Technical Reports Server (NTRS)
Lawson, Charles L.; Krogh, Fred; Van Snyder, W.; Oken, Carol A.; Mccreary, Faith A.; Lieske, Jay H.; Perrine, Jack; Coffin, Ralph S.; Wayne, Warren J.
1994-01-01
MATH77 is a high-quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. The MATH77 release 4.0 subroutine library is designed to be usable on any computer system supporting the full ANSI standard FORTRAN 77 language.
Building high-quality assay libraries for targeted analysis of SWATH MS data.
Schubert, Olga T; Gillet, Ludovic C; Collins, Ben C; Navarro, Pedro; Rosenberger, George; Wolski, Witold E; Lam, Henry; Amodei, Dario; Mallick, Parag; MacLean, Brendan; Aebersold, Ruedi
2015-03-01
Targeted proteomics by selected/multiple reaction monitoring (S/MRM) or, on a larger scale, by SWATH (sequential window acquisition of all theoretical spectra) MS (mass spectrometry) typically relies on spectral reference libraries for peptide identification. Quality and coverage of these libraries are therefore of crucial importance for the performance of the methods. Here we present a detailed protocol that has been successfully used to build high-quality, extensive reference libraries supporting targeted proteomics by SWATH MS. We describe each step of the process, including data acquisition by discovery proteomics, assertion of peptide-spectrum matches (PSMs), generation of consensus spectra and compilation of MS coordinates that uniquely define each targeted peptide. Crucial steps such as false discovery rate (FDR) control, retention time normalization and handling of post-translationally modified peptides are detailed. Finally, we show how to use the library to extract SWATH data with the open-source software Skyline. The protocol takes 2-3 d to complete, depending on the extent of the library and the computational resources available.
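As an illustration of the consensus-spectrum step in this protocol, here is a minimal Python sketch (ours, not the protocol's implementation): it merges replicate spectra by coarse m/z binning and keeps only peaks that recur in a minimum fraction of replicates. The 0.1 Da bin width, voting threshold and peak values are illustrative assumptions.

```python
# Hypothetical sketch of consensus-spectrum generation; bin width and
# voting threshold are illustrative, not values from the paper.
from collections import defaultdict

def consensus_spectrum(spectra, min_fraction=0.5):
    """Merge replicate spectra (lists of (mz, intensity)) into a consensus."""
    clusters = defaultdict(list)           # 0.1 Da m/z bin -> [(mz, inten), ...]
    for spectrum in spectra:
        for mz, inten in spectrum:
            clusters[round(mz, 1)].append((mz, inten))
    consensus = []
    for peaks in clusters.values():
        if len(peaks) / len(spectra) >= min_fraction:   # peak must recur
            mzs, intens = zip(*peaks)
            consensus.append((sum(mzs) / len(mzs), sum(intens) / len(intens)))
    return sorted(consensus)

replicates = [
    [(244.17, 1.0e4), (458.24, 5.0e3)],
    [(244.18, 9.0e3), (458.23, 6.0e3)],
    [(244.17, 1.1e4), (458.24, 5.5e3), (300.00, 1.0e2)],  # one noise peak
]
print(consensus_spectrum(replicates))  # the noise peak at 300.0 is voted out
```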
ERIC Educational Resources Information Center
Goodgion, Laurel; And Others
1986-01-01
Eight articles in special supplement to "Library Journal" and "School Library Journal" cover a computer program called "Byte into Books"; microcomputers and the small library; creating databases with students; online searching with a microcomputer; quality automation software; Meckler Publishing Company's…
SOA-based digital library services and composition in biomedical applications.
Zhao, Xia; Liu, Enjie; Clapworthy, Gordon J; Viceconti, Marco; Testi, Debora
2012-06-01
Carefully collected, high-quality data are crucial in biomedical visualization, and it is important that the user community has ready access to both this data and the high-performance computing resources needed by the complex, computational algorithms that will process it. Biological researchers generally require data, tools and algorithms from multiple providers to achieve their goals. This paper illustrates our response to the problems that result from this. The Living Human Digital Library (LHDL) project presented in this paper has taken advantage of Web Services to build a biomedical digital library infrastructure that allows clinicians and researchers not only to preserve, trace and share data resources, but also to collaborate at the data-processing level. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Earth science photographs from the U.S. Geological Survey Library
McGregor, Joseph K.; Abston, Carl C.
1995-01-01
This CD-ROM set contains 1,500 scanned photographs from the U.S. Geological Survey Library for use as a photographic glossary of elementary geologic terms. Scholars are encouraged to copy these public domain images into their reports or databases to enhance their presentations. High-quality prints and (or) slides are available upon request from the library. This CD-ROM was produced in accordance with the ISO 9660 standard; however, it is intended for use on DOS-based computer systems only.
Developing Crash-Resistant Electronic Services.
ERIC Educational Resources Information Center
Almquist, Arne J.
1997-01-01
Libraries' dependence on computers can lead to frustrations for patrons and staff during downtime caused by computer system failures. Advice for reducing the number of crashes is provided, focusing on improved training for systems staff, better management of library systems, and the development of computer systems using quality components which…
Assessing the Quality of Academic Libraries on the Web: The Development and Testing of Criteria.
ERIC Educational Resources Information Center
Chao, Hungyune
2002-01-01
This study develops and tests an instrument useful for evaluating the quality of academic library Web sites. Discusses criteria for print materials and human-computer interfaces; user-based perspectives; the use of factor analysis; a survey of library experts; testing reliability through analysis of variance; and regression models. (Contains 53…
Evaluation of School Library Media Centers: Demonstrating Quality.
ERIC Educational Resources Information Center
Everhart, Nancy
2003-01-01
Discusses ways to evaluate school library media programs and how to demonstrate quality. Topics include how principals evaluate programs; sources of evaluative data; national, state, and local instruments; surveys and interviews; Colorado benchmarks; evaluating the use of electronic resources; and computer reporting options. (LRW)
Computers, Education and the Library at The Bronx High School of Science.
ERIC Educational Resources Information Center
Nachbar, Sondra; Sussman, Valerie
1988-01-01
Describes the services and programs offered by the library at The Bronx High School of Science. Topics discussed include the library collection; a basic library skills mini-course for freshmen and incoming sophomores; current uses of the library's computer system; and plans to automate the library's card catalog and circulation records.…
Automated design of degenerate codon libraries.
Mena, Marco A; Daugherty, Patrick S
2005-12-01
Degenerate codon libraries are frequently used in protein engineering and evolution studies but are often limited to targeting a small number of positions to adequately limit the search space. To mitigate this, codon degeneracy can be limited using heuristics or previous knowledge of the targeted positions. To automate design of libraries given a set of amino acid sequences, an algorithm (LibDesign) was developed that generates a set of possible degenerate codon libraries, their resulting size, and their score relative to a user-defined scoring function. A gene library of a specified size can then be constructed that is representative of the given amino acid distribution or that includes specific sequences or combinations thereof. LibDesign provides a new tool for automated design of high-quality protein libraries that more effectively harness existing sequence-structure information derived from multiple sequence alignment or computational protein design data.
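To make the degenerate-codon idea concrete, here is a small sketch (ours, not LibDesign's algorithm): it expands an IUPAC degenerate codon into its concrete codons and scores how well the encoded amino acids cover a desired residue set, using a deliberately tiny codon-table fragment.

```python
# Illustrative sketch, not the LibDesign scoring function.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "B": "CGT",
         "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

# Minimal codon-table fragment, just enough for the demo below.
CODON_TABLE = {"AAA": "K", "AAG": "K", "AAC": "N", "AAT": "N",
               "AGA": "R", "AGG": "R", "AGC": "S", "AGT": "S"}

def expand(degenerate_codon):
    """All concrete codons matched by a 3-letter IUPAC degenerate codon."""
    return ["".join(b) for b in product(*(IUPAC[c] for c in degenerate_codon))]

def coverage(degenerate_codon, wanted):
    """Fraction of the wanted residues encoded by the degenerate codon."""
    encoded = {CODON_TABLE[c] for c in expand(degenerate_codon)}
    return len(encoded & wanted) / len(wanted), encoded

score, encoded = coverage("ARR", {"K", "R"})  # ARR -> AAA/AAG/AGA/AGG
print(score, sorted(encoded))                 # 1.0 ['K', 'R']
```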
Libraries for Software Use on Peregrine | High-Performance Computing | NREL
-specific libraries. Libraries list (Name: Description): BLAS: Basic Linear Algebra Subroutines, libraries only; …: managing hierarchically structured data; LAPACK: Standard Netlib offering for computational linear algebra
Surgical pathology report in the era of desktop publishing.
Pillarisetti, S G
1993-01-01
Since it is believed that "a picture is worth a thousand words," computer-generated line art was incorporated as an adjunct to the gross description in surgical pathology reporting in selected cases. The lack of an integrated software program was overcome by using commercially available graphics and word processing software. A library of drawings was developed over the last few years. Most time-consuming is the development of the templates and the graphics library. With some effort it is possible to integrate high-quality graphics into surgical pathology reports.
Evaluating the Impact of Library Instruction Methods on the Quality of Student Research.
ERIC Educational Resources Information Center
Ackerson, Linda G.; Young, Virginia E.
1994-01-01
A three-year study at the University of Alabama compared a traditional lecture method for teaching library research skills with a course-integrated, computer-enhanced approach by assessing each method's impact on the quality of bibliographies from engineering students' term papers. In four of the five semesters, no significant differences were…
Development library of finite elements for computer-aided design system of reed sensors
NASA Astrophysics Data System (ADS)
Kozlov, A. S.; Shmakov, N. A.; Tkalich, V. L.; Labkovskaia, R. I.; Kalinkina, M. E.; Pirozhnikova, O. I.
2018-05-01
The article is devoted to the development of a modern, highly reliable element base for security and fire alarm system devices, in particular to improving the quality of the contact cores (reed and membrane) of reed sensors. Modeling of the elastic sensitive elements uses quadrangular plate and shell elements, considered in a system of curvilinear orthogonal coordinates. The developed mathematical models and the resulting finite element library are intended for computer-aided design systems for reed switch detectors, to create competitive alarm devices. The finite element library is used in the automated production of reed switch detectors, both in series production and in the fulfillment of individual orders.
Design and implementation of a cloud based lithography illumination pupil processing application
NASA Astrophysics Data System (ADS)
Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie
2017-02-01
Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the websocket protocol and JSON format are used for communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and a LaTeX-based automatic reporting system. The cloud-based framework takes advantage of the server's superior computing power and rich software collection, and the program can run anywhere there is a modern browser thanks to its web UI design. Compared to the traditional software operation model (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud-based approach requires no installation and is easy to use and maintain, opening up a new way of delivering software. Cloud-based applications may well be the future of software development.
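A minimal sketch of the client/server pattern the paper describes, a browser client sending JSON over a websocket and the server answering with computed results, is shown below. The message fields and the metric are hypothetical, and the third-party `websockets` package (version 11 or later, where handlers take a single argument) is assumed.

```python
# Hypothetical sketch of the websocket/JSON exchange pattern; the
# "ellipticity" metric and message fields are invented for illustration.
import asyncio
import json
import websockets  # third-party package, assumed >= 11

async def handler(ws):
    async for message in ws:
        request = json.loads(message)            # e.g. {"pupil_id": "p01"}
        # Server-side pupil computation would run here (image analysis, etc.).
        reply = {"pupil_id": request.get("pupil_id"), "ellipticity": 0.98}
        await ws.send(json.dumps(reply))

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()                   # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```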
Statistical molecular design of balanced compound libraries for QSAR modeling.
Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K
2010-01-01
A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.
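One simple way to realize the balanced, space-spanning selection that statistical molecular design calls for is greedy max-min (farthest-point) picking in a normalized descriptor space. The sketch below is our illustration with invented descriptor values, not the SMD procedure from the review.

```python
# Hedged sketch: max-min selection of building blocks in descriptor space.
import numpy as np

def maxmin_select(X, k, seed_index=0):
    """Greedily pick k rows of X that are mutually far apart (Euclidean)."""
    chosen = [seed_index]
    dist = np.linalg.norm(X - X[seed_index], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))          # farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

rng = np.random.default_rng(7)
descriptors = rng.normal(size=(50, 3))       # 50 building blocks x 3 descriptors
print(maxmin_select(descriptors, k=5))       # indices of a spread-out subset
```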
Ameisen, David; Deroulers, Christophe; Perrier, Valérie; Bouhidel, Fatiha; Battistella, Maxime; Legrès, Luc; Janin, Anne; Bertheau, Philippe; Yunès, Jean-Baptiste
2014-01-01
Background: Since microscopic slides can now be automatically digitized and integrated in the clinical workflow, quality assessment of Whole Slide Images (WSI) has become a crucial issue. We present a no-reference quality assessment method that has been thoroughly tested since 2010 and is under implementation in multiple sites, both public university-hospitals and private entities. It is part of the FlexMIm R&D project which aims to improve the global workflow of digital pathology. For these uses, we have developed two programming libraries, in Java and Python, which can be integrated in various types of WSI acquisition systems, viewers and image analysis tools. Methods: Development and testing have been carried out on a MacBook Pro i7 and on a bi-Xeon 2.7 GHz server. Libraries implementing the blur assessment method have been developed in Java, Python, PHP5 and MySQL5. For web applications, JavaScript, Ajax, JSON and Sockets were also used, as well as the Google Maps API. Aperio SVS files were converted into the Google Maps format using the VIPS and Openslide libraries. Results: We designed the Java library as a Service Provider Interface (SPI), extendable by third parties. Analysis is computed in real time (3 billion pixels per minute). Tests were made on 5000 single images, 200 NDPI WSI, and 100 Aperio SVS WSI converted to the Google Maps format. Conclusions: Applications based on our method and libraries can be used upstream, as calibration and quality control tools for WSI acquisition systems, or as tools to reacquire tiles while the WSI is being scanned. They can also be used downstream to reacquire complete slides that are below the quality threshold for surgical pathology analysis. WSI may also be displayed in a smarter way by sending and displaying the regions of highest quality before other regions. Such quality assessment scores could be integrated as WSI metadata shared in clinical, research or teaching contexts, for a more efficient medical informatics workflow. PMID:25565494
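The abstract does not specify the blur metric, so the sketch below uses a common no-reference stand-in, the variance of the Laplacian, to score tiles so that low-scoring (blurry) regions could be flagged for reacquisition. The tile size and test images are illustrative; SciPy is assumed.

```python
# Illustrative per-tile focus scoring; not the paper's actual metric.
import numpy as np
from scipy import ndimage

def tile_blur_scores(image, tile=256):
    """Return {(row, col): focus score} for each full tile of a 2-D image."""
    scores = {}
    h, w = image.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile].astype(float)
            scores[(r // tile, c // tile)] = ndimage.laplace(patch).var()
    return scores

rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, size=(512, 512)).astype(float)
blurry = ndimage.gaussian_filter(sharp, sigma=5)   # simulate defocus
print(max(tile_blur_scores(sharp).values())
      > max(tile_blur_scores(blurry).values()))    # True: blur lowers the score
```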
phpMs: A PHP-Based Mass Spectrometry Utilities Library.
Collins, Andrew; Jones, Andrew R
2018-03-02
The recent establishment of cloud computing, high-throughput networking, and more versatile web standards and browsers has led to a renewed interest in web-based applications. While traditionally big data has been the domain of optimized desktop and server applications, it is now possible to store vast amounts of data and perform the necessary calculations offsite in cloud storage and computing providers, with the results visualized in a high-quality cross-platform interface via a web browser. There are a number of emerging platforms for cloud-based mass spectrometry data analysis; however, there is limited pre-existing code accessible to web developers, especially for those that are constrained to a shared hosting environment where Java and C applications are often forbidden from use by the hosting provider. To remedy this, we provide an open-source mass spectrometry library for one of the most commonly used web development languages, PHP. Our new library, phpMs, provides objects for storing and manipulating spectra and identification data, utilities for file reading, file writing, calculations, peptide fragmentation, and protein digestion, as well as a software interface for controlling search engines. We provide a working demonstration of some of the capabilities at http://pgb.liv.ac.uk/phpMs.
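As an example of the fragmentation utilities such a library provides, here is a peptide b/y-ion calculator sketched in Python rather than PHP. The residue masses are standard monoisotopic values; the peptide and the restriction to singly charged ions are illustrative choices, not phpMs's API.

```python
# Sketch of singly charged b/y fragment-ion m/z calculation.
RESIDUE_MASS = {"P": 97.05276, "E": 129.04259, "T": 101.04768,
                "I": 113.08406, "D": 115.02694}   # monoisotopic, demo subset
WATER, PROTON = 18.010565, 1.007276

def by_ions(peptide):
    """Singly charged b- and y-ion m/z values for a peptide string."""
    masses = [RESIDUE_MASS[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b, y

b, y = by_ions("PEPTIDE")
print([round(m, 3) for m in b])   # b1..b6
print([round(m, 3) for m in y])   # y6..y1
```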
Towards the construction of high-quality mutagenesis libraries.
Li, Heng; Li, Jing; Jin, Ruinan; Chen, Wei; Liang, Chaoning; Wu, Jieyuan; Jin, Jian-Ming; Tang, Shuang-Yan
2018-07-01
To improve the quality of mutagenesis libraries used in directed evolution strategies. In the process of library transformation, transformants that take up more than one plasmid might constitute more than 20% of the constructed library, extensively impairing library quality. We propose a practical transformation method that prevents the occurrence of multiple-plasmid transformants while maintaining high transformation efficiency. A visual library model containing plasmids expressing different fluorescent proteins was used. Multiple-plasmid transformants can be reduced by optimizing the amount of plasmid DNA used for transformation, based on the positive correlation between the occurrence frequency of multiple-plasmid transformants and the logarithmic ratio of plasmid molecules to competent cells. This method provides a simple solution for a seemingly common but often neglected problem, and should be valuable for improving the quality of mutagenesis libraries to enhance the efficiency of directed evolution strategies.
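The reported positive correlation between multi-plasmid transformants and the plasmid-to-cell ratio can be made intuitive with a simple Poisson model of plasmid uptake. The Poisson assumption and the rates below are ours, not the authors' analysis.

```python
# Poisson sketch: fraction of transformants carrying >= 2 plasmids,
# as a function of the mean plasmid uptake rate lam per competent cell.
import math

def multi_plasmid_fraction(lam):
    """P(cell took >= 2 plasmids | it took >= 1), Poisson uptake rate lam."""
    p0 = math.exp(-lam)        # no plasmid
    p1 = lam * p0              # exactly one plasmid
    return (1 - p0 - p1) / (1 - p0)

for lam in (0.05, 0.2, 1.0):   # lowering the DNA:cell ratio suppresses doubles
    print(lam, round(multi_plasmid_fraction(lam), 3))
```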
Evaluation of hydrothermal resources of North Dakota. Phase II. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, K.L.; Howell, F.L.; Winczewski, L.M.
1981-06-01
The Phase II activities dealt with three main topical areas: geothermal gradient and heat-flow studies, stratigraphic studies, and water quality studies. Efforts were concentrated on Mesozoic and Cenozoic rocks. The geothermal gradient and heat-flow studies involved running temperature logs in groundwater observation holes in areas of interest, and locating, obtaining access to, and casing holes of convenience to be used as heat-flow determination sites. The stratigraphic and water quality studies involved two main efforts: updating and expanding WELLFILE and assembling a computer library system (WELLCAT) for all water wells drilled in the state. WATERCAT combines data from the United States Geological Survey Water Resources Division's WATSTOR and GWST computer libraries, and includes physical, stratigraphic, and water quality data. Goals, methods, and results are presented.
Construction of High-Quality Camel Immune Antibody Libraries.
Romão, Ema; Poignavent, Vianney; Vincke, Cécile; Ritzenthaler, Christophe; Muyldermans, Serge; Monsion, Baptiste
2018-01-01
Single-domain antibody libraries of heavy-chain-only immunoglobulins from camelids or shark are enriched for high-affinity antigen-specific binders by a short in vivo immunization. Thus, potent binders are readily retrieved from relatively small-sized libraries of 10^7-10^8 individual transformants, mostly after phage display and panning on a purified target. However, the remaining drawback of this strategy arises from the need to generate a dedicated library for nearly every envisaged target. Therefore, all procedures that shorten and facilitate the construction of an immune library of the best possible quality are definitely a step forward. In this chapter, we provide the protocol to generate a high-quality immune VHH library using the Golden Gate Cloning strategy, employing an adapted phage display vector in which a lethal ccdB gene has to be substituted by the VHH gene. With this procedure, the construction of the library can be shortened to less than a week, starting from bleeding the animal. Our libraries exceed 10^8 individual transformants, and close to 100% of the clones harbor a phage display vector having an insert with the length of a VHH gene. These libraries are also more economical to make than previous standard approaches using classical restriction enzymes and ligations. The quality of the Nanobodies retrieved from immune libraries obtained by Golden Gate Cloning is identical to those from immune libraries made according to the classical procedure.
Case Studies in Library Computer Systems.
ERIC Educational Resources Information Center
Palmer, Richard Phillips
Twenty descriptive case studies of computer applications in a variety of libraries are presented in this book. Computerized circulation, serial and acquisition systems in public, high school, college, university and business libraries are included. Each of the studies discusses: 1) the environment in which the system operates, 2) the objectives of…
xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina
2017-03-01
Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.
Ayres, Daniel L.; Darling, Aaron; Zwickl, Derrick J.; Beerli, Peter; Holder, Mark T.; Lewis, Paul O.; Huelsenbeck, John P.; Ronquist, Fredrik; Swofford, David L.; Cummings, Michael P.; Rambaut, Andrew; Suchard, Marc A.
2012-01-01
Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software. PMID:21963610
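The core computation BEAGLE accelerates is the phylogenetic likelihood. The sketch below shows the Felsenstein-style summation over an unobserved root state for a single site on a two-leaf tree under the Jukes-Cantor model; branch lengths are arbitrary example values, and real libraries vectorize this over sites, nodes and models.

```python
# Minimal single-site likelihood under Jukes-Cantor on a two-leaf tree.
import math

BASES = "ACGT"

def jc_prob(i, j, t):
    """Jukes-Cantor transition probability between bases i and j after branch length t."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

def site_likelihood(leaf_a, leaf_b, t_a, t_b):
    """Likelihood of two observed bases on a cherry, uniform root prior."""
    total = 0.0
    for root in BASES:  # sum over the unobserved root state
        total += 0.25 * jc_prob(root, leaf_a, t_a) * jc_prob(root, leaf_b, t_b)
    return total

print(site_likelihood("A", "A", 0.1, 0.1))  # identical bases: higher likelihood
print(site_likelihood("A", "C", 0.1, 0.1))  # differing bases: lower likelihood
```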
Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers
NASA Astrophysics Data System (ADS)
Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi
2018-03-01
Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
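For reference, the exact-exchange operator such an implementation evaluates corresponds to the standard Hartree-Fock-like exchange energy, written here for real orbitals in our own notation (not taken from the paper):

```latex
E_x = -\frac{1}{2} \sum_{i,j}^{\mathrm{occ}} \iint
\frac{\psi_i(\mathbf{r}) \, \psi_j(\mathbf{r}) \,
      \psi_j(\mathbf{r}') \, \psi_i(\mathbf{r}')}
     {\lvert \mathbf{r} - \mathbf{r}' \rvert}
\, d\mathbf{r} \, d\mathbf{r}'
```

The double sum over occupied orbital pairs, each requiring a Poisson-type integral, is what makes hybrid functionals so much more expensive than semilocal ones.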
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the Earth's global gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e., the number of parameters to be estimated) and on the other hand on the number of observations (several millions, e.g., for observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid composed of a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
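The block-cyclic prerequisite is easy to state concretely: under a 2-D block-cyclic layout of the kind ScaLAPACK and PBLAS assume, ownership of a global matrix entry is a pure index computation, as in this sketch (block and grid sizes are illustrative).

```python
# Which process in a Pr x Pc grid owns global entry (i, j) under a
# 2-D block-cyclic distribution with mb x nb blocks (0-indexed throughout).
def owner(i, j, mb, nb, p_rows, p_cols):
    """Process grid coordinates owning global matrix entry (i, j)."""
    return ((i // mb) % p_rows, (j // nb) % p_cols)

mb = nb = 64           # block size
p_rows, p_cols = 4, 8  # a 32-process grid
print(owner(0, 0, mb, nb, p_rows, p_cols))      # (0, 0)
print(owner(300, 700, mb, nb, p_rows, p_cols))  # (300//64)%4 = 0, (700//64)%8 = 2
```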
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
Catalogue identifier: AEOI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: UTK license.
No. of lines in distributed program, including test data, etc.: 167900
No. of bytes in distributed program, including test data, etc.: 1422058
Distribution format: tar.gz
Programming language: C and CUDA.
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
Classification: 4.13, 6.5.
Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library allowing a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
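GASPRNG itself is C/CUDA, so as a language-neutral illustration of the independent-parallel-streams idea it implements, the sketch below spawns per-worker seeds and counter-based Philox generators with NumPy. This mirrors the usage pattern, not GASPRNG's actual API.

```python
# Independent parallel random streams via spawned seed sequences.
import numpy as np

root = np.random.SeedSequence(12345)
children = root.spawn(4)                       # one child seed per worker/GPU
streams = [np.random.Generator(np.random.Philox(s)) for s in children]

# Each stream is statistically independent of the others.
for rank, gen in enumerate(streams):
    print(rank, gen.random(3))
```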
ERIC Educational Resources Information Center
Rodriguez-Lesmes, Paul; Trujillo, Jose Daniel; Valderrama, Daniel
2015-01-01
This paper analyzes the relation between public, education-related infrastructure and the quality of education in schools. The analysis uses a case study of the establishment of two large, high-quality public libraries in low-income areas in Bogotá, Colombia. It assesses the impact of these libraries on the quality of education by comparing…
A Method of Predicting Queuing at Library Online PCs
ERIC Educational Resources Information Center
Beranek, Lea G.
2006-01-01
On-campus networked personal computer (PC) usage at La Trobe University Library was surveyed during September 2005. The survey's objectives were to confirm peak usage times, to measure some of the relevant parameters of online PC usage, and to determine the effect that 24 new networked PCs had on service quality. The survey found that clients…
Application Reuse Library for Software, Requirements, and Guidelines
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Thronesbery, Carroll
1994-01-01
Better designs are needed for expert systems and other operations automation software, for more reliable, usable and effective human support. A prototype computer-aided Application Reuse Library shows feasibility of supporting concurrent development and improvement of advanced software by users, analysts, software developers, and human-computer interaction experts. Such a library expedites development of quality software, by providing working, documented examples, which support understanding, modification and reuse of requirements as well as code. It explicitly documents and implicitly embodies design guidelines, standards and conventions. The Application Reuse Library provides application modules with Demo-and-Tester elements. Developers and users can evaluate applicability of a library module and test modifications, by running it interactively. Sub-modules provide application code and displays and controls. The library supports software modification and reuse, by providing alternative versions of application and display functionality. Information about human support and display requirements is provided, so that modifications will conform to guidelines. The library supports entry of new application modules from developers throughout an organization. Example library modules include a timer, some buttons and special fonts, and a real-time data interface program. The library prototype is implemented in the object-oriented G2 environment for developing real-time expert systems.
Deep sequencing in library selection projects: what insight does it bring?
Glanville, J; D'Angelo, S; Khan, T A; Reddy, S T; Naranjo, L; Ferrara, F; Bradbury, A R M
2015-08-01
High throughput sequencing is poised to change all aspects of the way antibodies and other binders are discovered and engineered. Millions of available sequence reads provide an unprecedented sampling depth able to guide the design and construction of effective, high quality naïve libraries containing tens of billions of unique molecules. Furthermore, during selections, high throughput sequencing enables quantitative tracing of enriched clones and position-specific guidance to amino acid variation under positive selection during antibody engineering. Successful application of the technologies relies on specific PCR reagent design, correct sequencing platform selection, and effective use of computational tools and statistical measures to remove error, identify antibodies, estimate diversity, and extract signatures of selection from the clone down to individual structural positions. Here we review these considerations and discuss some of the remaining challenges to the widespread adoption of the technology. Copyright © 2015 Elsevier Ltd. All rights reserved.
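One of the computations the review points to, diversity estimation, can be illustrated with a short sketch. Chao1 is one common richness estimator; the choice of Chao1 and the clone counts below are our illustration, not a method the review prescribes.

```python
# Chao1 lower bound on the number of unique clones from per-clone read counts.
from collections import Counter

def chao1(clone_counts):
    """Chao1 richness estimate from a list of per-clone read counts."""
    s_obs = len(clone_counts)
    f1 = sum(1 for c in clone_counts if c == 1)   # singletons
    f2 = sum(1 for c in clone_counts if c == 2)   # doubletons
    if f2 == 0:                                    # bias-corrected variant
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

reads = ["cloneA"] * 50 + ["cloneB"] * 3 + ["cloneC", "cloneD"]
counts = list(Counter(reads).values())             # [50, 3, 1, 1]
print(chao1(counts))                               # 4 observed + estimated unseen
```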
A standard for measuring metadata quality in spectral libraries
NASA Astrophysics Data System (ADS)
Rasaiah, B.; Jones, S. D.; Bellman, C.
2013-12-01
There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high-quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets, across many campaigns. Examined are the challenges presented by ensuring that data creators, owners, and data users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.
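A metadata completeness measure of the kind the proposed standard would formalize can be sketched directly. The required-field set below is an invented example, not the standard's actual element list.

```python
# Illustrative metadata completeness check; field names are hypothetical.
REQUIRED = {"instrument", "calibration_date", "target", "illumination",
            "viewing_geometry", "units"}

def completeness(record):
    """Fraction of required metadata fields present and non-empty, plus gaps."""
    present = {k for k, v in record.items() if v not in (None, "", [])}
    return len(REQUIRED & present) / len(REQUIRED), sorted(REQUIRED - present)

record = {"instrument": "ASD FieldSpec", "target": "leaf", "units": "reflectance"}
score, missing = completeness(record)
print(score, missing)  # 0.5 ['calibration_date', 'illumination', 'viewing_geometry']
```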
Darwin Assembly: fast, efficient, multi-site bespoke mutagenesis
Cozens, Christopher
2018-01-01
Engineering proteins for designer functions and biotechnological applications almost invariably requires (or at least benefits from) multiple mutations to non-contiguous residues. Several methods for multiple site-directed mutagenesis exist, but there remains a need for fast and simple methods to efficiently introduce such mutations, particularly for generating large, high-quality libraries for directed evolution. Here, we present Darwin Assembly, which can deliver high-quality libraries of >10^8 transformants, targeting multiple (>10) distal sites with minimal wild-type contamination (<0.25% of total population) and which takes a single working day from purified plasmid to library transformation. We demonstrate its efficacy with whole gene codon reassignment of chloramphenicol acetyl transferase, mutating 19 codons in a single reaction in KOD DNA polymerase and generating high-quality, multiple-site libraries in T7 RNA polymerase and Tgo DNA polymerase. Darwin Assembly uses commercially available enzymes, can be readily automated, and offers a cost-effective route to highly complex and customizable library generation. PMID:29409059
Oberacher, Herbert
2013-01-01
The “Critical Assessment of Small Molecule Identification” (CASMI) contest was aimed at testing strategies for small molecule identification that are currently available in the experimental and computational mass spectrometry community. We applied tandem mass spectral library search to solve Category 2 of the CASMI Challenge 2012 (best identification for high resolution LC/MS data). More than 230,000 tandem mass spectra from four well-established libraries (MassBank, the collection of tandem mass spectra of the “NIST/NIH/EPA Mass Spectral Library 2012”, METLIN, and the ‘Wiley Registry of Tandem Mass Spectral Data, MSforID’) were searched. The sample spectra acquired in positive ion mode were processed. Seven out of 12 challenges did not produce putative positive matches, simply because reference spectra were not available for the compounds searched. This suggests that, to some extent, the limited coverage of chemical space with high-quality reference spectra is still a problem encountered in tandem mass spectral library search. Solutions were submitted for five challenges. Three compounds were correctly identified (kanamycin A, benzyldiphenylphosphine oxide, and 1-isopropyl-5-methyl-1H-indole-2,3-dione). In the absence of any reference spectrum, a false positive identification was obtained for 1-aminoanthraquinone by matching the corresponding sample spectrum to the structurally related compounds N-phenylphthalimide and 2-aminoanthraquinone. Another false positive result was submitted for 1H-benz[g]indole; for the 1H-benz[g]indole-specific sample spectra provided, carbazole was listed as the best matching compound. In this case, the quality of the available 1H-benz[g]indole-specific reference spectra was found to hamper unequivocal identification. PMID:24957994
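Library search engines use more refined scores, but the essence of spectrum-to-reference matching can be sketched as cosine similarity between coarsely binned peak lists. The peaks and the 1 Da bin width below are invented for illustration.

```python
# Toy spectral matching: cosine similarity of binned peak lists.
import math
from collections import defaultdict

def binned(spectrum, bin_width=1.0):
    """Sum intensities into integer m/z bins."""
    bins = defaultdict(float)
    for mz, inten in spectrum:
        bins[int(mz / bin_width)] += inten
    return bins

def cosine(query, reference, bin_width=1.0):
    q, r = binned(query, bin_width), binned(reference, bin_width)
    dot = sum(q[k] * r[k] for k in q.keys() & r.keys())
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in r.values())))
    return dot / norm if norm else 0.0

sample = [(91.05, 100.0), (119.09, 40.0), (163.12, 15.0)]
library_hit = [(91.06, 90.0), (119.08, 50.0)]
print(round(cosine(sample, library_hit), 3))   # ~0.98: a strong match
```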
Computer Utilization by Schools: An Example.
ERIC Educational Resources Information Center
Tondow, Murray
1968-01-01
The Educational Data Services Department of the Palo Alto Unified School District is responsible for implementing data processing needs to improve the quality of education in Palo Alto, California. Information from the schools enters the Department data library to be scanned, coded, and corrected prior to IBM 1620 computer input. Operating 17…
High impact technologies for natural products screening.
Koehn, Frank E
2008-01-01
Natural products have historically been a rich source of lead molecules in drug discovery. However, natural products have been de-emphasized as high throughput screening resources in the recent past, in part because of difficulties in obtaining high quality natural products screening libraries, or in applying modern screening assays to these libraries. In addition, natural products programs based on screening of extract libraries, bioassay-guided isolation, structure elucidation and subsequent production scale-up are challenged to meet the rapid cycle times that are characteristic of the modern HTS approach. Fortunately, new technologies in mass spectrometry, NMR and other spectroscopic techniques can greatly facilitate the first components of the process, namely the efficient creation of high-quality natural products libraries, biomolecular target or cell-based screening, and early hit characterization. The success of any high throughput screening campaign is dependent on the quality of the chemical library. The construction and maintenance of a high quality natural products library, whether based on microbial, plant, marine or other sources, is a costly endeavor. The library itself may be composed of samples that are themselves mixtures, such as crude extracts, semi-pure mixtures or single purified natural products. Each of these library designs carries with it distinctive advantages and disadvantages. Crude extract libraries have lower resource requirements for sample preparation, but high requirements for identification of the bioactive constituents. Pre-fractionated libraries can be an effective strategy to alleviate interferences encountered with crude libraries, and may shorten the time needed to identify the active principle. Purified natural product libraries require substantial resources for preparation, but offer the advantage that the hit detection process is reduced to that of synthetic single component libraries. Whether the natural products library consists of crude or partially fractionated mixtures, the library contents should be profiled to identify the known components present, a process known as dereplication. The use of mass spectrometry and HPLC-mass spectrometry together with spectral databases is a powerful tool in the chemometric profiling of bio-sources for natural product production. High throughput, high sensitivity flow NMR is an emerging tool in this area as well. Whether by cell-based or biomolecular target-based assays, screening of natural product extract libraries continues to furnish novel lead molecules for further drug development, despite challenges in the analysis and prioritization of natural products hits. Spectroscopic techniques are now being used to directly screen natural product and synthetic libraries. Mass spectrometry in the form of methods such as ESI-ICRFTMS and FACS-MS, as well as NMR methods such as SAR by NMR and STD-NMR, have been utilized to effectively screen molecular libraries. Overall, emerging advances in mass spectrometry, NMR and other technologies are making it possible to overcome the challenges encountered in screening natural products libraries in today's drug discovery environment. As we apply these technologies and develop them even further, we can look forward to increased impact of natural products in HTS-based drug discovery.
Wei, Hong-Ying; Huang, Sheng; Wang, Jiang-Yong; Gao, Fang; Jiang, Jing-Zhe
2018-03-01
The emergence and widespread use of high-throughput sequencing technologies have promoted metagenomic studies on environmental or animal samples. Library construction for metagenome sequencing and annotation of the produced sequence reads are important steps in such studies and influence the quality of metagenomic data. In this study, we collected marine mollusk samples, such as Crassostrea hongkongensis, Chlamys farreri, and Ruditapes philippinarum, from coastal areas in South China. These samples were divided into two batches to compare two library construction methods for shellfish viral metagenomes. Our analysis showed that reverse-transcribing RNA into cDNA and then amplifying it simultaneously with DNA by whole genome amplification (WGA) yielded a larger amount of DNA compared to using only WGA or WTA (whole transcriptome amplification). Moreover, higher-quality libraries were obtained by agarose gel extraction rather than with AMPure bead size selection. However, the latter can also provide good results if combined with the adjustment of the filter parameters. This, together with its simplicity, makes it a viable alternative. Finally, we compared three annotation tools (BLAST, DIAMOND, and Taxonomer) and two reference databases (NCBI's NR and Uniprot's Uniref). Considering the limitations of computing resources and data transfer speed, we propose the use of DIAMOND with Uniref for annotating metagenomic short reads, as its running speed can guarantee a good annotation rate. This study may serve as a useful reference for selecting methods for shellfish viral metagenome library construction and read annotation.
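As a small illustration of the annotation-rate bookkeeping behind such tool comparisons, the sketch below parses DIAMOND's default BLAST-style tabular output (query sequence id in the first column) and reports the fraction of reads with at least one hit. The file name and read total are hypothetical.

```python
# Compute the fraction of reads annotated, from DIAMOND tabular output.
import csv

def annotation_rate(diamond_tsv, total_reads):
    """Fraction of reads with >= 1 hit in a BLAST-style tab-separated file."""
    annotated = set()
    with open(diamond_tsv, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            annotated.add(row[0])          # column 1 = query sequence id
    return len(annotated) / total_reads

# Hypothetical usage, assuming the file exists:
# print(annotation_rate("reads_vs_uniref.tsv", total_reads=1_000_000))
```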
AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data
NASA Astrophysics Data System (ADS)
Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin
2018-01-01
In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
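AutoCNet's own API is not shown in the abstract, so the sketch below illustrates the underlying class of technique, sparse feature matching for correspondence identification, with OpenCV's ORB detector and brute-force matcher on synthetic images. The opencv-python package is assumed; this is not AutoCNet code.

```python
# Generic sparse correspondence identification with ORB features.
import cv2
import numpy as np

rng = np.random.default_rng(1)
img1 = rng.integers(0, 255, size=(256, 256), dtype=np.uint8)
img2 = np.roll(img1, shift=(5, 3), axis=(0, 1))   # translated copy of img1

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences; best distance "
      f"{matches[0].distance if matches else 'n/a'}")
```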
A reliable computational workflow for the selection of optimal screening libraries.
Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch
2015-01-01
The experimental screening of compound collections is a common starting point in many drug discovery projects. The success of such screening campaigns critically depends on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood-brain barrier permeation model was developed and validated (85% and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors which demonstrated the best performance in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software, yet due to the usage of generic components, it can be easily adapted and reproduced by computational groups interested in the rational selection of screening libraries.
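Step (2) of the workflow can be illustrated with a rule-of-five filter. This sketch uses RDKit rather than the Pipeline Pilot components the authors used, and the SMILES strings are arbitrary examples.

```python
# Lipinski rule-of-five filter sketched with RDKit (third-party package).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_ro5(smiles):
    """True if the molecule satisfies all four Lipinski rule-of-five limits."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

library = ["CC(=O)Oc1ccccc1C(=O)O",      # aspirin
           "CCCCCCCCCCCCCCCCCC(=O)O"]    # stearic acid
print([(smi, passes_ro5(smi)) for smi in library])
```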
Complementary DNA libraries: an overview.
Ying, Shao-Yao
2004-07-01
The generation of complete, full-length cDNA libraries for potential functional assays of specific gene sequences is essential for much of biotechnology and biomedical research. The field of cDNA library generation has changed rapidly in the past 10 years. This review presents an overview of the methods available for generating cDNA libraries, including the definition of a cDNA library, the different kinds of cDNA libraries, the differences between conventional approaches and a novel strategy for cDNA library generation, and the quality of cDNA libraries. It is anticipated that the high-quality cDNA libraries so generated will facilitate studies involving gene chips and microarrays, differential display, subtractive hybridization, gene cloning, and peptide library generation.
Building a High-Tech Library in a Period of Austerity.
ERIC Educational Resources Information Center
Bazillion, Richard J.; Scott, Sue
1991-01-01
Describes the planning process for designing a new library for Algoma University College (Ontario). Topics discussed include the building committee, library policy, design considerations, an electric system that supports computer technology, library automation, the online public access catalog (OPAC), furnishings and interior environment, and…
Leveraging OpenStudio's Application Programming Interfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, N.; Ball, B.; Goldwasser, D.
2013-11-01
OpenStudio development efforts have been focused on providing Application Programming Interfaces (APIs) where users are able to extend OpenStudio without the need to compile the open source libraries. This paper will discuss the basic purposes and functionalities of the core libraries that have been wrapped with APIs, including the Building Model, Results Processing, Advanced Analysis, Uncertainty Quantification, and Data Interoperability through Translators. Several building energy modeling applications have been produced using OpenStudio's API and Software Development Kits (SDK), including the United States Department of Energy's Asset Score Calculator, a mobile-based audit tool, an energy design assistance reporting protocol, and a portfolio-scale incentive optimization analysis methodology. Each of these software applications will be discussed briefly, along with how the APIs were leveraged for various uses including high-level modeling, data transformations from detailed building audits, error checking/quality assurance of models, and use of high-performance computing for mass simulations.
Initial Ada components evaluation
NASA Technical Reports Server (NTRS)
Moebes, Travis
1989-01-01
The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
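A worked sketch of the measures described above may help; the Halstead formulas below are the standard definitions, the thresholds are those quoted in the report, and the operator/operand counts are illustrative rather than taken from the SSE library.

```python
# A worked sketch of the Halstead measures described above, with the report's
# quality thresholds (length > 260 or difficulty > 190 flag poor quality).
# The counts below are illustrative, not taken from the SSE library.
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2.0) * (N2 / n2)
    return {"length": length, "volume": volume, "difficulty": difficulty}

m = halstead(n1=18, n2=30, N1=110, N2=95)
poor_quality = m["length"] > 260 or m["difficulty"] > 190
print(m, "flagged" if poor_quality else "ok")
```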
Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K
2017-02-01
We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
Unlocking Short Read Sequencing for Metagenomics
Rodrigue, Sébastien; Materna, Arne C.; Timberlake, Sonia C.; ...
2010-07-28
We describe an experimental and computational pipeline yielding millions of reads that can exceed 200 bp with quality scores approaching that of traditional Sanger sequencing. The method combines an automatable gel-less library construction step with paired-end sequencing on a short-read instrument. With appropriately sized library inserts, mate-pair sequences can overlap, and we describe the SHERA software package that joins them to form a longer composite read.
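A minimal sketch of the overlap-joining idea follows; it is written from the description above and is not the SHERA code (real implementations also combine per-base quality scores in the overlap region). The reads and thresholds are illustrative.

```python
# A minimal sketch of joining overlapping mate-pair reads into a composite
# read, in the spirit of SHERA; not the SHERA package itself.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def merge_pair(r1: str, r2: str, min_overlap: int = 10, min_identity: float = 0.9):
    r2 = revcomp(r2)  # the mate is sequenced from the opposite strand
    for ov in range(min(len(r1), len(r2)), min_overlap - 1, -1):
        a, b = r1[-ov:], r2[:ov]
        matches = sum(x == y for x, y in zip(a, b))
        if matches / ov >= min_identity:
            return r1 + r2[ov:]          # composite read spanning both mates
    return None  # no acceptable overlap found

print(merge_pair("ACGTACGTGGCCAATT", "GGCCTTAAATTGGCCA"))
```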
USGS Digital Spectral Library splib06a
Clark, Roger N.; Swayze, Gregg A.; Wise, Richard A.; Livo, K. Eric; Hoefen, Todd M.; Kokaly, Raymond F.; Sutley, Stephen J.
2007-01-01
Introduction We have assembled a digital reflectance spectral library that covers the wavelength range from the ultraviolet to far infrared along with sample documentation. The library includes samples of minerals, rocks, soils, physically constructed as well as mathematically computed mixtures, plants, vegetation communities, microorganisms, and man-made materials. The samples and spectra collected were assembled for the purpose of using spectral features for the remote detection of these and similar materials. Analysis of spectroscopic data from laboratory, aircraft, and spacecraft instrumentation requires a knowledge base. The spectral library discussed here forms a knowledge base for the spectroscopy of minerals and related materials of importance to a variety of research programs being conducted at the U.S. Geological Survey. Much of this library grew out of the need for spectra to support imaging spectroscopy studies of the Earth and planets. Imaging spectrometers, such as the National Aeronautics and Space Administration (NASA) Airborne Visible/Infra Red Imaging Spectrometer (AVIRIS) or the NASA Cassini Visual and Infrared Mapping Spectrometer (VIMS), which is currently orbiting Saturn, have narrow bandwidths in many contiguous spectral channels that permit accurate definition of absorption features in spectra from a variety of materials. Identification of materials from such data requires a comprehensive spectral library of minerals, vegetation, man-made materials, and other subjects in the scene. Our research involves the use of the spectral library to identify the components in a spectrum of an unknown. Therefore, the quality of the library must be very good. However, the quality required in a spectral library to successfully perform an investigation depends on the scientific questions to be answered and the type of algorithms to be used. For example, to map a mineral using imaging spectroscopy and the mapping algorithm of Clark and others (1990a, 2003b), one simply needs a diagnostic absorption band. The mapping system uses continuum-removed reference spectral features fitted to features in observed spectra. Spectral features for such algorithms can be obtained from a spectrum of a sample containing large amounts of contaminants, including those that add other spectral features, as long as the shape of the diagnostic feature of interest is not modified. If, however, the data are needed for radiative transfer models to derive mineral abundances from reflectance spectra, then completely uncontaminated spectra are required. This library contains spectra that span a range of quality, with purity indicators to flag spectra for (or against) particular uses. Acquiring spectral measurements and performing sample characterizations for this library have taken about 15 person-years of effort. Software to manage the library and provide scientific analysis capability is provided (Clark, 1980, 1993). A personal computer (PC) reader for the library is also available (Livo and others, 1993). The program reads specpr binary files (Clark, 1980, 1993) and plots spectra. Another program that reads the specpr format is written in IDL (Kokaly, 2005). In our view, an ideal spectral library consists of samples covering a very wide range of materials, has a large wavelength range with very high precision, and has enough sample analyses and documentation to establish the quality of the spectra. Time and available resources limit what can be achieved.
Ideally, for each mineral, the sample analysis would include X-ray diffraction (XRD), electron microprobe (EM) or X-ray fluorescence (XRF), and petrographic microscopic analyses. For some minerals, such as iron oxides, additional analyses such as Mossbauer would be helpful. We have found that to make the basic spectral measurements, provide XRD, EM or XRF analyses, and microscopic analyses, document the results, and complete an entry of one spectral library sample, all takes about
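As an aside on the mapping algorithm referenced above, the following is a minimal numpy sketch of continuum removal: a straight-line continuum is anchored at an absorption feature's shoulders and the spectrum is divided by it. The synthetic spectrum and band edges are illustrative.

```python
# A minimal numpy sketch of the continuum-removal step used by mapping
# algorithms of this kind; the spectrum and band edges are illustrative.
import numpy as np

wavelength = np.linspace(0.8, 1.2, 200)                 # micrometers
reflectance = 0.6 - 0.15 * np.exp(-((wavelength - 1.0) / 0.03) ** 2)

def continuum_removed(wl, refl, left_edge, right_edge):
    i = np.searchsorted(wl, left_edge)
    j = np.searchsorted(wl, right_edge)
    # linear continuum anchored at the feature's shoulders
    continuum = np.interp(wl[i:j], [wl[i], wl[j]], [refl[i], refl[j]])
    return wl[i:j], refl[i:j] / continuum               # values <= 1 inside the band

band_wl, band_cr = continuum_removed(wavelength, reflectance, 0.9, 1.1)
print(1 - band_cr.min())   # band depth = 1 - minimum of the continuum-removed band
```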
Library construction for next-generation sequencing: Overviews and challenges
Head, Steven R.; Komori, H. Kiyomi; LaMere, Sarah A.; Whisenant, Thomas; Van Nieuwerburgh, Filip; Salomon, Daniel R.; Ordoukhanian, Phillip
2014-01-01
High-throughput sequencing, also known as next-generation sequencing (NGS), has revolutionized genomic research. In recent years, NGS technology has steadily improved, with costs dropping and the number and range of sequencing applications increasing exponentially. Here, we examine the critical role of sequencing library quality and consider important challenges when preparing NGS libraries from DNA and RNA sources. Factors such as the quantity and physical characteristics of the RNA or DNA source material as well as the desired application (e.g., genome sequencing, targeted sequencing, RNA-seq, ChIP-seq, RIP-seq, and methylation) are addressed in the context of preparing high quality sequencing libraries. In addition, the current methods for preparing NGS libraries from single cells are also discussed. PMID:24502796
AstroCV: Astronomy computer vision library
NASA Astrophysics Data System (ADS)
González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.
2018-04-01
AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with an emphasis on the automatic detection and classification of galaxies.
Mohaghegh, Niloofar; Raiesi Dehkordi, Puran; Alibeik, MohammadReza; Ghashghaee, Ahmad; Janbozorgi, Mojgan
2016-01-01
Background: In-service training courses are among the most widely available programs used to improve the quantity and quality of staff services in various organizations, including libraries and information centers. With the advent of new technologies in education, the problems and shortcomings of traditional in-service training courses led to their replacement with virtual ones. This study aimed to evaluate virtual in-service training courses from the librarians' point of view in libraries of state universities of medical sciences in Tehran. Methods: This was a descriptive-analytical study. The statistical population consisted of all librarians at libraries of universities of medical sciences in Tehran. Of the 103 librarians working in the libraries under study, 93 (90%) participated. Data were collected using a questionnaire. Results: The results revealed that 94.6% of librarians were satisfied to participate in virtual in-service training courses. Only 45 of the 93 participants said that virtual in-service courses were held in their libraries. Of the participants, 75.6% were satisfied with the length of the training courses, and one month appeared to be an adequate duration for librarian satisfaction. The satisfaction level of individuals who participated in in-service courses of the National Library was moderate to high. A total of 84.4% of participants reported that the productivity of the training courses was moderate to high. The most important problem librarians faced in virtual in-service training was the "low speed of the internet and inadequate computer infrastructure". Conclusion: The effectiveness of in-service training courses from the librarians' point of view was at an optimal level in the studied libraries. PMID:28491833
Willighagen, Egon L; Mayfield, John W; Alvarsson, Jonathan; Berg, Arvid; Carlsson, Lars; Jeliazkova, Nina; Kuhn, Stefan; Pluskal, Tomáš; Rojas-Chertó, Miquel; Spjuth, Ola; Torrance, Gilleain; Evelo, Chris T; Guha, Rajarshi; Steinbeck, Christoph
2017-06-06
The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception. It is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, however, the code base has grown significantly, resulting in many complex interdependencies among components and poor performance of many algorithms. We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvements to existing functionality that have led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules. Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. This paper highlights our continued efforts to provide a community driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer reviewed publishing platform for scientific computing software. Graphical abstract: CDK 2.0 provides new features and improved performance.
Maximizing Library Storage with High-Tech Robotic Shelving
ERIC Educational Resources Information Center
Amrhein, Rick; Resetar, Donna
2004-01-01
This article presents a plan of having a new facility for the library of Valparaiso University. The authors, as dean of library services and assistant university librarian for access services at Valpo, discuss their plan of building a Center for Library and Information Resources that would house more books while also providing computing centers,…
Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A
2001-01-01
In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947
High End Computer Network Testbedding at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Gary, James Patrick
1998-01-01
The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high-end computer networking capabilities. The ESDC has several high-end supercomputers. These are used to: (1) run computer simulations of the climate system; (2) support the Earth and Space Sciences (ESS) project; and (3) support the Grand Challenge (GC) Science, which is aimed at understanding the turbulent convection and dynamos in stars. GC research occurs in many sites throughout the country, and this research is enabled, in part, by the multiple high performance network interconnections. The application drivers for High End Computer Networking use distributed supercomputing to support virtual reality applications, such as TerraVision (i.e., a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVE). Workstations can access and display data from multiple CAVEs with video servers, which allows for group/project collaborations using a combination of video, data, voice and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite and terrestrial-based networks. To this end, the ESDC is conducting research and evaluations of new computer networking protocols and related technologies which improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies and new product developments. Also, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high performance satellite communications and advanced data communications protocols to enable interactive digital library data access between the U.S. Library of Congress, the National Library of Japan and other digital library sites at 155 megabits per second. The ESDC participation in this program is the Trans-Pacific access to GLOBE visualizations in real time. ESDC is participating in the Department of Defense's ATDNet with Multiwavelength Optical Network (MONET), a fully switched Wavelength Division Networking testbed. This presentation is in viewgraph format.
West Asian Special Libraries and Information Centers.
ERIC Educational Resources Information Center
Harvey, John F.
Special libraries are defined in this paper as those libraries serving such institutions as government offices, private corporations, associations, and university departments. Information centers are similar to special libraries but provide personalized, high quality reference service, usually in science and technology, and often using mechanical…
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416-page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5 Mb. The demo drivers comprise 11K lines of code with a total size of 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a 0.25-inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and on a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
Range Image Flow using High-Order Polynomial Expansion
2013-09-01
…included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more… [1] International Journal of Computer Vision, vol. 92, no. 1, pp. 1–31. [2] G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library.
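For context, polynomial-expansion optical flow is available in OpenCV as the Farneback algorithm; the following minimal sketch shows its use on intensity images (range flow extends the idea to range images). Filenames are placeholders.

```python
# A minimal sketch of polynomial-expansion optical flow via OpenCV's
# Farneback implementation; range flow extends this idea to range images.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Each pixel gets a (dx, dy) displacement estimated from local polynomial fits
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5,   # image pyramid downscaling between levels
    levels=3,        # number of pyramid levels
    winsize=15,      # averaging window for the polynomial expansion
    iterations=3,
    poly_n=5,        # neighborhood size of the per-pixel polynomial fit
    poly_sigma=1.2,  # Gaussian sigma weighting the fit
    flags=0,
)
print(flow.shape)    # (H, W, 2) displacement field
```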
Building an E-Book Library: Resources for Finding the Best Apps
ERIC Educational Resources Information Center
Zipke, Marcy
2014-01-01
There is a wide range in the educational value and overall quality of interactive storybook apps for tablet computers (ebooks). Minimally, ebooks for young children present illustrations on a screen, accompanied by an oral reading of the text. Today's ebooks can also include animation, zooming in and out, musical scores, sound effects,…
ERIC Educational Resources Information Center
Westbrook, R. Niccole; Watkins, Sean
2012-01-01
As primary source materials in the library are digitized and made available online, the focus of related library services is shifting to include new and innovative methods of digital delivery via social media, digital storytelling, and community-based and consortial image repositories. Most images on the Web are not of sufficient quality for most…
Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL
Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.
Székely, Andrea; Szekrényes, Akos; Kerékgyártó, Márta; Balogh, Attila; Kádas, János; Lázár, József; Guttman, András; Kurucz, István; Takács, László
2014-08-01
Molecular heterogeneity of mAb preparations is the result of various co- and post-translational modifications and of contaminants related to the production process. Changes in molecular composition result in alterations of functional performance; therefore, quality control and validation of therapeutic or diagnostic protein products are essential. A special case is the consistent production of mAb libraries (QuantiPlasma™ and PlasmaScan™) for proteome profiling, quality control of which represents a challenge because of the high number of mAbs (>1000). Here, we devise a generally applicable multicapillary SDS-gel electrophoresis process for the analysis of fluorescently labeled mAb preparations for the high-throughput quality control of mAbs of the QuantiPlasma™ and PlasmaScan™ libraries. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
INTRIGOSS: A new Library of High Resolution Synthetic Spectra
NASA Astrophysics Data System (ADS)
Franchini, Mariagrazia; Morossi, Carlo; Di Marcantonio, Paolo; Chavez, Miguel; GES-Builders
2018-01-01
INTRIGOSS (INaf Trieste Grid Of Synthetic Spectra) is a new High Resolution (HiRes) synthetic spectral library designed for studying F, G, and K stars. The library is based on atmosphere models computed with specified individual element abundances via the ATLAS12 code. Normalized SPectra (NSP) and surface Flux SPectra (FSP), in the 4800-5400 Å wavelength range, were computed by means of the SPECTRUM code. The synthetic spectra are computed with an atomic and diatomic molecular line list including "bona fide" Predicted Lines (PLs) built by tuning log gf values to reproduce a very high-SNR solar spectrum and the UVES-U580 spectra of five cool giants extracted from the Gaia-ESO Survey (GES). The astrophysical gf-values were then assessed by using more than 2000 stars with homogeneous and accurate atmosphere parameters and detailed chemical compositions from GES. The validity and greater accuracy of INTRIGOSS NSPs and FSPs with respect to other available spectral libraries are discussed. INTRIGOSS will be available on the web and will be a valuable tool for both stellar atmospheric parameter and stellar population studies.
Research Libraries' Costs of Doing Business (and Strategies for Avoiding Them)
ERIC Educational Resources Information Center
Greenstein, Daniel
2004-01-01
With today's relatively flat budgets, research libraries are finding their buying power further diminished by the hyperinflationary costs of library materials. Yet technology innovation, coupled with organizational restructuring, is enabling libraries not only to provide more high-quality information services but also to achieve unparalleled…
The Screening Compound Collection: A Key Asset for Drug Discovery.
Boss, Christoph; Hazemann, Julien; Kimmerlin, Thierry; von Korff, Modest; Lüthi, Urs; Peter, Oliver; Sander, Thomas; Siegrist, Romain
2017-10-25
In this case study on an essential instrument of modern drug discovery, we summarize our successful efforts over the last four years toward enhancing the Actelion screening compound collection. A key organizational step was the establishment of the Compound Library Committee (CLC) in September 2013. This cross-functional team, consisting of computational scientists, medicinal chemists and a biologist, was endowed with a significant annual budget for regular new compound purchases. Based on an initial library analysis performed in 2013, the CLC developed a New Library Strategy. The established continuous library turnover mode and the screening library size of 300,000 compounds were maintained, while the structural library quality was increased. This was achieved by shifting the selection criteria from 'druglike' to 'leadlike' structures, enriching for non-flat structures, aiming for compound novelty, and increasing the ratio of higher-cost 'Premium Compounds'. Novel chemical space was gained by adding natural compounds, macrocycles, and designed and focused libraries to the collection, and through mutual exchanges of proprietary compounds with agrochemical companies. A comparative analysis in 2016 provided evidence for the positive impact of these measures. Screening the improved library has provided several highly promising hits, including a macrocyclic compound, that are currently being followed up in different Hit-to-Lead and Lead Optimization programs. It is important to state that the goal of the CLC was not to achieve higher HTS hit rates, but to increase the chances of identified hits serving as the basis of successful early drug discovery programs. The experience gathered so far legitimates the New Library Strategy.
Human Aspects of High Tech in Special Libraries.
ERIC Educational Resources Information Center
Bichteler, Julie
1986-01-01
This investigation of library employees who spend a significant portion of their time in online computer interaction provides information on the intellectual, psychological, social, and physical aspects of their work. Long- and short-term effects of high tech in special libraries are identified and solutions to "technostress" problems are suggested. (16…
A novel spectral library workflow to enhance protein identifications.
Li, Haomin; Zong, Nobel C; Liang, Xiangbo; Kim, Allen K; Choi, Jeong Ho; Deng, Ning; Zelaya, Ivette; Lam, Maggie; Duan, Huilong; Ping, Peipei
2013-04-09
The innovations in mass spectrometry-based investigations in proteome biology enable systematic characterization of molecular details in pathophysiological phenotypes. However, the process of delineating large-scale raw proteomic datasets into a biological context requires high-throughput data acquisition and processing. A spectral library search engine makes use of previously annotated experimental spectra as references for subsequent spectral analyses. This workflow delivers many advantages, including elevated analytical efficiency and specificity as well as reduced demands in computational capacity. In this study, we created a spectral matching engine to address challenges commonly associated with a library search workflow. In particular, an improved sliding dot product algorithm that is robust to systematic drifts of mass measurement in spectra is introduced. Furthermore, a noise management protocol distinguishes spectral correlations attributable to noise from those attributable to peptide fragments. It enables elevated separation between target spectral matches and false matches, thereby suppressing the possibility of propagating inaccurate peptide annotations from library spectra to query spectra. Moreover, preservation of original spectra also accommodates user contributions to further enhance the quality of the library. Collectively, this search engine supports reproducible data analyses using curated references, thereby broadening the accessibility of proteomics resources to biomedical investigators. This article is part of a Special Issue entitled: From protein structures to clinical applications. Copyright © 2013 Elsevier B.V. All rights reserved.
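A minimal sketch of a drift-tolerant sliding dot product follows; it is written from the description above and is not the authors' engine. Spectra are binned and compared at several small m/z offsets, keeping the best score; bin width and shift range are illustrative assumptions.

```python
# A minimal sketch of a sliding dot product between a query spectrum and a
# library spectrum, tolerant to systematic m/z drift; not the authors' code.
import numpy as np

def binned(peaks, mz_max=2000.0, bin_width=1.0):
    vec = np.zeros(int(mz_max / bin_width))
    for mz, intensity in peaks:
        vec[int(mz / bin_width)] += intensity
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def sliding_dot(query, library, max_shift=2):
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(library, s)     # tolerate systematic mass drift
        best = max(best, float(query @ shifted))
    return best

q = binned([(500.3, 40.0), (742.4, 100.0), (989.5, 25.0)])
l = binned([(501.3, 38.0), (743.4, 95.0), (990.5, 30.0)])  # drifted by ~1 Th
print(sliding_dot(q, l))
```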
ERIC Educational Resources Information Center
Auten, Beth; Glauner, Dana; Lefoe, Grant; Henry, Jo
2016-01-01
This article explains how the South Piedmont Community College librarians educated faculty members about how they guide student use of the library and how it is advantageous for them to require high-quality resources for student papers. As documented by observations at SPCC and in the library literature, students use the tools they know to find…
Hit discovery and hit-to-lead approaches.
Keseru, György M; Makara, Gergely M
2006-08-01
Hit discovery technologies range from traditional high-throughput screening to affinity selection of large libraries, fragment-based techniques and computer-aided de novo design, many of which have been extensively reviewed. Development of quality leads using hit confirmation and hit-to-lead approaches present their own challenges, depending on the hit discovery method used to identify the initial hits. In this paper, we summarize common industry practices adopted to tackle hit-to-lead challenges and review how the advantages and drawbacks of different hit discovery techniques could affect the various issues hit-to-lead groups face.
Briel, L.I.
1993-01-01
A computer program was written to produce six different types of water-quality diagrams--Piper, Stiff, pie, X-Y, boxplot, and Piper 3-D--from the same file of input data. The Piper 3-D diagram is a new method that projects values from the surface of a Piper plot into a triangular prism to show how variations in chemical composition can be related to variations in other water-quality variables. This program is an analytical tool to aid in the interpretation of data. The program is interactive, and the user can select from a menu the type of diagram to be produced and a large number of individual features. Alternatively, these choices can be specified in the data file, which provides a batch mode for running the program. The program does not display water-quality diagrams directly; plots are written to a file. Four different plot-file formats are available: device-independent metafiles, Adobe PostScript graphics files, and two Hewlett-Packard graphics language formats (7475 and 7586). An ASCII data-table file is also produced to document the computed values. The program is written in FORTRAN 77 and uses graphics subroutines from either the PRIOR AGTK or the DISSPLA graphics library. The program has been implemented on Prime series 50 and Data General Aviion computers within the USGS; portability to other computing systems depends on the availability of the graphics library.
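As a small illustration of the arithmetic underlying a Piper diagram, the sketch below converts major-ion concentrations to milliequivalents per liter and then to cation/anion percentages. The program itself is FORTRAN; this Python sketch shows only the computation, using standard equivalent weights.

```python
# A minimal sketch of the ion arithmetic behind a Piper plot: mg/L values are
# converted to meq/L and then to cation/anion percentages. The equivalent
# weights are standard values; the sample is illustrative.
EQ_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
             "HCO3": 61.02, "SO4": 48.03, "Cl": 35.45}
CATIONS = ("Ca", "Mg", "Na", "K")

def piper_percentages(mg_per_l: dict) -> dict:
    meq = {ion: c / EQ_WEIGHT[ion] for ion, c in mg_per_l.items()}
    cat_sum = sum(meq[i] for i in CATIONS)
    an_sum = sum(v for i, v in meq.items() if i not in CATIONS)
    return {ion: 100 * v / (cat_sum if ion in CATIONS else an_sum)
            for ion, v in meq.items()}

sample = {"Ca": 52.0, "Mg": 12.0, "Na": 25.0, "K": 3.0,
          "HCO3": 160.0, "SO4": 45.0, "Cl": 30.0}
print(piper_percentages(sample))
```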
Baroudi, Kusai; Ibraheem, Shukran Nasser
2015-01-01
Background: This paper aimed to evaluate the application of computer-aided design and computer-aided manufacturing (CAD/CAM) technology and the factors that affect the survival of restorations. Materials and Methods: A thorough literature search of PubMed, Medline, Embase, Science Direct, Wiley Online Library and grey literature was performed from the year 2004 up to June 2014. Only relevant research was considered. Results: The use of chair-side CAD/CAM systems is promising in all dental branches in terms of minimizing the time and effort expended by dentists, technicians and patients in restoring and maintaining patients' oral function and aesthetics, while providing high quality outcomes. Conclusion: The production and placement of restorations made with chair-side CAD/CAM (CEREC and E4D) devices is better than that of restorations made by conventional laboratory procedures. PMID:25954082
Digital Libraries Are Much More than Digitized Collections.
ERIC Educational Resources Information Center
Peters, Peter Evan
1995-01-01
The digital library encompasses the application of high-performance computers and networks to the production, distribution, management, and use of knowledge in research and education. A joint project by three federal agencies, which is investing in digital library initiatives at six universities, is discussed. A sidebar provides issues to consider…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Painter, J.; McCormick, P.; Krogh, M.
This paper presents the ACL (Advanced Computing Lab) Message Passing Library. It is a high throughput, low latency communications library, based on Thinking Machines Corp.'s CMMD, upon which message passing applications can be built. The library has been implemented on the Cray T3D, Thinking Machines CM-5, SGI workstations, and on top of PVM.
Library Blogs: What's Most Important for Success within the Enterprise?
ERIC Educational Resources Information Center
Bardyn, Tania P.
2009-01-01
Library blogs exchange information and ideas on everything from the everyday, such as library services, to the profound, such as values held by librarians (high-quality reliable resources, academic freedom, open access, and so on). According to medical librarians who maintain library blogs, a typical month includes two to four contributors writing…
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
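For readers unfamiliar with the spectral bound mentioned above, the following sketch computes it for a small reversible chain using the standard inequality t_mix(eps) <= ln(1/(eps*pi_min))/(1 - lambda_2); the three-state chain is an illustrative example, not marathon code.

```python
# A minimal sketch of the spectral bound on mixing time for a reversible,
# ergodic chain; lambda_2 is the second-largest eigenvalue modulus of the
# transition matrix. The small chain below is illustrative, not marathon code.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])          # a lazy random walk on a 3-node path

eigvals = np.linalg.eigvals(P)
slem = sorted(abs(eigvals))[-2]          # second-largest eigenvalue modulus

pi = np.array([0.25, 0.5, 0.25])         # stationary distribution of this walk
eps = 0.01
upper = np.log(1 / (eps * pi.min())) / (1 - slem)
print(f"spectral upper bound on mixing time: {upper:.1f} steps")
```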
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
T7 lytic phage-displayed peptide libraries: construction and diversity characterization.
Krumpe, Lauren R H; Mori, Toshiyuki
2014-01-01
In this chapter, we describe the construction of T7 bacteriophage (phage)-displayed peptide libraries and the diversity analyses of random amino acid sequences obtained from the libraries. We used commercially available reagents, Novagen's T7Select system, to construct the libraries. Using a combination of biotinylated extension primer and streptavidin-coupled magnetic beads, we were able to prepare library DNA without applying gel purification, resulting in extremely high ligation efficiencies. Further, we describe the use of bioinformatics tools to characterize library diversity. Amino acid frequency and positional amino acid diversity and hydropathy are estimated using the REceptor LIgand Contacts website http://relic.bio.anl.gov. Peptide net charge analysis and peptide hydropathy analysis are conducted using the Genetics Computer Group Wisconsin Package computational tools. A comprehensive collection of the estimated number of recombinants and titers of T7 phage-displayed peptide libraries constructed in our lab is included.
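A minimal sketch of the diversity analyses described above (residue frequencies and per-position Shannon entropy) follows; the peptide list is illustrative, and the RELIC/GCG tools named in the text are not reproduced here.

```python
# A minimal sketch of peptide library diversity analysis: overall amino acid
# frequencies and per-position Shannon entropy. The peptides are illustrative.
import math
from collections import Counter

peptides = ["ACDEFGH", "ACDWFGH", "MCDEFKH", "ACPEFGH"]  # equal-length 7-mers

# Overall amino acid frequency across the library sample
counts = Counter("".join(peptides))
total = sum(counts.values())
freqs = {aa: n / total for aa, n in sorted(counts.items())}

# Positional diversity: Shannon entropy (bits) at each randomized position
def column_entropy(column) -> float:
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())

entropy = [column_entropy(col) for col in zip(*peptides)]
print(freqs)
print([round(h, 2) for h in entropy])   # 0 bits = invariant position
```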
Study on the application of mobile internet cloud computing platform
NASA Astrophysics Data System (ADS)
Gong, Songchun; Fu, Songyin; Chen, Zheng
2012-04-01
The innovative development of computer technology has promoted the application of the cloud computing platform, which is essentially a model for substituting and exchanging resource services and meets users' needs for different resources after adjustments in multiple respects. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search for, acquire, and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in its operation. The popularization and promotion of computer technology have driven the creation of digital library models, whose core idea is to strengthen the optimal management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing, moreover, can distribute computations across a large number of distributed computers and hence implement a connected service of multiple computers. Digital libraries, as a typical representative of cloud computing applications, can thus be used to analyze the key technologies of cloud computing.
Computers, Networks, and Desegregation at San Jose High Academy.
ERIC Educational Resources Information Center
Solomon, Gwen
1987-01-01
Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…
Economical analysis of saturation mutagenesis experiments
Acevedo-Rocha, Carlos G.; Reetz, Manfred T.; Nov, Yuval
2015-01-01
Saturation mutagenesis is a powerful technique for engineering proteins, metabolic pathways and genomes. In spite of its numerous applications, creating high-quality saturation mutagenesis libraries remains a challenge, as various experimental parameters influence the resulting diversity in a complex manner. We explore various aspects of saturation mutagenesis library preparation from an economic perspective: We introduce a cheaper and faster control for assessing library quality based on liquid media; analyze the role of primer purity and supplier in libraries with and without redundancy; compare library quality, yield, randomization efficiency, and annealing bias using traditional and emergent randomization schemes based on mixtures of mutagenic primers; and establish a methodology for choosing the most cost-effective randomization scheme given the screening costs and other experimental parameters. We show that by carefully considering these parameters, laboratory expenses can be significantly reduced. PMID:26190439
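For intuition about why library size drives cost, here is a minimal sketch of the standard sampling statistics for saturation mutagenesis: the expected coverage of V equally likely variants by N clones, and the N required for a target completeness. It follows the usual formulas, not the paper's specific models.

```python
# A minimal sketch of saturation mutagenesis sampling statistics, assuming V
# equally likely variants (e.g., 32 NNK codons at one randomized position).
import math

def expected_coverage(N: int, V: int) -> float:
    """Expected fraction of the V variants observed among N random clones."""
    return 1.0 - (1.0 - 1.0 / V) ** N

def clones_for_coverage(V: int, completeness: float = 0.95) -> int:
    """Clones needed so each variant is seen with probability `completeness`."""
    return math.ceil(math.log(1.0 - completeness) / math.log(1.0 - 1.0 / V))

V = 32                                 # NNK randomization at a single codon
print(expected_coverage(94, V))        # about 0.95 with roughly 3*V clones
print(clones_for_coverage(V, 0.95))    # about 95 clones
```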
Rowland, Mark S.; Howard, Douglas E.; Wong, James L.; Jessup, James L.; Bianchini, Greg M.; Miller, Wayne O.
2007-10-23
A real-time method and computer system for identifying radioactive materials which collects gamma count rates from a HPGe gamma-radiation detector to produce a high-resolution gamma-ray energy spectrum. A library of nuclear material definitions ("library definitions") is provided, with each uniquely associated with a nuclide or isotope material and each comprising at least one logic condition associated with a spectral parameter of a gamma-ray energy spectrum. The method determines whether the spectral parameters of said high-resolution gamma-ray energy spectrum satisfy all the logic conditions of any one of the library definitions, and subsequently uniquely identifies the material type as that nuclide or isotope material associated with the satisfied library definition. The method is iteratively repeated to update the spectrum and identification in real time.
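A minimal sketch of the matching logic described in the claim follows; the spectral parameter names and thresholds are illustrative stand-ins, not values from the patent.

```python
# A minimal sketch of library-definition matching: each definition is a set of
# logic conditions over spectral parameters, and a material is identified only
# when every condition of a definition holds. Names and thresholds are
# illustrative stand-ins, not the patent's values.
LIBRARY = {
    "HEU": [lambda s: s["peak_186keV"] > 100.0,
            lambda s: s["ratio_186_1001"] > 50.0],
    "WGPu": [lambda s: s["peak_414keV"] > 200.0,
             lambda s: s["ratio_375_414"] < 0.4],
}

def identify(spectrum_params: dict) -> list:
    """Return the materials whose library definitions are fully satisfied."""
    return [name for name, conditions in LIBRARY.items()
            if all(cond(spectrum_params) for cond in conditions)]

measured = {"peak_186keV": 350.0, "ratio_186_1001": 80.0,
            "peak_414keV": 5.0, "ratio_375_414": 1.2}
print(identify(measured))   # -> ['HEU'] for this illustrative spectrum
```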
Get It? Got It. Good!: Utilizing Get It Now Article Delivery Service at a Health Sciences Library
ERIC Educational Resources Information Center
Jarvis, Christy; Gregory, Joan M.
2016-01-01
With journal price increases continuing to outpace inflation and library collection funds remaining stagnant or shrinking, libraries are seeking innovative ways to control spending while continuing to provide patrons with high-quality content. The Spencer S. Eccles Health Sciences Library reports on the evaluation, implementation, and use of…
Microphotocomposition--A New Publishing Resource
ERIC Educational Resources Information Center
Butler, Brett; Van Pelt, John
1972-01-01
This article describes strategies, variables, and techniques employed in developing a production facility used to date for publication of some 300,000 frames of microcomposed library catalog cards, and which is now available for other graphic arts quality computer output microfilm (COM) applications.
Computers and Media Centers--A Winning Combination.
ERIC Educational Resources Information Center
Graf, Nancy
1984-01-01
Profile of the computer program offered by the library/media center at Chief Joseph Junior High School in Richland, Washington, highlights program background, operator's licensing procedure, the trainer license, assistance from high school students, need for more computers, handling of software, and helpful hints. (EJS)
[cDNA library construction from panicle meristem of finger millet].
Radchuk, V; Pirko, Ia V; Isaenkov, S V; Emets, A I; Blium, Ia B
2014-01-01
The protocol for producing full-size cDNA using the SuperScript Full-Length cDNA Library Construction Kit II (Invitrogen) was tested, and a high quality cDNA library from meristematic tissue of finger millet panicle (Eleusine coracana (L.) Gaertn) was created. The titer of the obtained cDNA library was 3.01 x 10(5) CFU/ml on average. The average length of the cDNA inserts was about 1070 base pairs, and the efficiency of cDNA fragment insertion was 99.5%. Selective sequencing of cDNA clones from the created library was performed, and the sequences were identified by BLAST search. The results of the cDNA library analysis and selective sequencing confirm the functionality and full-length character of the inserted cDNA clones. The obtained cDNA library from meristematic tissue of finger millet panicle represents a valuable source for the isolation and identification of key genes regulating metabolism and meristematic development, and for mining new molecular markers for high quality genetic investigations and molecular breeding.
Computers in Academic Architecture Libraries.
ERIC Educational Resources Information Center
Willis, Alfred; And Others
1992-01-01
Computers are widely used in architectural research and teaching in U.S. schools of architecture. A survey of libraries serving these schools sought information on the emphasis placed on computers by the architectural curriculum, accessibility of computers to library staff, and accessibility of computers to library patrons. Survey results and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
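To make the empirical half of such a model concrete, here is a hedged sketch: time a constituent operation (a tiled matrix multiplication standing in for a tensor contraction kernel) at several candidate tile sizes and keep the measured costs for the model to consume. This illustrates the general approach, not the Tensor Contraction Engine's code; the matrix size and tile candidates are arbitrary.

```python
# Illustrative sketch: empirically measure the cost of a constituent
# operation (tiled matmul) for several tile sizes, then pick the cheapest.
import time
import numpy as np

def tiled_matmul(A, B, tile):
    """Blocked matrix multiplication with square tiles of the given size."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

n = 256
A, B = np.random.rand(n, n), np.random.rand(n, n)
costs = {}
for tile in (16, 32, 64, 128):
    t0 = time.perf_counter()
    tiled_matmul(A, B, tile)
    costs[tile] = time.perf_counter() - t0   # empirically measured cost component
best = min(costs, key=costs.get)
print(f"best tile size: {best}, measured costs: {costs}")
```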
The Use of EST Expression Matrixes for the Quality Control of Gene Expression Data
Milnthorpe, Andrew T.; Soloviev, Mikhail
2012-01-01
EST expression profiling provides an attractive tool for studying differential gene expression, but cDNA libraries' origins and EST data quality are not always known or reported. Libraries may originate from pooled or mixed tissues; EST clustering, EST counts, library annotations and analysis algorithms may contain errors. Traditional data analysis methods, including research into tissue-specific gene expression, assume EST counts to be correct and libraries to be correctly annotated, which is not always the case. Therefore, a method capable of assessing the quality of expression data based on that data alone would be invaluable for assessing the quality of EST data and determining their suitability for mRNA expression analysis. Here we report an approach to the selection of a small generic subset of 244 UniGene clusters suitable for identification of the tissue of origin for EST libraries and quality control of the expression data using EST expression information alone. We created a small expression matrix of UniGene IDs using two rounds of selection followed by two rounds of optimisation. Our selection procedures differ from traditional approaches to finding "tissue-specific" genes, and our matrix yields consistently high positive correlation values for libraries with confirmed tissues of origin and can be applied for tissue typing and quality control of libraries as small as just a few hundred total ESTs. Furthermore, we can pick up tissue correlations between related tissues, e.g. brain and peripheral nervous tissue, or heart and muscle tissues, and identify tissue origins for a few libraries of uncharacterised tissue identity. It was possible to confirm tissue identity for some libraries which have been derived from cancer tissues or have been normalised. Tissue matching is affected strongly by cancer progression or library normalisation, and our approach may potentially be applied for elucidating the stage of normalisation in normalised libraries or for cancer staging. PMID:22412959
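The core of the tissue-typing step can be illustrated with a toy version: correlate a query library's EST counts over a small marker matrix against reference tissue profiles and report the best-correlated tissue. The matrix values and tissue names below are invented; the published method uses 244 selected UniGene clusters.

```python
# Toy version of correlation-based tissue typing from EST counts.
# Reference values and tissue names are invented for illustration.
import numpy as np

# rows: marker UniGene clusters, columns: reference tissues
reference = np.array([[120,   2,  5],
                      [  3, 250, 10],
                      [  8,   4, 90],
                      [ 40,  35, 30]], dtype=float)
tissues = ["brain", "liver", "muscle"]

def tissue_of_origin(est_counts):
    """Return the reference tissue with the highest Pearson correlation."""
    corrs = [np.corrcoef(est_counts, reference[:, t])[0, 1]
             for t in range(reference.shape[1])]
    return tissues[int(np.argmax(corrs))], corrs

library_counts = np.array([110, 5, 6, 38], dtype=float)  # a query EST library
print(tissue_of_origin(library_counts))  # -> ('brain', [...])
```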
ERIC Educational Resources Information Center
Liberman, Eva; And Others
Many library operations involving large data banks lend themselves readily to computer operation. In setting up library computer programs, in changing or expanding programs, cost in programming and time delays could be substantially reduced if the programmers had access to library computer programs being used by other libraries, providing similar…
Cao, Shuanghe; Siriwardana, Chamindika L; Kumimoto, Roderick W; Holt, Ben F
2011-05-19
Monocots, especially the temperate grasses, represent some of the most agriculturally important crops for both current food needs and future biofuel development. Because most of the agriculturally important grass species are difficult to study (e.g., they often have large, repetitive genomes and can be difficult to grow in laboratory settings), developing genetically tractable model systems is essential. Brachypodium distachyon (hereafter Brachypodium) is an emerging model system for the temperate grasses. To fully realize the potential of this model system, publicly accessible discovery tools are essential. High quality cDNA libraries that can be readily adapted for multiple downstream purposes are a needed resource. Additionally, yeast two-hybrid (Y2H) libraries are an important discovery tool for protein-protein interactions and are not currently available for Brachypodium. We describe the creation of two high quality, publicly available Gateway™ cDNA entry libraries and their derived Y2H libraries for Brachypodium. The first entry library represents cloned cDNA populations from both short day (SD, 8/16-h light/dark) and long day (LD, 20/4-h light/dark) grown plants, while the second library was generated from hormone treated tissues. Both libraries have extensive genome coverage (~5 × 10(7) primary clones each) and average clone lengths of ~1.5 kb. These entry libraries were then used to create two recombination-derived Y2H libraries. Initial proof-of-concept screens demonstrated that a protein with known interaction partners could readily re-isolate those partners, as well as novel interactors. Accessible community resources are a hallmark of successful biological model systems. Brachypodium has the potential to be a broadly useful model system for the grasses, but still requires many of these resources. The Gateway™ compatible entry libraries created here will facilitate studies for multiple user-defined purposes and the derived Y2H libraries can be immediately applied to large scale screening and discovery of novel protein-protein interactions. All libraries are freely available for distribution to the research community.
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Liao, Wei-keng
Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data models developed in the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and an application programming interface reference are also available to help new users learn to use the library.
MARC and the Library Service Center: Automation at Bargain Rates.
ERIC Educational Resources Information Center
Pearson, Karl M.
Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…
Introducing the National Library for Health Skin Conditions Specialist Library.
Grindlay, Douglas; Boulos, Maged N Kamel; Williams, Hywel C
2005-04-26
This paper introduces the new National Library for Health Skin Conditions Specialist Library http://www.library.nhs.uk/skin. The aims, scope and audience of the new NLH Skin Conditions Specialist Library, and the composition and functions of its core Project Team, Editorial Team and Stakeholders Group are described. The Library's collection building strategy, resource and information types, editorial policies, quality checklist, taxonomy for content indexing, organisation and navigation, and user interface are all presented in detail. The paper also explores the expected impact and utility of the new Library, as well as some possible future directions for further development. The Skin Conditions Specialist Library is not just another new Web site that dermatologists might want to add to their Internet favourites and then forget about. It is intended to be a practical, "one-stop shop" dermatology information service for everyday practical use, offering high quality, up-to-date resources, and adopting robust evidence-based and knowledge management approaches.
Computer Output Microfilm and Library Catalogs.
ERIC Educational Resources Information Center
Meyer, Richard W.
Early computers dealt with mathematical and scientific problems requiring very little input and not much output, so high-speed printing devices were not required. Today, with an increased variety of uses, high-speed printing is necessary, and Computer Output Microfilm (COM) devices have been created to meet this need. This indirect process can…
libgapmis: extending short-read alignments
2013-01-01
Background A wide variety of short-read alignment programmes have been published recently to tackle the problem of mapping millions of short reads to a reference genome, focusing on different aspects of the procedure such as time and memory efficiency, sensitivity, and accuracy. These tools allow for a small number of mismatches in the alignment; however, their ability to allow for gaps varies greatly, with many performing poorly or not allowing them at all. The seed-and-extend strategy is applied in most short-read alignment programmes. After aligning a substring of the reference sequence against the high-quality prefix of a short read--the seed--an important problem is to find the best possible alignment between a substring of the reference sequence succeeding the seed and the remaining low-quality suffix of the read--the extend step. The fact that the reads are rather short and that the gap occurrence frequency observed in various studies is rather low suggests that aligning (parts of) those reads with a single gap is in fact desirable. Results In this article, we present libgapmis, a library for extending pairwise short-read alignments. Apart from the standard CPU version, it includes ultrafast SSE- and GPU-based implementations. libgapmis is based on an algorithm computing a modified version of the traditional dynamic-programming matrix for sequence alignment. Extensive experimental results demonstrate that the functions of the CPU version provided in this library accelerate the computations by a factor of 20 compared to other programmes. The analogous SSE- and GPU-based implementations accelerate the computations by a factor of 6 and 11, respectively, compared to the CPU version. The library also provides the user the flexibility to split the read into fragments, based on the observed gap occurrence frequency and the length of the read, thereby allowing for a variable, but bounded, number of gaps in the alignment. Conclusions We present libgapmis, a library for extending pairwise short-read alignments. We show that libgapmis is better suited and more efficient than existing algorithms for this task. The importance of our contribution is underlined by the fact that the provided functions may be seamlessly integrated into any short-read alignment pipeline. The open-source code of libgapmis is available at http://www.exelixis-lab.org/gapmis. PMID:24564250
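The extension problem the library addresses can be made concrete with a deliberately naive sketch: score the alignment of a read suffix against the reference allowing at most one bounded gap, on either sequence. The scoring constants are illustrative; libgapmis itself computes this via a modified dynamic-programming matrix and SSE/GPU kernels, and is far faster.

```python
# Brute-force sketch of single-gap extension scoring (illustrative only).
# Allows at most one gap of length <= max_gap in either the read or the
# reference; score = mismatches + affine gap penalty (lower is better).

def single_gap_score(read, ref, max_gap=3, gap_open=2.0, gap_extend=0.5):
    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))

    best = mismatches(read, ref)               # gap-free alignment
    for pos in range(1, len(read)):
        for g in range(1, max_gap + 1):
            penalty = gap_open + (g - 1) * gap_extend
            # gap in the read: skip g reference bases at position pos
            cand = (mismatches(read[:pos], ref[:pos])
                    + mismatches(read[pos:], ref[pos + g:]) + penalty)
            best = min(best, cand)
            # gap in the reference: skip g read bases at position pos
            cand = (mismatches(read[:pos], ref[:pos])
                    + mismatches(read[pos + g:], ref[pos:]) + penalty)
            best = min(best, cand)
    return best

# One inserted base in the reference: score 2.0 (gap) beats 4 mismatches.
print(single_gap_score("ACGTACGT", "ACGTTACGT"))
```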
A Strassen-Newton algorithm for high-speed parallelizable matrix inversion
NASA Technical Reports Server (NTRS)
Bailey, David H.; Ferguson, Helaman R. P.
1988-01-01
Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
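A hedged sketch of the matrix Newton (Newton-Schulz) iteration at the heart of such schemes follows. The update X_{k+1} = X_k(2I - A X_k) converges quadratically to the inverse of A whenever the initial residual satisfies ||I - A X_0|| < 1, and the classical starting guess X_0 = A^T / (||A||_1 ||A||_inf) guarantees that bound for any nonsingular A. Each step costs two matrix multiplications, which is exactly what makes the scheme so parallelizable; the test matrix below is an invented, well-conditioned example.

```python
# Newton-Schulz iteration for the matrix inverse: X <- X(2I - AX).
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    n = A.shape[0]
    I = np.eye(n)
    # Classical starting guess guaranteeing ||I - A X0|| < 1.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X                  # residual
        if np.linalg.norm(R) < tol:    # quadratic convergence: few iterations
            break
        X = X @ (2 * I - A @ X)        # Newton update (two matmuls)
    return X

A = np.random.rand(50, 50) + 50 * np.eye(50)   # well-conditioned test matrix
X = newton_inverse(A)
print(np.allclose(A @ X, np.eye(50), atol=1e-8))  # -> True
```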
LeProust, Emily M.; Peck, Bill J.; Spirin, Konstantin; McCuen, Heather Brummel; Moore, Bridget; Namsaraev, Eugeni; Caruthers, Marvin H.
2010-01-01
We have achieved the ability to synthesize thousands of unique, long oligonucleotides (150mers) in fmol amounts using parallel synthesis of DNA on microarrays. The sequence accuracy of the oligonucleotides in such large-scale syntheses has been limited by the yields and side reactions of the DNA synthesis process used. While there has been significant demand for libraries of long oligos (150mer and more), the yields in conventional DNA synthesis and the associated side reactions have previously limited the availability of oligonucleotide pools to lengths <100 nt. Using novel array based depurination assays, we show that the depurination side reaction is the limiting factor for the synthesis of libraries of long oligonucleotides on Agilent Technologies’ SurePrint® DNA microarray platform. We also demonstrate how depurination can be controlled and reduced by a novel detritylation process to enable the synthesis of high quality, long (150mer) oligonucleotide libraries and we report the characterization of synthesis efficiency for such libraries. Oligonucleotide libraries prepared with this method have changed the economics and availability of several existing applications (e.g. targeted resequencing, preparation of shRNA libraries, site-directed mutagenesis), and have the potential to enable even more novel applications (e.g. high-complexity synthetic biology). PMID:20308161
ERIC Educational Resources Information Center
Library Journal, 1985
1985-01-01
This special supplement to "Library Journal" and "School Library Journal" includes articles on technological dependency, promise of computers for reluctant readers, copyright and database downloading, access to neighborhood of Mister Rogers, library acquisitions, circulating personal computers, "microcomputeritis,"…
ERIC Educational Resources Information Center
Weber, Jonathan
2006-01-01
Creating a digital library might seem like a task best left to a large research collection with a vast staff and generous budget. However, tools for successfully creating digital libraries are getting easier to use all the time. The explosion of people creating content for the web has led to the availability of many high-quality applications and…
High-throughput continuous hydrothermal synthesis of an entire nanoceramic phase diagram.
Weng, Xiaole; Cockcroft, Jeremy K; Hyett, Geoffrey; Vickers, Martin; Boldrin, Paul; Tang, Chiu C; Thompson, Stephen P; Parker, Julia E; Knowles, Jonathan C; Rehman, Ihtesham; Parkin, Ivan; Evans, Julian R G; Darr, Jawwad A
2009-01-01
A novel High-Throughput Continuous Hydrothermal (HiTCH) flow synthesis reactor was used to make directly and rapidly a 66-sample nanoparticle library (an entire phase diagram) of nanocrystalline Ce(x)Zr(y)Y(z)O(2-delta) in less than 12 h. High-resolution powder X-ray diffraction (PXRD) data of Rietveld quality were obtained for the entire heat-treated library (1000 degrees C/1 h) in less than a day using the new robotic beamline I11 at Diamond Light Source (DLS). Consequently, the authors rapidly mapped out the phase and sintering behavior of the entire library. Out of the 66 heat-treated samples, the PXRD data suggest that 43 possess the fluorite structure, of which 30 (out of 36) are ternary compositions. The speed, quantity and quality of data obtained by our new approach offer an exciting new development which will allow structure-property relationships to be accessed for nanoceramics in much shorter time periods.
Second-generation DNA-templated macrocycle libraries for the discovery of bioactive small molecules.
Usanov, Dmitry L; Chan, Alix I; Maianti, Juan Pablo; Liu, David R
2018-07-01
DNA-encoded libraries have emerged as a widely used resource for the discovery of bioactive small molecules, and offer substantial advantages compared with conventional small-molecule libraries. Here, we have developed and streamlined multiple fundamental aspects of DNA-encoded and DNA-templated library synthesis methodology, including computational identification and experimental validation of a 20 × 20 × 20 × 80 set of orthogonal codons, chemical and computational tools for enhancing the structural diversity and drug-likeness of library members, a highly efficient polymerase-mediated template library assembly strategy, and library isolation and purification methods. We have integrated these improved methods to produce a second-generation DNA-templated library of 256,000 small-molecule macrocycles with improved drug-like physical properties. In vitro selection of this library for insulin-degrading enzyme affinity resulted in novel insulin-degrading enzyme inhibitors, including one of unusual potency and novel macrocycle stereochemistry (IC50 = 40 nM). Collectively, these developments enable DNA-templated small-molecule libraries to serve as more powerful, accessible, streamlined and cost-effective tools for bioactive small-molecule discovery.
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naive approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
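A naive version of the constraint filtering is easy to write down, and it makes clear why a faster framework matters at scale: checking each candidate against all accepted barcodes is quadratic in library size. The thresholds and blacklist below are arbitrary examples, not the published defaults.

```python
# Naive DNA barcode generator enforcing the constraint types named above:
# GC content, homopolymer run length, minimum pairwise Hamming distance,
# and blacklisted subsequences. Thresholds are illustrative.
import random

def gc_ok(seq, lo=0.4, hi=0.6):
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return lo <= gc <= hi

def homopolymer_ok(seq, max_run=3):
    run, prev = 1, seq[0]
    for base in seq[1:]:
        run = run + 1 if base == prev else 1
        if run > max_run:
            return False
        prev = base
    return True

def min_hamming_ok(seq, accepted, min_dist=3):
    return all(sum(a != b for a, b in zip(seq, other)) >= min_dist
               for other in accepted)          # quadratic bottleneck at scale

def generate(n, length=12, blacklist=("GGATCC",)):
    accepted = []
    while len(accepted) < n:
        seq = "".join(random.choice("ACGT") for _ in range(length))
        if (gc_ok(seq) and homopolymer_ok(seq)
                and not any(b in seq for b in blacklist)
                and min_hamming_ok(seq, accepted)):
            accepted.append(seq)
    return accepted

print(generate(5))
```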
Implementation of a modular software system for multiphysical processes in porous media
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Watanabe, Norihiro; Bilke, Lars; Fischer, Thomas; Lehmann, Christoph; Rink, Karsten; Walther, Marc; Wang, Wenqing; Kolditz, Olaf
2016-04-01
Subsurface georeservoirs are a candidate technology for the large-scale energy storage required as part of the transition to renewable energy sources. The increased use of the subsurface results in competing interests and possible impacts on protected entities. To optimize and plan the use of the subsurface in large-scale scenario analyses, powerful numerical frameworks are required that aid process understanding and can capture the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes with high computational efficiency. Due to the multitude of different couplings between basic T, H, M, or C processes and the necessity of implementing new numerical schemes, the development focus has moved to software modularity. The decreased coupling between the components results in two major advantages: easier addition of specialized processes and improvement of the code's testability, and therefore its quality. The idea of modularization is implemented on several levels, in addition to library-based separation of the previous code version: using generalized algorithms available in the Standard Template Library and the Boost library, relying on efficient implementations of linear algebra solvers, using concepts when designing new types, and localizing frequently accessed data structures. This procedure shows certain benefits for a flexible high-performance framework applied to the analysis of multipurpose georeservoirs.
Unaffiliated Users' Access to Academic Libraries: A Survey.
ERIC Educational Resources Information Center
Courtney, Nancy
2003-01-01
Most of 814 academic libraries surveyed allow onsite access to unaffiliated users, and many give borrowing privileges to certain categories of users. Use of library computers to access library resources and other computer applications is commonly allowed although authentication on library computers is increasing. Five tables show statistics.…
Library and Classroom Use of Copyrighted Videotapes and Computer Software.
ERIC Educational Resources Information Center
Reed, Mary Hutchings; Stanek, Debra
1986-01-01
This pullout guide addresses issues regarding library use of copyrighted videotapes and computer software. Guidelines for videotapes cover in-classroom use, in-library use in public library, loan, and duplication. Guidelines for computer software cover purchase conditions, avoiding license restrictions, loaning, archival copies, and in-library and…
Introduction to Minicomputers in Federal Libraries.
ERIC Educational Resources Information Center
Young, Micki Jo; And Others
This book for library administrators and Federal library staff covers the application of minicomputers in Federal libraries and offers a review of minicomputer technology. A brief overview of automation explains computer technology, hardware, and software. The role of computers in libraries is examined in terms of the history of computers and…
Avogadro: an advanced semantic chemical editor, visualization, and analysis platform
2012-01-01
Background The Avogadro project has developed an advanced molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It offers flexible, high quality rendering, and a powerful plugin architecture. Typical uses include building molecular structures, formatting input files, and analyzing output of a wide variety of computational chemistry packages. By using the CML file format as its native document type, Avogadro seeks to enhance the semantic accessibility of chemical data types. Results The work presented here details the Avogadro library, which is a framework providing a code library and application programming interface (API) with three-dimensional visualization capabilities; and has direct applications to research and education in the fields of chemistry, physics, materials science, and biology. The Avogadro application provides a rich graphical interface using dynamically loaded plugins through the library itself. The application and library can each be extended by implementing a plugin module in C++ or Python to explore different visualization techniques, build/manipulate molecular structures, and interact with other programs. We describe some example extensions, one which uses a genetic algorithm to find stable crystal structures, and one which interfaces with the PackMol program to create packed, solvated structures for molecular dynamics simulations. The 1.0 release series of Avogadro is the main focus of the results discussed here. Conclusions Avogadro offers a semantic chemical builder and platform for visualization and analysis. For users, it offers an easy-to-use builder, integrated support for downloading from common databases such as PubChem and the Protein Data Bank, extracting chemical data from a wide variety of formats, including computational chemistry output, and native, semantic support for the CML file format. For developers, it can be easily extended via a powerful plugin mechanism to support new features in organic chemistry, inorganic complexes, drug design, materials, biomolecules, and simulations. Avogadro is freely available under an open-source license from http://avogadro.openmolecules.net. PMID:22889332
Verhoeven, Joost Theo Petra; Canuti, Marta; Munro, Hannah J; Dufour, Suzanne C; Lang, Andrew S
2018-04-19
High-throughput sequencing (HTS) technologies are becoming increasingly important within microbiology research, but aspects of library preparation, such as high cost per sample or strict input requirements, make HTS difficult to implement in some niche applications and for research groups on a budget. To address these needs, we developed ViDiT, a customizable, PCR-based, extremely low-cost (<5 US dollars per sample) and versatile library preparation method, and CACTUS, an analysis pipeline designed to rely on cloud computing power to generate high-quality data from ViDiT-based experiments without the need for expensive servers. We demonstrate here the versatility and utility of these methods within three fields of microbiology: virus discovery, amplicon-based viral genome sequencing and microbiome profiling. ViDiT-CACTUS allowed the identification of viral fragments from 25 different viral families from 36 oropharyngeal-cloacal swabs collected from wild birds, the sequencing of three almost complete genomes of avian influenza A viruses (>90% coverage), and the characterization and functional profiling of the complete microbial diversity (bacteria, archaea, viruses) within a deep-sea carnivorous sponge. ViDiT-CACTUS demonstrated its validity in a wide range of microbiology applications, and its simplicity and modularity make it easily implementable in any molecular biology laboratory, towards various research goals.
The Gender and Science Digital Library: Affecting Student Achievement in Science.
ERIC Educational Resources Information Center
Nair, Sarita
2003-01-01
Describes the Gender and Science Digital Library (GSDL), an online collection of high-quality, interactive science resources that are gender-fair, inclusive, and engaging to students. Considers use by teachers and school library media specialists to encourage girls to enter careers in science, technology, engineering, and math (STEM). (LRW)
Increasing Chemical Space Coverage by Combining Empirical and Computational Fragment Screens
2015-01-01
Most libraries for fragment-based drug discovery are restricted to 1,000–10,000 compounds, but over 500,000 fragments are commercially available and potentially accessible by virtual screening. Whether this larger set would increase chemotype coverage, and whether a computational screen can pragmatically prioritize them, is debated. To investigate this question, a 1281-fragment library was screened by nuclear magnetic resonance (NMR) against AmpC β-lactamase, and hits were confirmed by surface plasmon resonance (SPR). Nine hits with novel chemotypes were confirmed biochemically with Ki values from 0.2 to low mM. We also computationally docked 290,000 purchasable fragments with chemotypes unrepresented in the empirical library, finding 10 that had Ki values from 0.03 to low mM. Though less novel than those discovered by NMR, the docking-derived fragments filled chemotype holes from the empirical library. Crystal structures of nine of the fragments in complex with AmpC β-lactamase revealed new binding sites and explained the relatively high affinity of the docking-derived fragments. The existence of chemotype holes is likely a general feature of fragment libraries, as calculation suggests that to represent the fragment substructures of even known biogenic molecules would demand a library of minimally over 32,000 fragments. Combining computational and empirical fragment screens enables the discovery of unexpected chemotypes, here by the NMR screen, while capturing chemotypes missing from the empirical library and tailored to the target, with little extra cost in resources. PMID:24807704
High-performance computing — an overview
NASA Astrophysics Data System (ADS)
Marksteiner, Peter
1996-08-01
An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
Electronic Technologies and Preservation.
ERIC Educational Resources Information Center
Waters, Donald J.
Digital imaging technology, which is used to take a computer picture of documents at the page level, has significant potential as a tool for preserving deteriorating library materials. Multiple reproductions can be made without loss of quality; the end product is compact; reproductions can be made in paper, microfilm, or CD-ROM; and access over…
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations, as sketched below. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
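For reference, the Amdahl's law estimate used in such speed-up evaluations: with parallelizable fraction p and n processors, S(n) = 1 / ((1 - p) + p/n), so the speed-up saturates at 1/(1 - p) no matter how many processors are added. The parallel fraction below is an invented example.

```python
# Amdahl's law: speed-up with parallel fraction p on n processors.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# e.g. if 90% of the fine-scale computation parallelizes:
for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(0.9, n), 2))
# the speed-up saturates at 1 / (1 - p) = 10 as n grows
```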
Do quality improvement systems improve health library services? A systematic review.
Gray, Hannah; Sutton, Gary; Treadway, Victoria
2012-09-01
A turbulent financial and political climate requires health libraries to be more accountable than ever. Quality improvement systems are widely considered a 'good thing to do', but do they produce useful outcomes that can demonstrate value? To undertake a systematic review to identify which aspects of health libraries are being measured for quality, what tools are being used and what outcomes are reported following utilisation of quality improvement systems. Many health libraries utilise quality improvement systems without translating the data into service improvements. Included studies demonstrate that quality improvement systems produce valuable outcomes including a positive impact on strategic planning, promotion, new and improved services and staff development. No impact of quality improvement systems on library users or patients is reported in the literature. The literature in this area is sparse and requires updating. We recommend further primary research is conducted in health libraries focusing upon the outcomes of utilising quality improvement systems. An exploration of quality improvement systems in other library sectors may also provide valuable insight for health libraries.
Zhao, Wei; Li, Xin; Liu, Wen-Hui; Zhao, Jian; Jin, Yi-Ming; Sui, Ting-Ting
2014-09-01
Human epithelial colorectal adenocarcinoma (Caco-2) cells are widely used as an in vitro model of the human small intestinal mucosa. Caco-2 cells are host cells of the human astrovirus (HAstV) and other enteroviruses. High-quality cDNA libraries are pertinent resources and critical tools for protein-protein interaction research, but are currently unavailable for Caco-2 cells. To construct a three-open-reading-frame, full-length-expression cDNA library from the Caco-2 cell line for application to HAstV protein-protein interaction screening, total RNA was extracted from Caco-2 cells. The switching mechanism at the 5' end of the RNA transcript technique was used for cDNA synthesis. Double-stranded cDNA was digested by Sfi I and ligated into the reconstructed pGADT7-Sfi I three-frame vector. The ligation mixture was transformed into Escherichia coli HST08 premium electro cells by electroporation to construct the primary cDNA library. The library capacity was 1.0×10(6) clones. Gel electrophoresis results indicated that the fragments ranged from 0.5 kb to 4.2 kb. Randomly picked clones showed that the recombination rate was 100%. The three-frame primary cDNA library plasmid mixture (5×10(5) cfu) was also transformed into E. coli HST08 premium electro cells, and all clones were harvested to amplify the cDNA library. To test the sufficiency of the cDNA library, HAstV capsid protein was used as bait and screened against the Caco-2 cDNA library with a yeast two-hybrid (Y2H) system. A total of 20 proteins were found to interact with the capsid protein. These results show that a high-quality three-frame cDNA library from Caco-2 cells was successfully constructed. This library was efficient for application to the Y2H system and can be used for future research.
Gonzalez, Nestor R; Dusick, Joshua R; Martin, Neil A
2012-07-01
Changes in neurosurgical practice and graduate medical education impose new challenges for training programs. We present our experience providing neurosurgical residents with digital and mobile educational resources in support of the departmental academic activities. A weekly mandatory conference program for all clinical residents, based on the Accreditation Council for Graduate Medical Education competencies and held in protected time, was introduced. Topics were taught through didactic sessions and case discussions. Faculty and residents prepare high-quality presentations, equivalent to peer-reviewed leading papers or case reports. Presentations are videorecorded, stored in a digital library, and broadcast through our Website and iTunes U. Residents received mobile tablet devices with remote access to the digital library, applications for document/video management, and interactive teaching tools. Residents responded to an anonymous survey, and performance on the Self-Assessment in Neurological Surgery examination before and after the intervention was compared. Ninety-two percent reported increased time spent studying outside the hospital and attributed the habit change to the introduction of mobile devices; 67% used the electronic tablets as the primary tool to access the digital library, followed by 17% hospital computers, 8% home computers, and 8% personal laptops. Forty-two percent have submitted operative videos, cases, and documents to the library. One year after introducing the program, results of the Congress of Neurological Surgeons Self-Assessment in Neurological Surgery examination showed a statistically significant improvement in global scoring and improvement in 16 of the 18 individual areas evaluated, 6 of which reached statistical significance. A structured, competency-based neurosurgical education program supported by digital and mobile resources improved reading habits among residents and performance on the Congress of Neurological Surgeons Self-Assessment in Neurological Surgery examination.
1981-11-30
COMPUTER PROGRAM USER'S MANUAL FOR FIREFINDER DIGITAL TOPOGRAPHIC DATA VERIFICATION LIBRARY DUBBING SYSTEM, 30 November 1981, by Marie Ceres, Leslie R... This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), covering the library process and dubbing operations.
Appropriate Use Policies for Computers in College/University Libraries. CLIP Note.
ERIC Educational Resources Information Center
Tuten, Jane, Comp.; Junker, Karen, Comp.
The purpose of this College Library Information Packet (CLIP) Note is to help libraries identify desirable elements found in computer use policies and to provide guidelines for college and small university libraries that want to develop policies or have been directed to implement policies for computer usage in their libraries. In January 2001, a…
Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H
2013-08-01
Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
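Both outputs of such an assay can be sketched. The Poisson estimate of target concentration from the positive-droplet fraction is standard ddPCR arithmetic; the sizing step below assumes a simple linear fluorescence-versus-size calibration as an illustrative stand-in for the correlation the authors exploit, and the droplet volume is a typical value, not one taken from this paper.

```python
# Sketch of joint library quantification and sizing from ddPCR droplet data.
# The linear size calibration and the droplet volume are assumptions.
import math

def copies_per_ul(positive: int, total: int, droplet_volume_nl: float = 0.85):
    """Poisson estimate: lambda = -ln(1 - positive/total) copies per droplet."""
    lam = -math.log(1.0 - positive / total)
    return lam / (droplet_volume_nl * 1e-3)   # convert nl to microliters

def amplicon_size_bp(mean_fluorescence: float, slope: float, intercept: float):
    """Invert an assumed linear fluorescence ~ size calibration."""
    return (mean_fluorescence - intercept) / slope

print(round(copies_per_ul(positive=4800, total=20000)))   # ~323 copies/ul
```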
A standard library for modeling satellite orbits on a microcomputer
NASA Astrophysics Data System (ADS)
Beutel, Kenneth L.
1988-03-01
Introductory students of astrodynamics and the space environment are required to have a fundamental understanding of the kinematic behavior of satellite orbits. This thesis develops a standard library that contains the basic formulas for modeling earth-orbiting satellites. This library is used as a basis for implementing a satellite motion simulator that can be used to demonstrate orbital phenomena in the classroom. The thesis surveys the equations for orbital elements, coordinate systems, and analytic formulas, which are combined into a standard method for modeling earth-orbiting satellites. The standard library is written in the C programming language and is designed to be highly portable across a variety of computer environments. The simulation draws heavily on the standards established by the library to produce a graphics-based orbit simulation program written for the Apple Macintosh computer. The simulation demonstrates the utility of the standard library functions but, because of its extensive use of the Macintosh user interface, is not portable to other operating systems.
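A small taste of what such a standard library contains, shown here in Python rather than the thesis's C: a Newton solver for Kepler's equation M = E - e sin(E), the kernel of propagating an elliptical orbit from its classical orbital elements. The sample anomaly and eccentricity are arbitrary.

```python
# Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E,
# then convert to the true anomaly -- the core of two-body propagation.
import math

def eccentric_anomaly(mean_anomaly: float, e: float, tol: float = 1e-12):
    E = mean_anomaly if e < 0.8 else math.pi   # standard starting guess
    for _ in range(50):                        # Newton iteration
        dE = (E - e * math.sin(E) - mean_anomaly) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(E: float, e: float) -> float:
    # tan(nu/2) = sqrt((1+e)/(1-e)) * tan(E/2)
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

E = eccentric_anomaly(math.radians(30.0), e=0.1)
print(math.degrees(true_anomaly(E, 0.1)))   # true anomaly in degrees
```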
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
A Collaborative Inquiry into Museum and Library Early Learning Services
ERIC Educational Resources Information Center
Sirinides, Phil; Fink, Ryan; DuBois, Tesla
2016-01-01
As states, cities, and communities take a more active role in ensuring that all children have access to high quality experiences and opportunities to learn, many are looking to museums and libraries as part of the early childhood education system. Museums and libraries can play a critical role in these efforts, and there is clear momentum and…
Efficient preparation of shuffled DNA libraries through recombination (Gateway) cloning.
Lehtonen, Soili I; Taskinen, Barbara; Ojala, Elina; Kukkurainen, Sampo; Rahikainen, Rolle; Riihimäki, Tiina A; Laitinen, Olli H; Kulomaa, Markku S; Hytönen, Vesa P
2015-01-01
Efficient and robust subcloning is essential for the construction of high-diversity DNA libraries in the field of directed evolution. We have developed a more efficient method for the subcloning of DNA-shuffled libraries by employing recombination cloning (Gateway). The Gateway cloning procedure was performed directly after the gene reassembly reaction, without additional purification and amplification steps, thus simplifying conventional DNA shuffling protocols. Recombination-based cloning, directly from the heterologous reassembly reaction, conserved the high quality of the library and reduced the time required for library construction. The described method is generally applicable to the construction of DNA-shuffled gene libraries.
Common Graphics Library (CGL). Volume 2: Low-level user's guide
NASA Technical Reports Server (NTRS)
Taylor, Nancy L.; Hammond, Dana P.; Theophilos, Pauline M.
1989-01-01
The intent is to instruct the users of the Low-Level routines of the Common Graphics Library (CGL). The Low-Level routines form an application-independent graphics package enabling the user community to construct and design scientific charts conforming to the publication and/or viewgraph process. The Low-Level routines allow the user to design unique or unusual report-quality charts from a set of graphics utilities. The features of these routines can be used stand-alone or in conjunction with other packages to enhance or augment their capabilities. This library is written in ANSI FORTRAN 77 and currently uses a CORE-based underlying graphics package; it is therefore machine-independent, providing support for centralized and/or distributed computer systems.
ERIC Educational Resources Information Center
Breeding, Marshall
1998-01-01
The Online Computer Library Center's (OCLC) access options have kept pace with the evolving trends in telecommunications and the library computing environment. As libraries deploy microcomputers and develop networks, OCLC offers access methods consistent with these environments. OCLC works toward reorienting its network paradigm through TCP/IP…
Rugged: an operational, open-source solution for Sentinel-2 mapping
NASA Astrophysics Data System (ADS)
Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude
2015-10-01
When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, on-board sensor pixel individual lines-of-sight, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of Sentinel-2 Instrument Processing Facilities. The S2 complex viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain. Sentinel-2 stringent operational requirements to process several terabytes of data per week represented a tough challenge, though one that was well met by Rugged in terms of the robustness and performance of the library.
Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne
2013-10-31
Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigations of fragmentation spectra have increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package, "iontree", that can capture, store and analyze MS2 and MS3 mass spectral data from high-throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the use of the package for the systematic analysis and annotation of fragmentation spectra collected on various metabolomics platforms, including direct infusion mass spectrometry and liquid chromatography coupled with either low-resolution or high-resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence complementary to retention time and accurate mass to aid in annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or for de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.
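As a hedged illustration of MS2 spectral comparison (not the package's distMS2 algorithm itself), the sketch below bins two fragmentation spectra on m/z and scores them by cosine similarity, a common basis for spectral matching; the bin width and the toy spectra are invented.

```python
# Generic MS2 spectral comparison: bin on m/z, then cosine similarity.
import math
from collections import defaultdict

def binned(spectrum, width=0.5):
    """spectrum: list of (mz, intensity) pairs -> {bin index: summed intensity}."""
    bins = defaultdict(float)
    for mz, inten in spectrum:
        bins[round(mz / width)] += inten
    return bins

def cosine_similarity(spec_a, spec_b, width=0.5):
    a, b = binned(spec_a, width), binned(spec_b, width)
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

s1 = [(85.0, 40.0), (127.1, 100.0), (185.2, 60.0)]   # toy spectra
s2 = [(85.1, 35.0), (127.0, 90.0), (185.2, 55.0)]
print(round(cosine_similarity(s1, s2), 3))           # -> ~1.0 (near-identical)
```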
Construction of CRISPR Libraries for Functional Screening.
Carstens, Carsten P; Felts, Katherine A; Johns, Sarah E
2018-01-01
Identification of gene function has been aided by the ability to generate targeted gene knockouts or transcriptional repression using the CRISPR/CAS9 system. Using pooled libraries of guide RNA expression vectors that direct CAS9 to a specific genomic site allows identification of genes that are either enriched or depleted in response to a selection scheme, thus linking the affected gene to the chosen phenotype. The quality of the data generated by the screening is dependent on the quality of the guide RNA delivery library with regards to error rates and especially evenness of distribution of the guides. Here, we describe a method for constructing complex plasmid libraries based on pooled designed oligomers with high representation and tight distributions. The procedure allows construction of plasmid libraries of >60,000 members with a 95th/5th percentile ratio of less than 3.5.
Gao, Jin-Xin; Jing, Jing; Yu, Chuan-Jin; Chen, Jie
2015-06-01
Curvularia lunata is an important maize foliar fungal pathogen that is widely distributed in the maize-growing areas of China, and several key pathogenic factors have been isolated. A yeast two-hybrid (Y2H) library is a very useful platform for further unraveling novel pathogenic factors in C. lunata. To construct a high-quality, full-length-expression cDNA library from C. lunata for application to pathogenesis-related protein-protein interaction screening, total RNA was extracted. The SMART (Switching Mechanism At 5' end of the RNA Transcript) technique was used for cDNA synthesis. Double-stranded cDNA was ligated into the pGADT7-Rec vector with Herring Testes Carrier DNA using a homologous recombination method. The ligation mixture was transformed into competent yeast AH109 cells to construct the primary cDNA library. A high-quality library was successfully established according to quality evaluation. The transformation efficiency was about 6.39×10(5) transformants/3 μg pGADT7-Rec. The titer of the primary cDNA library was 2.5×10(8) cfu/mL. The number of clones in the cDNA library was 2.46×10(5). Randomly picked clones showed that the recombination rate was 88.24%. Gel electrophoresis results indicated that the insert fragments ranged from 0.4 kb to 3.0 kb. The melanin synthesis protein Brn1 (1,3,8-trihydroxynaphthalene reductase) was used as a "bait" to test the sufficiency of the Y2H library. As a result, a cDNA clone was identified encoding the VelB protein, which is known to be involved in the regulation of diverse cellular processes, including control of secondary metabolism (melanin and toxin production) in many filamentous fungi. Further study on the exact role of the VelB gene is underway.
RNA-SeQC: RNA-seq metrics for quality control and process optimization.
DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad
2012-06-01
RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.
ParallABEL: an R library for generalized parallelization of genome-wide association studies.
Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S
2010-04-29
Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. Acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous, however. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics; the input data of this group are the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample; the input data of this group are the individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group are pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation; the input data of this group are pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL's performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis with the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.
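The first group of computations (one independent statistic per SNP) is what makes GWA analysis embarrassingly parallel. A conceptual Python sketch using multiprocessing rather than Rmpi, with a toy chi-square statistic standing in for a real association test:

```python
from multiprocessing import Pool

import numpy as np

def snp_statistic(args):
    """Group-one computation: one independent statistic per SNP.
    A toy chi-square on allele counts stands in for a real test."""
    genotypes, phenotype = args
    case = genotypes[phenotype == 1].sum()
    ctrl = genotypes[phenotype == 0].sum()
    expected = (case + ctrl) / 2 or 1.0
    return ((case - expected) ** 2 + (ctrl - expected) ** 2) / expected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    phenotype = rng.integers(0, 2, size=500)
    snps = [rng.integers(0, 3, size=500) for _ in range(10_000)]
    with Pool(8) as pool:  # one worker per processor
        stats = pool.map(snp_statistic, ((s, phenotype) for s in snps))
    print(len(stats), max(stats))
```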
PuLSE: Quality control and quantification of peptide sequences explored by phage display libraries.
Shave, Steven; Mann, Stefan; Koszela, Joanna; Kerr, Alastair; Auer, Manfred
2018-01-01
The design of highly diverse phage display libraries is based on the assumption that DNA bases are incorporated at similar rates within the randomized sequence. As library complexity increases and the expected copy numbers of unique sequences decrease, the exploration of library space becomes sparser and the presence of truly random sequences becomes critical. We present the program PuLSE (Phage Library Sequence Evaluation) as a tool for assessing the randomness, and therefore diversity, of phage display libraries. PuLSE runs on a collection of sequence reads in the fastq file format and generates tables profiling the library in terms of unique DNA sequence counts and positions, translated peptide sequences, and normalized 'expected' occurrences from base to residue codon frequencies. The output allows at-a-glance quantitative quality control of a phage library in terms of sequence coverage at both the DNA base and translated protein residue level, which has been missing from toolsets and the literature. The open source program PuLSE is available in two formats: a C++ source code package for compilation and integration into existing bioinformatics pipelines, and precompiled binaries for ease of use.
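The per-position randomness check that PuLSE reports can be illustrated in a few lines of Python; this is a conceptual sketch over toy reads, not PuLSE's own code:

```python
from collections import Counter

def position_base_bias(seqs):
    """Per-position base frequencies across the randomized region;
    deviation from 0.25 flags non-random base incorporation."""
    freqs = []
    for i in range(len(seqs[0])):
        counts = Counter(s[i] for s in seqs)
        total = sum(counts.values())
        freqs.append({b: counts.get(b, 0) / total for b in "ACGT"})
    return freqs

reads = ["ACGT", "AAGT", "ACGA", "TCGT"]  # stand-ins for fastq reads
for i, f in enumerate(position_base_bias(reads)):
    print(i, f)
```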
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of application data sets, the requirement for high performance, large capacity, reliable, secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be expended this year and next at the server class or mid-range storage systems.
Fantini, Marco; Pandolfini, Luca; Lisi, Simonetta; Chirichella, Michele; Arisi, Ivan; Terrigno, Marco; Goracci, Martina; Cremisi, Federico; Cattaneo, Antonino
2017-01-01
Antibody libraries are important resources to derive antibodies to be used for a wide range of applications, from structural and functional studies to intracellular protein interference studies to developing new diagnostics and therapeutics. Whatever the goal, the key parameter for an antibody library is its complexity (also known as diversity), i.e. the number of distinct elements in the collection, which directly reflects the probability of finding in the library an antibody against a given antigen, of sufficiently high affinity. Quantitative evaluation of antibody library complexity and quality has long been inadequately addressed, due to the high similarity and length of the sequences of the library. Complexity was usually inferred from the transformation efficiency and tested by fingerprinting and/or sequencing of a few hundred random library elements. Inferring complexity from such a small sampling is, however, very rudimentary and gives limited information about the real diversity, because complexity does not scale linearly with sample size. Next-generation sequencing (NGS) has opened new ways to tackle antibody library complexity and quality assessment. However, much remains to be done to fully exploit the potential of NGS for the quantitative analysis of antibody repertoires and to overcome current limitations. To obtain a more reliable antibody library complexity estimate, here we present a new, PCR-free, NGS approach to sequence antibody libraries on the Illumina platform, coupled to a new bioinformatic analysis and software (Diversity Estimator of Antibody Library, DEAL) that allows the complexity to be reliably estimated while taking sequencing error into consideration.
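One way to make complexity estimation error-aware, as DEAL does, is to avoid counting likely sequencing errors as new library members. A simplified Python sketch (absorbing singletons one mismatch away from an abundant sequence; DEAL's actual model is more sophisticated):

```python
from collections import Counter

def hamming1(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def estimate_complexity(seqs):
    """Count distinct sequences after absorbing singletons that sit one
    mismatch away from a more abundant sequence (likely read errors)."""
    counts = Counter(seqs)
    members = []
    for seq, n in counts.most_common():
        if n == 1 and any(hamming1(seq, m) for m in members):
            continue  # treat as a sequencing error of an existing member
        members.append(seq)
    return len(members)

reads = ["CDRSEQA"] * 50 + ["CDRSEQT"] + ["CDRXYZQ"] * 3
print(estimate_complexity(reads))  # 2: the singleton folds into CDRSEQA
```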
Amino acid substitutions in random mutagenesis libraries: lessons from analyzing 3000 mutations.
Zhao, Jing; Frauenkron-Machedjou, Victorine Josiane; Kardashliev, Tsvetan; Ruff, Anna Joëlle; Zhu, Leilei; Bocola, Marco; Schwaneberg, Ulrich
2017-04-01
The quality of amino acid substitution patterns in random mutagenesis libraries is decisive for the success of directed evolution campaigns. In this manuscript, we provide a detailed analysis of amino acid substitutions by analyzing 3000 mutations from three random mutagenesis libraries (1000 mutations each; epPCR with a low mutation frequency, epPCR with a high mutation frequency, and SeSaM-Tv P/P) employing lipase A from Bacillus subtilis (bsla). The diversity analysis concludes with a comparison of the numbers of beneficial variants obtained in these three random mutagenesis libraries against a site saturation mutagenesis (SSM) library covering the natural diversity at each amino acid position of BSLA. Seventy-six percent of the SeSaM-Tv P/P-generated substitutions yield chemically different amino acid substitutions, compared to 64% (epPCR-low) and 69% (epPCR-high). Unique substitutions from one amino acid to others are termed distinct amino acid substitutions. In the SeSaM-Tv P/P library, 35% of all theoretical distinct amino acid substitutions were found among the 1000 mutations, compared to 25% (epPCR-low) and 26% (epPCR-high). Thirty-six percent of the distinct amino acid substitutions found in SeSaM-Tv P/P were unobtainable by epPCR-low. Comparison with the SSM library showed that epPCR-low covers 15%, epPCR-high 18%, and SeSaM-Tv P/P 21% of obtainable beneficial amino acid positions. In essence, this study provides the first insights into the quality of epPCR and SeSaM-Tv P/P libraries in terms of amino acid substitutions, their chemical differences, and the number of obtainable beneficial amino acid positions.
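Counting distinct amino acid substitutions from a list of observed variants is a small exercise; a Python sketch with hypothetical epPCR-style mutation strings:

```python
def distinct_substitutions(mutations):
    """Unique (wild-type residue -> substituted residue) pairs -- the
    'distinct amino acid substitutions' tallied in the study."""
    return {(m[0], m[-1]) for m in mutations}

variants = ["A15V", "A15T", "K44R", "A97V"]  # hypothetical epPCR variants
subs = distinct_substitutions(variants)
print(sorted(subs))                  # [('A', 'T'), ('A', 'V'), ('K', 'R')]
print(len(subs), "of", 20 * 19, "theoretical distinct substitutions")
```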
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Roser, Robert; LeCompte, Tom
2015-10-29
Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.
Automated Library of the Future: Estrella Mountain Community College Center.
ERIC Educational Resources Information Center
Community & Junior College Libraries, 1991
1991-01-01
Describes plans for the Integrated High Technology Library (IHTL) at the Maricopa County Community College District's new Estrella Mountain campus, covering collaborative planning, the IHTL's design, and guidelines for the new center and campus (e.g., establishing computing/information-access across the curriculum; developing lifelong learners;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, Slaven
2016-11-06
GridKit is a software development kit for interfacing power systems and power grid application software with high performance computing (HPC) libraries developed at national laboratories and in academia. It is also intended as an interoperability layer between different numerical libraries. GridKit is not a standalone application, but comes with a suite of test examples illustrating possible usage.
Training for Techies: A Schoolwise Commitment.
ERIC Educational Resources Information Center
Farmer, Lesley S. J.
1998-01-01
Outlines the Technical Aide (TA) internship program in the Tamalpais Union High School District (Larkspur, California) where students skilled in computer use facilitate technology use within the school. A TA program can provide needed personnel and service in the library as well as highlight library staff competence in technology. Presents tips…
King, D N
1987-01-01
Hospital health sciences libraries represent, for the vast majority of health professionals, the most accessible source for library information and services. Most health professionals do not have available the specialized services of a clinical medical librarian, and rely instead upon general information services for their case-related information needs. The ability of the hospital library to meet these needs and the impact of the information on quality patient care have not been previously examined. A study was conducted in eight hospitals in the Chicago area as a quality assurance project. A total of 176 physicians, nurses, and other health professionals requested information from their hospital libraries related to a current case or clinical situation. They then assessed the quality of information received, its cognitive value, its contribution to patient care, and its impact on case management. Nearly two-thirds of the respondents asserted that they would definitely or probably handle their cases differently as a result of the information provided by the library. Almost all rated the libraries' performance and response highly. An overview of the context and purpose of the study, its methods, selected results, limitations, and conclusions are presented here, as is a review of selected earlier research. PMID:3450340
ERIC Educational Resources Information Center
Carroll, Margaret Aby; Chandler, Yvonne J.
This study examines whether an analysis of characteristics of libraries or information centers and librarians in highly productive companies yields operational models and standards that can improve their efficiency and effectiveness and their parent organization's productivity. Data was collected using an e-mail survey instrument sent to 500 large…
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
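For reference, the computation being accelerated is the standard scaled forward algorithm. A compact NumPy sketch (zipHMM's contribution, caching the per-symbol update for substrings that recur in the input, is noted in the comment but not implemented here):

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: O(len(obs) * n_states^2) time.
    zipHMM's speed-up comes from reusing the update for repeated
    substrings of the input; that caching is omitted in this sketch."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

pi = np.array([0.5, 0.5])                # initial distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transitions
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emissions
print(forward_loglik(pi, A, B, [0, 1, 1, 0, 0]))
```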
Charon Message-Passing Toolkit for Scientific Computations
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)
2000-01-01
Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes, such as those used in the numerical computation of fluid flows, into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message passing standard. We present an overview of the functionality of Charon and some representative results.
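The "common communications on structured grids" that such a library packages are typified by the ghost-cell (halo) exchange. A conceptual mpi4py sketch of that pattern, not Charon's C/Fortran interface:

```python
# Run with e.g.: mpiexec -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one strip of a periodic 1-D grid plus a ghost cell per side.
local = np.full(10 + 2, float(rank))
left, right = (rank - 1) % size, (rank + 1) % size

# Push boundary values into the neighbours' ghost cells.
comm.Sendrecv(sendbuf=local[1:2], dest=left, recvbuf=local[-1:], source=right)
comm.Sendrecv(sendbuf=local[-2:-1], dest=right, recvbuf=local[0:1], source=left)
print(rank, "ghosts:", local[0], local[-1])
```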
Fast parallel tandem mass spectral library searching using GPU hardware acceleration.
Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B
2011-06-03
Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
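The core operation is easy to state in array form: normalize the spectra and take dot products against the whole library at once. A NumPy sketch of that kernel (on a GPU the same array calls could run via a drop-in array library such as CuPy; this is not FastPaSS's CUDA code):

```python
import numpy as np

def best_matches(query, library):
    """Normalized dot products of one binned query spectrum against the
    whole library in a single pass -- the arithmetically intense kernel."""
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = lib @ q
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(2)
library = rng.random((10_000, 512))          # 10k spectra, 512 m/z bins
query = library[42] + 0.05 * rng.random(512)
order, scores = best_matches(query, library)
print(order[0], round(float(scores[order[0]]), 4))  # expect index 42 first
```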
Broering, N C
1983-01-01
Georgetown University's Library Information System (LIS), an integrated library system designed and implemented at the Dahlgren Memorial Library, is broadly described from an administrative point of view. LIS' functional components consist of eight "user-friendly" modules: catalog, circulation, serials, bibliographic management (including Mini-MEDLINE), acquisitions, accounting, networking, and computer-assisted instruction. This article touches on emerging library services, user education, and computer information services, which are also changing the role of staff librarians. The computer's networking capability brings the library directly to users through personal or institutional computers at remote sites. The proposed Integrated Medical Center Information System at Georgetown University will include interface with LIS through a network mechanism. LIS is being replicated at other libraries, and a microcomputer version is being tested for use in a hospital setting. PMID:6688749
Spectral quality requirements for effluent identification
NASA Astrophysics Data System (ADS)
Czerwinski, R. N.; Seeley, J. A.; Wack, E. C.
2005-11-01
We consider the problem of remotely identifying gaseous materials using passive sensing of long-wave infrared (LWIR) spectral features at hyperspectral resolution. Gaseous materials are distinguishable in the LWIR because of their unique spectral fingerprints. A sensor degraded in capability by noise or limited spectral resolution, however, may be unable to positively identify contaminants, especially if they are present in low concentrations or if the spectral library used for comparisons includes materials with similar spectral signatures. This paper will quantify the relative importance of these parameters and express the relationships between them in a functional form which can be used as a rule of thumb in sensor design or in assessing sensor capability for a specific task. This paper describes the simulation of remote sensing data containing a gas cloud. In each simulation, the spectra are degraded in spectral resolution and through the addition of noise to simulate spectra collected by sensors of varying design and capability. We form a trade space by systematically varying the number of sensor spectral channels and signal-to-noise ratio over a range of values. For each scenario, we evaluate the capability of the sensor for gas identification by computing the ratio of the F-statistic for the truth gas to the same statistic computed over the rest of the library. The effect of the scope of the library is investigated as well, by computing statistics on the variability of the identification capability as the library composition is varied randomly.
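A minimal version of the identification figure of merit can be written as the ratio of F-statistics from least-squares fits of each library signature to the measured spectrum. The Python sketch below uses a no-intercept fit and synthetic signatures, which is an assumption; the paper's exact statistic may differ:

```python
import numpy as np

def f_statistic(y, sig):
    """F-statistic of a no-intercept least-squares fit y ~ b * sig."""
    b = sig @ y / (sig @ sig)
    fitted = b * sig
    rss = ((y - fitted) ** 2).sum()
    return (fitted @ fitted) / (rss / (len(y) - 1))

def identification_margin(measured, library, truth):
    """Truth gas's F-statistic over the best competitor's; > 1 means
    the truth gas out-scores every other library entry."""
    scores = {name: f_statistic(measured, sig) for name, sig in library.items()}
    best_other = max(v for k, v in scores.items() if k != truth)
    return scores[truth] / best_other

rng = np.random.default_rng(3)
library = {f"gas{i}": rng.random(200) for i in range(20)}  # synthetic signatures
measured = 0.8 * library["gas7"] + 0.02 * rng.standard_normal(200)
print(identification_margin(measured, library, "gas7"))
```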
Curatr: a web application for creating, curating and sharing a mass spectral library.
Palmer, Andrew; Phapale, Prasad; Fay, Dominik; Alexandrov, Theodore
2018-04-15
We have developed a web application, curatr, for the rapid generation of high-quality mass spectral fragmentation libraries from liquid-chromatography mass spectrometry datasets. Curatr handles datasets from single or multiplexed standards and extracts chromatographic profiles and potential fragmentation spectra for multiple adducts. An intuitive interface helps users to select high-quality spectra, which are stored along with searchable molecular information, the provenance of each standard, and experimental metadata. Curatr supports export to several standard formats for use with third-party software or submission to repositories. We demonstrate the use of curatr to generate the EMBL Metabolomics Core Facility spectral library http://curatr.mcf.embl.de. Source code and example data are at http://github.com/alexandrovteam/curatr/. palmer@embl.de. Supplementary data are available at Bioinformatics online.
Stone, B N; Griesinger, G L; Modelevsky, J L
1984-01-01
We describe an interactive computational tool, PLASMAP, which allows the user to electronically store, retrieve, and display circular restriction maps. PLASMAP permits users to construct libraries of plasmid restriction maps as a set of files which may be edited in the laboratory at any time. The display feature of PLASMAP quickly generates device-independent, artist-quality, full-color or monochrome hard copies or CRT screens of complex, conventional circular restriction maps. PMID:6320096
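The core of circular-map display is mapping base-pair positions to angles around the plasmid. A tiny Python sketch of that conversion (the site positions are illustrative, not from the paper):

```python
def site_angles(sites, plasmid_length):
    """Map restriction-site positions (bp) on a circular plasmid to
    angles in degrees from the origin, for circular display."""
    return {name: 360.0 * pos / plasmid_length for name, pos in sites.items()}

sites = {"EcoRI": 396, "BamHI": 417, "PstI": 435}  # illustrative positions
print(site_angles(sites, 2686))
```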
Gauging Information and Computer Skills for Curriculum Planning
ERIC Educational Resources Information Center
Krueger, Janice M.; Ha, YooJin
2012-01-01
Background: All types of librarians are expected to possess information and computer skills to actively assist patrons in accessing information and in recognizing reputable sources. Mastery of information and computer skills is a high priority for library and information science programs since graduate students have varied multidisciplinary…
ERIC Educational Resources Information Center
Palmini, Cathleen C.
1994-01-01
Describes a survey of Wisconsin academic library support staff that explored the effects of computerization of libraries on work and job satisfaction. Highlights include length of employment; time spent at computer terminals; training; computer background; computers as timesavers; influence of automation on effectiveness; and job frustrations.…
The Gaia FGK benchmark stars. High resolution spectral library
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.
2014-06-01
Context. An increasing number of high-resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise and they need to be addressed and minimized. Aims: We are providing a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that can arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98
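One homogenization step, resampling every instrument's spectrum onto a common wavelength grid, can be sketched in a few lines of NumPy; the grid, scaling, and test spectrum below are assumptions for illustration, not the library's pipeline:

```python
import numpy as np

def homogenize(wave, flux, common_grid):
    """Resample one observed spectrum onto the library's common
    wavelength grid after a crude median normalization."""
    flux = flux / np.nanmedian(flux)
    return np.interp(common_grid, wave, flux)

grid = np.linspace(480.0, 680.0, 2000)   # assumed common grid (nm)
wave = np.linspace(475.0, 690.0, 3500)   # one instrument's sampling
flux = 1.0 + 0.01 * np.sin(wave)         # synthetic spectrum
print(homogenize(wave, flux, grid)[:3])
```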
Harding, Andrew D
2012-01-01
The use of infusion pumps that incorporate "smart" technology (smart pumps) can reduce the risks associated with receiving IV therapies. Smart pump technology incorporates safeguards such as a list of high-alert medications, soft and hard dosage limits, and a drug library that can be tailored to specific patient care areas. Its use can help to improve patient safety and to avoid potentially catastrophic harm associated with medication errors. But when one independent community hospital in Massachusetts switched from older mechanical pumps to smart pumps, it neglected to assign an "owner" to oversee the implementation process. One result was that nurses were using the smart pump library for only 37% of all infusions. To increase the pump library usage percentage, thereby reducing the risks associated with infusion and improving patient safety, the hospital undertook a continuous quality improvement project over a four-month period in 2009. With the involvement of direct care nurses, and using quantitative data available from the smart pump software, the nursing quality and pharmacy quality teams identified ways to improve pump and pump library use. A secondary goal was to calculate the hospital's return on investment for the purchase of the smart pumps. Several interventions were developed and, on the first of each month, implemented. By the end of the project, pump library usage had nearly doubled, and the hospital had completely recouped its initial investment.
Construction of human antibody gene libraries and selection of antibodies by phage display.
Frenzel, André; Kügler, Jonas; Wilke, Sonja; Schirrmann, Thomas; Hust, Michael
2014-01-01
Antibody phage display is the most commonly used in vitro selection technology and has yielded thousands of useful antibodies for research, diagnostics, and therapy. The prerequisite for successful generation and development of human recombinant antibodies using phage display is the construction of a high-quality antibody gene library. Here, we describe the methods for the construction of human immune and naive scFv gene libraries. The success also depends on the panning strategy for the selection of binders from these libraries. In this article, we describe a panning strategy that is high-throughput compatible and allows parallel selection in microtiter plates.
Construction of a filamentous phage display peptide library.
Fagerlund, Annette; Myrset, Astrid Hilde; Kulseth, Mari Ann
2014-01-01
The concept of phage display is based on insertion of random oligonucleotides at an appropriate location within a structural gene of a bacteriophage. The resulting phage will constitute a library of random peptides displayed on the surface of the bacteriophages, with the encoding genotype packaged within each phage particle. Using a phagemid/helper phage system, the random peptides are interspersed between wild-type coat proteins. Libraries of phage-expressed peptides may be used to search for novel peptide ligands to target proteins. The success of finding a peptide with a desired property in a given library is highly dependent on the diversity and quality of the library. The protocols in this chapter describe the construction of a high-diversity library of phagemid vector encoding fusions of the phage coat protein pVIII with random peptides, from which a phage library displaying random peptides can be prepared.
Formal specification of human-computer interfaces
NASA Technical Reports Server (NTRS)
Auernheimer, Brent
1990-01-01
A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
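One plausible reading of such a fingerprint is a normalized vector of MPI call frequencies compared against known profiles. The Python sketch below illustrates that idea under assumed call counts; it is not the paper's actual feature set:

```python
import numpy as np

CALLS = ["MPI_Send", "MPI_Recv", "MPI_Allreduce", "MPI_Bcast", "MPI_Wait"]

def fingerprint(call_counts):
    """Normalized frequency vector of MPI calls observed at runtime."""
    v = np.array([call_counts.get(c, 0) for c in CALLS], dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def similarity(a, b):
    return float(fingerprint(a) @ fingerprint(b))

observed = {"MPI_Allreduce": 9000, "MPI_Send": 120, "MPI_Recv": 120}
known_solver = {"MPI_Allreduce": 8000, "MPI_Send": 100, "MPI_Recv": 100}
print(similarity(observed, known_solver))  # near 1.0 => consistent profile
```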
ERIC Educational Resources Information Center
Mizer, Linda; And Others
1990-01-01
Includes 12 articles that suggest activities to involve junior and senior high school students with their school libraries. Suggestions include a program to promote the reading of quality books; the use of questionnaires to improve individualized service; a checklist for book fairs; library clubs; student book reviewers; booktalks; research…
Using Web sites on quality health care for teaching consumers in public libraries.
Oermann, Marilyn H; Lesley, Marsha L; VanderWal, Jillon S
2005-01-01
More and more consumers are searching the Internet for health information. Health Web sites vary in quality, though, and not all consumers are aware of the need to evaluate the information they find on the Web. Nurses and other health providers involved in patient education can evaluate Web sites and suggest quality sites for patients to use. This article describes a project we implemented in 2 public libraries to educate consumers about quality health care and patient safety using Web sites that we had evaluated earlier. Participants (n = 103) completed resources on health care quality; questions patients should ask about their diagnoses and treatment options; changes in Medicare and Medicare options, or ways to make their health benefits work for them; and tips to help prevent medical errors. Most consumers were highly satisfied with the Web sites and the information they learned on quality care from these resources. Many participants did not have Internet access at home or work and instead used the library to search the Web. Information about the Web sites used in this project and other sites on quality care can be made available in libraries and community settings and as part of patient education resources in hospitals. The Web provides easy access for consumers to information about patient safety initiatives and health care quality in general.
Assembling short reads from jumping libraries with large insert sizes.
Vasilinetc, Irina; Prjibelski, Andrey D; Gurevich, Alexey; Korobeynikov, Anton; Pevzner, Pavel A
2015-10-15
Advances in Next-Generation Sequencing technologies and sample preparation recently enabled generation of high-quality jumping libraries that have a potential to significantly improve short read assemblies. However, assembly algorithms have to catch up with experimental innovations to benefit from them and to produce high-quality assemblies. We present a new algorithm that extends the recently described exSPAnder universal repeat resolution approach to enable its application to several challenging data types, including jumping libraries generated by the recently developed Illumina Nextera Mate Pair protocol. We demonstrate that, with these improvements, bacterial genomes often can be assembled in a few contigs using only a single Nextera Mate Pair library of short reads. The described algorithms are implemented in C++ as part of the SPAdes genome assembler, which is freely available at bioinf.spbau.ru/en/spades. ap@bioinf.spbau.ru. Supplementary data are available at Bioinformatics online.
Chat reference service in medical libraries: part 2--Trends in medical school libraries.
Dee, Cheryl R
2003-01-01
An increasing number of medical school libraries offer chat service to provide immediate, high quality information at the time and point of need to students, faculty, staff, and health care professionals. Part 2 of Chat Reference Service in Medical Libraries presents a snapshot of the current trends in chat reference service in medical school libraries. In late 2002, 25 (21%) medical school libraries provided chat reference. Trends in chat reference services in medical school libraries were compiled from an exploration of medical school library Web sites and informal correspondence from medical school library personnel. Many medical libraries are actively investigating and planning new chat reference services, while others have decided not to pursue chat reference at this time. Anecdotal comments from medical school library staff provide insights into chat reference service.
Chaste: An Open Source C++ Library for Computational Physiology and Biology
Mirams, Gary R.; Arthurs, Christopher J.; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Corrias, Alberto; Davit, Yohan; Dunn, Sara-Jane; Fletcher, Alexander G.; Harvey, Daniel G.; Marsh, Megan E.; Osborne, James M.; Pathmanathan, Pras; Pitt-Francis, Joe; Southern, James; Zemzemi, Nejib; Gavaghan, David J.
2013-01-01
Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials. PMID:23516352
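As a flavor of the ODE problems such solver modules handle (Chaste itself is C++, and this is not its API), a minimal SciPy sketch of logistic growth, a common building block of tumour models:

```python
from scipy.integrate import solve_ivp

def logistic(t, y, r=0.5, K=1e6):
    """Logistic growth: a minimal stand-in for the cell-level ODE
    systems a reusable solver module is asked to integrate."""
    return [r * y[0] * (1 - y[0] / K)]

sol = solve_ivp(logistic, (0.0, 40.0), [100.0], max_step=0.5)
print(sol.y[0][-1])  # approaches the carrying capacity K
```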
Measuring Up...And Gladly Would (S)he Learn, and Gladly Teach.
ERIC Educational Resources Information Center
Gordon, Carol
2001-01-01
Presents a dramatization of a high school library in 1968, a place with wooden bookcases, card catalog, wooden furniture and tiptoeing librarian through 30 years of changes to a library media center with banks of computers, librarian-teacher cooperation, emphasis on authentic assessment, and the satisfaction of librarian, teachers, and students…
COM: Decisions and Applications in a Small University Library.
ERIC Educational Resources Information Center
Schwarz, Philip J.
Computer-output microfilm (COM) is used at the University of Wisconsin-Stout Library to generate reports from its major machine readable data bases. Conditions indicating the need to convert to COM include existence of a machine readable data base and high cost of report production. Advantages and disadvantages must also be considered before…
NASA Technical Reports Server (NTRS)
Grams, R. R.
1982-01-01
A system designed to access a large range of available medical textbook information in an online interactive fashion is described. A high level query type database manager, INQUIRE, is used. Operating instructions, system flow diagrams, database descriptions, text generation, and error messages are discussed. User information is provided.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
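The non-uniform sampling step can be illustrated directly: allocate Monte Carlo histories across spots in proportion to the current spot intensities. A Python sketch under that proportional-allocation assumption (the paper's APS scheme adapts over optimization iterations):

```python
import numpy as np

def sample_spot_histories(intensities, total_histories, seed=4):
    """Allocate MC histories across pencil-beam spots in proportion to
    current spot intensity, concentrating work where dose matters."""
    w = np.asarray(intensities, dtype=float)
    rng = np.random.default_rng(seed)
    return rng.multinomial(total_histories, w / w.sum())

spot_intensities = [0.1, 5.0, 2.5, 0.01]  # hypothetical optimizer output
print(sample_spot_histories(spot_intensities, 1_000_000))
```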
ERIC Educational Resources Information Center
Shank, Russell
Access to scientific and technical information is essential to the conduct of high quality research and development work. Indonesia's scientists and engineers in Government research institutes are generally not being well-served by their own libraries. The most serious deficiencies are: (1) inadequately trained library staffs, (2) lack of…
The Control Point Library Building System. [for Landsat MSS and RBV geometric image correction
NASA Technical Reports Server (NTRS)
Niblack, W.
1981-01-01
The Earth Resources Observation System (EROS) Data Center in Sioux Falls, South Dakota distributes precision corrected Landsat MSS and RBV data. These data are derived from master data tapes produced by the Master Data Processor (MDP), NASA's system for computing and applying corrections to the data. Included in the MDP is the Control Point Library Building System (CPLBS), an interactive, menu-driven system which permits a user to build and maintain libraries of control points. The control points are required to achieve the high geometric accuracy desired in the output MSS and RBV data. This paper describes the processing performed by CPLBS, the accuracy of the system, and the host computer and special image viewing equipment employed.
ERIC Educational Resources Information Center
Sullivan, Todd
Using an IBM System/360 Model 50 computer, the New York Statewide Film Library Network schedules film use, reports on materials handling and statistics, and provides for interlibrary loan of films. Communications between the film libraries and the computer are maintained by Teletype model 33 ASR Teletypewriter terminals operating on TWX…
Introduction to the Use of Computers in Libraries: A Textbook for the Non-Technical Student.
ERIC Educational Resources Information Center
Ogg, Harold C.
This book outlines computing and information science from the perspective of what librarians and educators need to do with computer technology and how it can help them perform their jobs more efficiently. It provides practical explanations and library applications for non-technical users of desktop computers and other library automation tools.…
ERIC Educational Resources Information Center
Tunon, Johanna; Brydges, Bruce
2005-01-01
University libraries are becoming increasingly aware of the need to assess the quality of students' information literacy and library research skills and to use this assessment data to effectively improve the quality of university library services to graduate programs. However, libraries have had difficulties finding ways to accomplish this both…
Foight, Glenna Wink; Chen, T Scott; Richman, Daniel; Keating, Amy E
2017-01-01
Peptide reagents with high affinity or specificity for their target protein interaction partner are of utility for many important applications. Optimization of peptide binding by screening large libraries is a proven and powerful approach. Libraries designed to be enriched in peptide sequences that are predicted to have desired affinity or specificity characteristics are more likely to yield success than random mutagenesis. We present a library optimization method in which the choice of amino acids to encode at each peptide position can be guided by available experimental data or structure-based predictions. We discuss how to use analysis of predicted library performance to inform rounds of library design. Finally, we include protocols for more complex library design procedures that consider the chemical diversity of the amino acids at each peptide position and optimize a library score based on a user-specified input model.
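For an additive input model, the mean model score of a degenerate library decomposes position by position, which makes library scoring cheap. A toy Python sketch with hypothetical per-position weights (not the authors' scoring model):

```python
def library_score(allowed, model):
    """Mean model score over every peptide the degenerate library encodes.
    For an additive model the mean decomposes position by position."""
    return sum(sum(model[(i, aa)] for aa in aas) / len(aas)
               for i, aas in enumerate(allowed))

allowed = ["DE", "ILV", "KR"]  # a 2 x 3 x 2 = 12-member toy library
model = {(i, aa): (ord(aa) - 65) / 25            # hypothetical weights
         for i in range(len(allowed)) for aa in "DEILVKR"}
print(library_score(allowed, model))
```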
Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspesi, G; Bai, J; Deese, R
2015-05-12
Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit’s primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit’s capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.
The Organizational Role of Web Services
ERIC Educational Resources Information Center
Mitchell, Erik
2011-01-01
The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…
ERIC Educational Resources Information Center
Alrashid, Tareq M.; Barker, James A.; Christian, Brian S.; Cox, Steven C.; Rabne, Michael W.; Slotta, Elizabeth A.; Upthegrove, Luella R.
1998-01-01
Describes Case Western Reserve University's (CWRU's) digital library project that examines the networked delivery of full-text materials and high-quality images to provide students excellent supplemental instructional resources delivered directly to their dormitory rooms. Reviews intellectual property (IP) management requirements and describes…
Eating and Reading in the Library.
ERIC Educational Resources Information Center
Trelease, Jim; Krashen, Stephen
1996-01-01
This article, citing the example of large bookselling chains who offer cafes in their stores, advocates the provision of food and drink in school libraries as well. High-quality food, made available by vending machine, would increase levels of wellness, energy, and teachability. Protests involving mess, lack of money, and difficulties with parents…
Young Adults Deserve the Best: YALSA's Competencies in Action
ERIC Educational Resources Information Center
Flowers, Sarah
2010-01-01
As high school enrollment continues to rise, the need for effective librarianship serving young adults is greater than ever before. "Young Adults Deserve the Best: Competencies for Librarians Serving Youth," developed by Young Adult Library Services Association (YALSA), is a document outlining areas of focus for providing quality library service…
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also include heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.
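The pattern such a portability layer wraps is plain point-to-point message passing. A conceptual mpi4py sketch of that pattern, not APPL's C/FORTRAN interface:

```python
# Run with: mpiexec -n 2 python sendrecv.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"step": 1, "payload": [1.0, 2.0]}, dest=1, tag=7)
    print("rank 0 sent a work unit")
elif rank == 1:
    msg = comm.recv(source=0, tag=7)
    print("rank 1 received", msg)
```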
Optimising the Parallelisation of OpenFOAM Simulations
Keough, Shannon (Maritime Division, Defence Science and Technology Organisation; report DSTO-TR-2987)
2014-06-01
The OpenFOAM computational fluid dynamics toolbox allows parallel computation of ... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI ...
ERIC Educational Resources Information Center
Buczynski, James Andrew
2005-01-01
Developing a library collection to support the curriculum of Canada's largest computer studies school has debunked many myths about collecting computer science and technology information resources. Computer science students are among the heaviest print book and e-book users in the library. Circulation statistics indicate that the demand for print…
JMS Proxy and C/C++ Client SDK
NASA Technical Reports Server (NTRS)
Wolgast, Paul; Pechkam, Paul
2007-01-01
JMS Proxy and C/C++ Client SDK (JMS signifies "Java messaging service" and "SDK" signifies "software development kit") is a software package for developing interfaces that enable legacy programs (here denoted "clients") written in the C and C++ languages to communicate with each other via a JMS broker. This package consists of two main components: the JMS proxy server component and the client C library SDK component. The JMS proxy server component implements a native Java process that receives and responds to requests from clients. This component can run on any computer that supports Java and a JMS client. The client C library SDK component is used to develop a JMS client program running in each affected C or C++ environment, without need for running a Java virtual machine in the affected computer. A C client program developed by use of this SDK has most of the quality-of-service characteristics of standard Java-based client programs, including the following: Durable subscriptions; Asynchronous message receipt; Such standard JMS message qualities as "TimeToLive," "Message Properties," and "DeliveryMode" (as the quoted terms are defined in previously published JMS documentation); and Automatic reconnection of a JMS proxy to a restarted JMS broker.
Computer Information Project for Monographs at the Medical Research Library of Brooklyn
Koch, Michael S.; Kovacs, Helen
1973-01-01
The article describes a resource library's computer-based project that provides cataloging and other bibliographic services and promotes greater use of the book collection. A few studies are cited to show the significance of monographic literature in medical libraries. The educational role of the Medical Research Library of Brooklyn is discussed, both with regard to the parent institution and to smaller medical libraries in the same geographic area. Types of aid given to smaller libraries are enumerated. Information is given on methods for providing machine-produced catalog cards, current awareness notes, and bibliographic lists. Actualities and potentialities of the computer project are discussed. PMID:4579767
Computer-Based Training for Library Staff: From Demonstration to Continuing Program.
ERIC Educational Resources Information Center
Bayne, Pauline S.
1993-01-01
Describes a demonstration project developed at the University of Tennessee (Knoxville) libraries to train nonprofessional library staff with computer-based training using HyperCard that was created by librarians rather than by computer programmers. Evaluation methods are discussed, including formative and summative evaluation; and modifications…
Fundamentals of Library Automation and Technology. Participant Workbook.
ERIC Educational Resources Information Center
Bridge, Frank; Walton, Robert
This workbook presents outlines of topics to be covered during a two-day workshop on the fundamentals for library automation. Topics for the first day include: (1) Introduction; (2) Computer Technology--A Historical Overview; (3) Evolution of Library Automation; (4) Computer Hardware Technology--An Introduction; (5) Computer Software…
The Cost of Quality--Its Application to Libraries.
ERIC Educational Resources Information Center
Franklin, Brinley
1994-01-01
Examines the conceptual basis for the cost of quality and its application to libraries. The framework for analysis of this conceptual basis includes definitions of the cost of quality; a brief historical review of the cost of quality; and the application of quality cost to libraries, including an explanation of how quality costs respond to quality…
ParallABEL: an R library for generalized parallelization of genome-wide association studies
2010-01-01
Background Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Results Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation. The input data of this group includes pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Conclusions Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance, and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL. PMID:20429914
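For the first group of computations, per-SNP statistics are mutually independent, so parallelization is just a matter of partitioning the SNP axis across workers and concatenating the per-chunk outputs. ParallABEL itself is an R library built on Rmpi; the sketch below shows the same partition-compute-merge pattern in Python with assumed toy data:

```python
import numpy as np
from multiprocessing import Pool

def per_snp_stats(genotype_chunk):
    """Example per-SNP statistic: minor-allele frequency per column.
    Genotypes coded as 0/1/2 copies of the minor allele; rows = individuals."""
    return genotype_chunk.mean(axis=0) / 2.0

def parallel_snp_stats(genotypes, n_workers=4):
    chunks = np.array_split(genotypes, n_workers, axis=1)  # partition SNPs
    with Pool(n_workers) as pool:
        results = pool.map(per_snp_stats, chunks)          # compute chunks
    return np.concatenate(results)                         # merge outputs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.integers(0, 3, size=(200, 1000))   # 200 individuals x 1,000 SNPs
    print(parallel_snp_stats(g).shape)         # (1000,)
```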
NASA Astrophysics Data System (ADS)
Cunningham, Sally Jo
The current crop of digital libraries for the computing community is strongly grounded in the conventional library paradigm: they provide indexes to support searching of collections of research papers. As such, these digital libraries are relatively impoverished; the present computing digital libraries omit many of the documents and resources that are currently available to computing researchers, and offer few browsing structures. These computing digital libraries were built 'top down': the resources and collection contents are forced to fit an existing digital library architecture. A 'bottom up' approach to digital library development would begin with an investigation of a community's information needs and available documents, and then design a library to organize those documents in such a way as to fulfill the community's needs. The 'home grown', informal information resources developed by and for the machine learning community are examined as a case study, to determine the types of information and document organizations 'native' to this group of researchers. The insights gained in this type of case study can be used to inform construction of a digital library tailored to this community.
Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Schneider, Gisbert
2014-01-07
Using the example of the Ugi three-component reaction we report a fast and efficient microfluidic-assisted entry into the imidazopyridine scaffold, where building block prioritization was coupled to a new computational method for predicting ligand-target associations. We identified an innovative GPCR-modulating combinatorial chemotype featuring ligand-efficient adenosine A1/2B and adrenergic α1A/B receptor antagonists. Our results suggest the tight integration of microfluidics-assisted synthesis with computer-based target prediction as a viable approach to rapidly generate bioactivity-focused combinatorial compound libraries with high success rates. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mühlebach, Anneke; Adam, Joachim; Schön, Uwe
2011-11-01
Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. In order to guarantee effective project support, efficiency in the production of compound libraries has been maximized. As a consequence, throughput in chromatographic purification and analysis has also been adapted. As a recent trend, more laboratories are preparing smaller, yet more focused libraries with ever-increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach to combining effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not only considered individually but merged for fast and easy decision making, providing optimal quality of the compound stock. In comparison with the previous procedures, throughput times are at least four times faster, while compound consumption could be decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Academic Libraries and Quality: An Analysis and Evaluation Framework
ERIC Educational Resources Information Center
Atkinson, Jeremy
2017-01-01
The paper proposes and describes a framework for academic library quality to be used by new and more experienced library practitioners and by others involved in considering the quality of academic libraries' services and provision. The framework consists of eight themes and a number of questions to examine within each theme. The framework was…
NASA Astrophysics Data System (ADS)
Linsky, Jeffrey
2017-08-01
We propose to compute state-of-the-art model atmospheres (photospheres, chromospheres, transition regions, and coronae) of the four K-type and seven M-type exoplanet host stars observed by HST in the MUSCLES Treasury Survey, the nearest host star Proxima Centauri, and TRAPPIST-1. Our semi-empirical models will fit the unique high-resolution panchromatic (X-ray to infrared) spectra of these stars in the MAST High-Level Science Products archive, consisting of COS and STIS UV spectra and near-simultaneous Chandra, XMM-Newton, and ground-based observations. We will compute models with the fully tested SSRPM computer software incorporating 52 atoms and ions in full non-LTE (435,986 spectral lines) and the 20 most abundant diatomic molecules (about 2 million lines). This code has successfully fit the panchromatic spectrum of the M1.5 V exoplanet host star GJ 832 (Fontenla et al. 2016), the first M star with such a detailed model, as well as solar spectra. Our models will (1) predict the unobservable extreme-UV spectra, (2) determine radiative energy losses and balancing heating rates throughout these atmospheres, (3) compute a stellar irradiance library needed to describe the radiation environment of potentially habitable exoplanets to be studied by TESS and JWST, and (4) in the long post-HST era when UV observations will not be possible, provide a powerful tool for predicting the panchromatic spectra of host stars that have only limited spectral coverage, in particular no UV spectra. The stellar models and spectral irradiance library will be placed quickly in MAST.
Lin, Tung-Cheng; Hwang, Lih-Lian; Lai, Yung-Jye
2017-05-17
Previous studies have reported that credibility and content (argument quality) are the most critical factors affecting the quality of health information and its acceptance and use; however, this causal relationship merits further investigation in the context of health education. Moreover, message recipients' prior knowledge may moderate these relationships. This study used the elaboration likelihood model to determine the main effects of argument quality and source credibility, and the moderating effect of self-reported diabetes knowledge, on message attitudes. A between-subjects experimental design using an educational message concerning diabetes for manipulation was applied to validate the effects empirically. A total of 181 participants without diabetes were recruited from the Department of Health, Taipei City Government. Four group messages were manipulated in terms of argument quality (high and low) × source credibility (high and low). Argument quality and source credibility of health information significantly influenced the attitude of message recipients. Participants with high self-reported knowledge exhibited significant disapproval of messages with low argument quality. Effective health information should provide objective descriptions and cite reliable sources; in addition, it should provide accurate, customised messages for recipients who have a high background knowledge level and the ability to discern message quality. © 2017 Health Libraries Group Health Information & Libraries Journal.
Lavine, Barry K; White, Collin G; Allen, Matthew D; Fasasi, Ayuba; Weakley, Andrew
2016-10-01
A prototype library search engine has been further developed to search the infrared spectral libraries of the paint data query database to identify the line and model of a vehicle from the clear coat, surfacer-primer, and e-coat layers of an intact paint chip. For this study, search prefilters were developed from 1181 automotive paint systems spanning 3 manufacturers: General Motors, Chrysler, and Ford. The best match between each unknown and the spectra in the hit list generated by the search prefilters was identified using a cross-correlation library search algorithm that performed both a forward and backward search. In the forward search, spectra were divided into intervals and further subdivided into windows (which corresponds to the time lag for the comparison) within those intervals. The top five hits identified in each search window were compiled; a histogram was computed that summarized the frequency of occurrence for each library sample, with the IR spectra most similar to the unknown flagged. The backward search computed the frequency and occurrence of each line and model without regard to the identity of the individual spectra. Only those lines and models with a frequency of occurrence greater than or equal to 20% were included in the final hit list. If there was agreement between the forward and backward search results, the specific line and model common to both hit lists was always the correct assignment. Samples assigned to the same line and model by both searches are always well represented in the library and correlate well on an individual basis to specific library samples. For these samples, one can have confidence in the accuracy of the match. This was not the case for the results obtained using commercial library search algorithms, as the hit quality index scores for the top twenty hits were always greater than 99%. Copyright © 2016 Elsevier B.V. All rights reserved.
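The decision logic in this search, independent of the cross-correlation scoring itself, is a pair of votes: a forward tally over individual library spectra and a backward tally over line/model labels, with an assignment made only when the two agree. A hypothetical sketch of that tallying (the 20% cut-off comes from the abstract; the data structures and names are assumptions):

```python
from collections import Counter

def forward_vote(window_hits):
    """window_hits: list of top-5 library sample IDs per search window.
    Returns a histogram of how often each library sample appears."""
    return Counter(hit for window in window_hits for hit in window)

def backward_vote(window_hits, sample_to_model):
    """Tally line/model labels directly, ignoring sample identity."""
    return Counter(sample_to_model[hit]
                   for window in window_hits for hit in window)

def agreed_assignment(window_hits, sample_to_model, min_freq=0.20):
    fwd = forward_vote(window_hits)
    bwd = backward_vote(window_hits, sample_to_model)
    total = sum(bwd.values())
    # models passing the 20% frequency-of-occurrence cut-off
    bwd_models = {m for m, c in bwd.items() if c / total >= min_freq}
    # models implied by the most frequently flagged forward-search spectra
    fwd_models = {sample_to_model[s] for s, _ in fwd.most_common(5)}
    common = fwd_models & bwd_models
    return common.pop() if len(common) == 1 else None  # agreement -> assign
```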
Computer Science Professionals and Greek Library Science
ERIC Educational Resources Information Center
Dendrinos, Markos N.
2008-01-01
This paper attempts to present the current state of computer science penetration into librarianship in terms of both workplace and education issues. The shift from material libraries into digital libraries is mirrored in the corresponding shift from librarians into information scientists. New library data and metadata, as well as new automated…
Structure-based design of combinatorial mutagenesis libraries
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-01-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called “Structure-based Optimization of Combinatorial Mutagenesis” (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states. PMID:25611189
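The step that lets SOCoM avoid enumerating every variant is that, when each library position is sampled independently and uniformly from its allowed substitutions, the library-averaged energy of a pairwise-decomposable potential factorizes into averages of one-body and two-body terms. A toy sketch of that computation (the energy tables and names are assumptions, not the SOCoM code):

```python
import itertools
import numpy as np

def library_avg_energy(allowed, e1, e2):
    """Average energy over all variants of a combinatorial library.

    allowed: dict position -> list of allowed residues
    e1[(i, a)]: one-body energy of residue a at position i
    e2[(i, a, j, b)]: two-body energy of residues a@i and b@j
    Because each position is chosen independently and uniformly, the
    library average factorizes -- no need to enumerate every variant."""
    positions = sorted(allowed)
    avg = sum(np.mean([e1[(i, a)] for a in allowed[i]]) for i in positions)
    for i, j in itertools.combinations(positions, 2):
        avg += np.mean([e2[(i, a, j, b)]
                        for a in allowed[i] for b in allowed[j]])
    return avg

# Toy usage: two positions, three possible variants in the library
allowed = {1: ["A", "V"], 2: ["L"]}
e1 = {(1, "A"): -1.0, (1, "V"): -0.5, (2, "L"): -2.0}
e2 = {(1, "A", 2, "L"): 0.1, (1, "V", 2, "L"): -0.3}
print(library_avg_energy(allowed, e1, e2))   # -2.85
```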
Workplace management of upper limb disorders: a systematic review.
Dick, F D; Graveling, R A; Munro, W; Walker-Bone, K
2011-01-01
Upper limb pain is common among working-aged adults and a frequent cause of absenteeism. To systematically review the evidence for workplace interventions in four common upper limb disorders. Systematic review of English articles using Medline, Embase, Cinahl, AMED, Physiotherapy Evidence Database PEDro (carpal tunnel syndrome and non-specific arm pain only) and Cochrane Library. Study inclusion criteria were randomized controlled trials, cohort studies or systematic reviews employing any workplace intervention for workers with carpal tunnel syndrome, non-specific arm pain, extensor tenosynovitis or lateral epicondylitis. Papers were selected by a single reviewer and appraised by two reviewers independently using methods based on Scottish Intercollegiate Guidelines Network (SIGN) methodology. 1532 abstracts were identified, 28 papers critically appraised and four papers met the minimum quality standard (SIGN grading + or ++) for inclusion. There was limited evidence that computer keyboards with altered force displacement characteristics or altered geometry were effective in reducing carpal tunnel syndrome symptoms. There was limited, but high quality, evidence that multi-disciplinary rehabilitation for non-specific musculoskeletal arm pain was beneficial for those workers absent from work for at least four weeks. In adults with tenosynovitis there was limited evidence that modified computer keyboards were effective in reducing symptoms. There was a lack of high quality evidence to inform workplace management of lateral epicondylitis. Further research is needed focusing on occupational management of upper limb disorders. Where evidence exists, workplace outcomes (e.g. successful return to pre-morbid employment; lost working days) are rarely addressed.
Books, Bytes, and Bridges: Libraries and Computer Centers in Academic Institutions.
ERIC Educational Resources Information Center
Hardesty, Larry, Ed.
This book about the relationship between computer centers and libraries at academic institutions contains the following chapters: (1) "A History of the Rhetoric and Reality of Library and Computing Relationships" (Peggy Seiden and Michael D. Kathman); (2) "An Issue in Search of a Metaphor: Readings on the Marriageability of…
The "Magic" of Wireless Access in the Library
ERIC Educational Resources Information Center
Balas, Janet L.
2006-01-01
It seems that the demand for public access computers grows exponentially every time a library network is expanded, making it impossible to ever have enough computers available for patrons. One solution that many libraries are implementing to ease the demand for public computer use is to offer wireless technology that allows patrons to bring in…
Computer Software: Copyright and Licensing Considerations for Schools and Libraries. ERIC Digest.
ERIC Educational Resources Information Center
Reed, Mary Hutchings
This digest notes that the terms and conditions of computer software package license agreements control the use of software in schools and libraries, and examines the implications of computer software license agreements for classroom use and for library lending policies. Guidelines are provided for interpreting the Copyright Act and ensuring the…
Determination of a Screening Metric for High Diversity DNA Libraries.
Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A
2016-01-01
The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and, therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" probability theory to construct a curve of upper-bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
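In the idealized case of n equally likely variants, the coupon-collector result gives the expected number of screened clones needed to observe every variant as n·H_n, where H_n ≈ ln n + 0.577 is the n-th harmonic number; the published metric refines this with measured variant frequencies. A sketch under that uniform assumption:

```python
import math

def expected_draws_all_variants(n):
    """Coupon-collector expectation: draws needed to observe all n
    equally likely variants is n * H_n (H_n = n-th harmonic number)."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return n * harmonic

def expected_fraction_covered(n, draws):
    """Expected fraction of n uniform variants seen after `draws` samples:
    1 - (1 - 1/n)^draws per variant, by linearity of expectation."""
    return 1.0 - (1.0 - 1.0 / n) ** draws

n = 10_000
print(expected_draws_all_variants(n) / n)   # oversampling ~ ln(n) + 0.577
print(expected_fraction_covered(n, 3 * n))  # ~0.95 coverage at 3x sampling
```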
Rethinking Customer Service Training: A Curricular Solution to a Familiar Problem
ERIC Educational Resources Information Center
Epps, Sharon K.; Kidd, Judith; Negro, Toni; Sayles, Sheridan L.
2016-01-01
High-quality customer service is an important aim of the library experience. Its importance is evidenced by attention given to the topic in scholarly literature and academic conference proceedings. This article describes the challenging process of creating and delivering a blended customer service training curriculum to all library staff working…
A Highly Functional Synthetic Phage Display Library Containing over 40 Billion Human Antibody Clones
Weber, Marcel; Bujak, Emil; Putelli, Alessia; Villa, Alessandra; Matasci, Mattia; Gualandi, Laura; Hemmerle, Teresa; Wulhfard, Sarah; Neri, Dario
2014-01-01
Several synthetic antibody phage display libraries have been created and used for the isolation of human monoclonal antibodies. The performance of antibody libraries, which is usually measured in terms of their ability to yield high-affinity binding specificities against target proteins of interest, depends both on technical aspects (such as library size and quality of cloning) and on design features (which influence the percentage of functional clones in the library and their ability to be used for practical applications). Here, we describe the design, construction and characterization of a combinatorial phage display library, comprising over 40 billion human antibody clones in single-chain fragment variable (scFv) format. The library was designed with the aim to obtain highly stable antibody clones, which can be affinity-purified on protein A supports, even when used in scFv format. The library was found to be highly functional, as >90% of randomly selected clones expressed the corresponding antibody. When selected against more than 15 antigens from various sources, the library always yielded specific and potent binders, at a higher frequency compared to previous antibody libraries. To demonstrate library performance in practical biomedical research projects, we isolated the human antibody G5, which reacts both against human and murine forms of the alternatively spliced BCD segment of tenascin-C, an extracellular matrix component frequently over-expressed in cancer and in chronic inflammation. The new library represents a useful source of binding specificities, both for academic research and for the development of antibody-based therapeutics. PMID:24950200
SeSaM-Tv-II generates a protein sequence space that is unobtainable by epPCR.
Mundhada, Hemanshu; Marienhagen, Jan; Scacioc, Andreea; Schenk, Alexander; Roccatano, Danilo; Schwaneberg, Ulrich
2011-07-04
Generating high-quality mutant libraries in which each amino acid is equally targeted and substituted in a chemically diverse manner is crucial to obtain improved variants in small mutant libraries. The sequence saturation mutagenesis method (SeSaM-Tv+) offers the opportunity to generate such high-quality mutant libraries by introducing consecutive mutations and by enriching transversions. In this study, automated gel electrophoresis, real-time quantitative PCR, and a phosphorimager quantification system were developed and employed to optimize each step of the previously reported SeSaM-Tv+ method. Advancements of the SeSaM-Tv+ protocol and the use of a novel DNA polymerase quadrupled the number of transversions, by doubling the fraction of consecutive mutations (from 16.7% to 37.1%). About 33% of all amino acid substitutions observed in a model library are rarely introduced by epPCR methods, and around 10% of all clones carried amino acid substitutions that are unobtainable by epPCR. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Microbiological quality of indoor air in university libraries.
Hayleeyesus, Samuel Fekadu; Manaye, Abayneh Melaku
2014-05-01
To evaluate the concentration of bacteria and fungi in the indoor environment of Jimma University libraries, so as to estimate the health hazard and to create standards for indoor air quality control. The microbial quality of the indoor air of eight libraries of Jimma University was determined. The settle plate method, using open Petri dishes containing different culture media, was employed to collect samples twice daily. Isolates were identified according to standard methods. The concentrations of bacterial and fungal aerosols in the indoor environment of the university libraries ranged between 367 and 2,595 CFU/m³. According to the sanitary standards classification of the European Commission, the indoor air of almost all the libraries of Jimma University was heavily contaminated with bacteria and fungi. In spite of their differing major sources, the average fungal density in the indoor air of the libraries followed the same trend as the bacterial density (P=0.001). The bacterial isolates included Micrococcus sp., Staphylococcus aureus, Streptococcus pyogenes, Bacillus sp. and Neisseria sp., while Cladosporium sp., Alternaria sp., Penicillium sp. and Aspergillus sp. were the most frequently isolated fungi. The indoor air of all libraries was above the 'highly contaminated' threshold of the European Commission classification, and most of the isolates are considered potential candidates involved in the establishment of sick building syndrome, often associated with clinical manifestations such as allergy, rhinitis, asthma and conjunctivitis. Thus, attention must be given to controlling the environmental factors that favor the growth and multiplication of microbes in the indoor environment of libraries, to safeguard the health of users and workers.
1982-01-29
Computer Program User's Manual for FIREFINDER Digital Topographic Data Verification Library Dubbing System, Volume II: Dubbing. Keywords: Software Library; FIREFINDER; Dubbing. This manual describes the computer…
The NISTmAb tryptic peptide spectral library for monoclonal antibody characterization.
Dong, Qian; Liang, Yuxue; Yan, Xinjian; Markey, Sanford P; Mirokhin, Yuri A; Tchekhovskoi, Dmitrii V; Bukhari, Tallat H; Stein, Stephen E
2018-04-01
We describe the creation of a mass spectral library composed of all identifiable spectra derived from the tryptic digest of the NISTmAb IgG1κ. The library is a unique reference spectral collection developed from over six million peptide-spectrum matches acquired by liquid chromatography-mass spectrometry (LC-MS) over a wide range of collision energy. Conventional one-dimensional (1D) LC-MS was used for various digestion conditions and 20- and 24-fraction two-dimensional (2D) LC-MS studies permitted in-depth analyses of single digests. Computer methods were developed for automated analysis of LC-MS isotopic clusters to determine the attributes for all ions detected in the 1D and 2D studies. The library contains a selection of over 12,600 high-quality tandem spectra of more than 3,300 peptide ions identified and validated by accurate mass, differential elution pattern, and expected peptide classes in peptide map experiments. These include a variety of biologically modified peptide spectra involving glycosylated, oxidized, deamidated, glycated, and N/C-terminal modified peptides, as well as artifacts. A complete glycation profile was obtained for the NISTmAb with spectra for 58% and 100% of all possible glycation sites in the heavy and light chains, respectively. The site-specific quantification of methionine oxidation in the protein is described. The utility of this reference library is demonstrated by the analysis of a commercial monoclonal antibody (adalimumab, Humira®), where 691 peptide ion spectra are identifiable in the constant regions, accounting for 60% coverage for both heavy and light chains. The NIST reference library platform may be used as a tool for facile identification of the primary sequence and post-translational modifications, as well as the recognition of LC-MS method-induced artifacts for human and recombinant IgG antibodies. Its development also provides a general method for creating comprehensive peptide libraries of individual proteins.
Scheduling language and algorithm development study. Volume 1: Study summary and overview
NASA Technical Reports Server (NTRS)
1974-01-01
A high level computer programming language and a program library were developed to be used in writing programs for scheduling complex systems such as the space transportation system. The objectives and requirements of the study are summarized and unique features of the specified language and program library are described and related to the why of the objectives and requirements.
Bolef, D
1975-01-01
After ten years of experimentation in computer-assisted cataloging, the Washington University School of Medicine Library has decided to join the Ohio College Library Center network. The history of the library's work preceding this decision is reviewed. The data processing equipment and computers that have permitted librarians to explore different ways of presenting cataloging information are discussed. Certain cataloging processes are facilitated by computer manipulation and printouts, but the intellectual cataloging processes such as descriptive and subject cataloging are not. Networks and shared bibliographic data bases show promise of eliminating the intellectual cataloging for one book by more than one cataloger. It is in this area that future developments can be expected. PMID:1148442
Wang, Jian; Evans, Julian R G
2005-01-01
This paper describes the design, construction, and operation of the London University Search Instrument (LUSI) which was recently commissioned to create and test combinatorial libraries of ceramic compositions. The instrument uses commercially available powders, milled as necessary to create thick-film libraries by ink-jet printing. Multicomponent mixtures are prepared by well plate reformatting of ceramic inks. The library tiles are robotically loaded into a flatbed furnace and, when fired, transferred to a 2-axis high-resolution measurement table fitted with a hot plate where measurements of, for example, optical or electrical properties can be made. Data are transferred to a dedicated high-performance computer. The possibilities for remote interrogation and search steering are discussed.
ERIC Educational Resources Information Center
Anderson, Greg; And Others
1996-01-01
Describes the Computer Science Technical Report Project, one of the earliest investigations into the system engineering of digital libraries which pioneered multiinstitutional collaborative research into technical, social, and legal issues related to the development and implementation of a large, heterogeneous, distributed digital library. (LRW)
ERIC Educational Resources Information Center
Nixon, Carol, Comp.
This book contains presentations from the 17th annual Computers in Libraries Conference. Topics covered include: chatting with a librarian; verbots for library Web sites; collaborative IT (Information Technology) planning at Montgomery County Public Library (Maryland); designing a local government taxonomy; Weblogs; new roles for librarians in…
A Study of Cooperative, Networking, and Computer Activities in Southwestern Libraries.
ERIC Educational Resources Information Center
Corbin, John
The Southwestern Library Association (SWLA) conducted an inventory and study of the SWLA libraries in cooperative, network, and computer activities to collect data for use in planning future activities and in minimizing duplication of efforts. Questionnaires were mailed to 2,060 academic, public, and special libraries in the six SWLA states.…
Townsley, Brad T; Covington, Michael F; Ichihashi, Yasunori; Zumstein, Kristina; Sinha, Neelima R
2015-01-01
Next Generation Sequencing (NGS) is driving rapid advancement in biological understanding and RNA-sequencing (RNA-seq) has become an indispensable tool for biology and medicine. There is a growing need for access to these technologies although preparation of NGS libraries remains a bottleneck to wider adoption. Here we report a novel method for the production of strand specific RNA-seq libraries utilizing the terminal breathing of double-stranded cDNA to capture and incorporate a sequencing adapter. Breath Adapter Directional sequencing (BrAD-seq) reduces sample handling and requires far fewer enzymatic steps than most available methods to produce high quality strand-specific RNA-seq libraries. The method we present is optimized for 3-prime Digital Gene Expression (DGE) libraries and can easily extend to full transcript coverage shotgun (SHO) type strand-specific libraries and is modularized to accommodate a diversity of RNA and DNA input materials. BrAD-seq offers a highly streamlined and inexpensive option for RNA-seq libraries.
Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.
Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B
2012-09-15
Early drug discovery processes rely on hit-finding procedures followed by extensive experimental confirmation in order to select high-priority hit series, which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits, and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration. Copyright © 2012 Elsevier Ltd. All rights reserved.
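Grouping hits into series typically starts from fingerprint similarity, e.g. Tanimoto similarity with a fixed cut-off. A minimal, library-free sketch of greedy single-linkage clustering over bit-set fingerprints (the 0.7 cut-off and the toy fingerprints are assumptions; real pipelines would use a cheminformatics toolkit such as RDKit):

```python
def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_hits(fingerprints, cutoff=0.7):
    """Greedy single-linkage clustering: each compound joins the first
    cluster containing a neighbor above the similarity cut-off."""
    clusters = []
    for i, fp in enumerate(fingerprints):
        for cluster in clusters:
            if any(tanimoto(fp, fingerprints[j]) >= cutoff for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy usage: 4 compounds, bits standing in for substructure features
fps = [{1, 2, 3, 4}, {1, 2, 3, 4, 5}, {7, 8, 9}, {7, 8, 9, 10}]
print(cluster_hits(fps))   # [[0, 1], [2, 3]]
```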
Technical Note: spektr 3.0, a computational tool for x-ray spectrum modeling and analysis.
Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H
2016-08-01
A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP, and to account for factors such as anode angle. The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage differences arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra, with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
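The filtration-matching optimization can be pictured as attenuating a model spectrum by Beer-Lambert factors exp(-μ(E)·t) and fitting the thickness t that reproduces the measured tube output. A schematic Python sketch of that fit (spektr itself is a MATLAB toolkit; the spectrum, attenuation coefficients, and exposure proxy below are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

E = np.arange(20, 121)                          # keV energy bins
spectrum = np.exp(-((E - 60.0) / 25.0) ** 2)    # toy unfiltered spectrum
mu_al = 0.5 * (30.0 / E) ** 3                   # toy Al attenuation (1/mm)

def tube_output(t_mm):
    """Exposure proxy: total filtered fluence (stand-in for mGy/mAs)."""
    return np.sum(spectrum * np.exp(-mu_al * t_mm))

measured = 0.6 * tube_output(0.0)               # pretend measurement

# Fit the added Al thickness that reproduces the measured output
res = minimize_scalar(lambda t: (tube_output(t) - measured) ** 2,
                      bounds=(0.0, 10.0), method="bounded")
print(f"fitted added filtration: {res.x:.2f} mm Al")
```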
Data exploration, quality control and statistical analysis of ChIP-exo/nexus experiments.
Welch, Rene; Chung, Dongjun; Grass, Jeffrey; Landick, Robert; Keles, Sündüz
2017-09-06
ChIP-exo/nexus experiments rely on innovative modifications of the commonly used ChIP-seq protocol for high resolution mapping of transcription factor binding sites. Although many aspects of the ChIP-exo data analysis are similar to those of ChIP-seq, these high throughput experiments pose a number of unique quality control and analysis challenges. We develop a novel statistical quality control pipeline and accompanying R/Bioconductor package, ChIPexoQual, to enable exploration and analysis of ChIP-exo and related experiments. ChIPexoQual evaluates a number of key issues including strand imbalance, library complexity, and signal enrichment of data. Assessment of these features are facilitated through diagnostic plots and summary statistics computed over regions of the genome with varying levels of coverage. We evaluated our QC pipeline with both large collections of public ChIP-exo/nexus data and multiple, new ChIP-exo datasets from Escherichia coli. ChIPexoQual analysis of these datasets resulted in guidelines for using these QC metrics across a wide range of sequencing depths and provided further insights for modelling ChIP-exo data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
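Two of the QC quantities named above have simple per-region definitions: library complexity as the fraction of distinct read positions, and strand imbalance as the forward-strand fraction. A toy sketch of both (ChIPexoQual itself is an R/Bioconductor package; the read representation here is assumed):

```python
def library_complexity(read_positions):
    """Fraction of reads that are distinct (position, strand) pairs;
    values near 1 suggest a complex, low-duplication library."""
    return len(set(read_positions)) / len(read_positions)

def strand_imbalance(read_positions):
    """Fraction of forward-strand reads in a region; extreme values can
    flag problematic regions in ChIP-exo/nexus data."""
    fwd = sum(1 for _, strand in read_positions if strand == "+")
    return fwd / len(read_positions)

reads = [(100, "+"), (100, "+"), (105, "-"), (110, "+"), (112, "-")]
print(library_complexity(reads), strand_imbalance(reads))   # 0.8 0.6
```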
Administrative and Bibliographic Uses of COM (Computer Output Microfilm) in an Academic Library.
ERIC Educational Resources Information Center
Gillham, Virginia; Black, John B.
Computer output microfilm/fiche (COM) combines the speed and laborsaving aspects of computer-based systems with the economy and physical compactness of microforms to provide the medium of the future for library management and information retrieval. The traditional card catalog and printed lists found in every library can be replaced in multiple…
Experiences Using an Open Source Software Library to Teach Computer Vision Subjects
ERIC Educational Resources Information Center
Cazorla, Miguel; Viejo, Diego
2015-01-01
Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from scratch with emphasis on readability and…
Automated software development workstation
NASA Technical Reports Server (NTRS)
Prouty, Dale A.; Klahr, Philip
1988-01-01
A workstation is being developed that provides a computational environment for all NASA engineers across application boundaries, automates reuse of existing NASA software and designs, and efficiently and effectively allows new programs and/or designs to be developed, catalogued, and reused. The generic workstation is made domain-specific by specializing the user interface, capturing engineering design expertise for the domain, and constructing/using a library of pertinent information. The incorporation of software reusability principles and expert system technology into this workstation provides the obvious benefits of increased productivity, improved software use and design reliability, and enhanced engineering quality by bringing engineering to higher levels of abstraction based on a well-tested and classified library.
DMG-α--a computational geometry library for multimolecular systems.
Szczelina, Robert; Murzyn, Krzysztof
2014-11-24
The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to open-source, easy-to-use, and intuitive software for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape-property recognition using α-shape theory, and performing exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses that determined nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers.
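Of the capabilities listed, SASA is the easiest to sketch: the classic Shrake-Rupley scheme samples points on each atom's solvent-expanded sphere and keeps the fraction not buried inside any neighbor. A numerical approximation in that spirit (DMG-α computes SASA exactly; the point count, radii, and probe size below are illustrative):

```python
import numpy as np

def sphere_points(n=200):
    """Roughly uniform points on a unit sphere (golden-spiral method)."""
    k = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * k / n)
    theta = np.pi * (1 + 5 ** 0.5) * k
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def sasa(centers, radii, probe=1.4, n=200):
    """Shrake-Rupley approximation of solvent accessible surface area."""
    pts = sphere_points(n)
    total = 0.0
    for i, (c, r) in enumerate(zip(centers, radii)):
        surf = c + (r + probe) * pts          # test points on expanded sphere
        exposed = np.ones(n, dtype=bool)
        for j, (c2, r2) in enumerate(zip(centers, radii)):
            if j != i:                         # buried if inside a neighbor
                exposed &= np.linalg.norm(surf - c2, axis=1) > (r2 + probe)
        area = 4 * np.pi * (r + probe) ** 2
        total += area * exposed.mean()         # exposed fraction of sphere
    return total

atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(sasa(atoms, np.array([1.7, 1.7])))       # two overlapping carbons
```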
ERIC Educational Resources Information Center
Hebert, Francoise
1994-01-01
Describes a study that investigated the quality of interlibrary loan services in Canadian public libraries from the library's and the user's perspectives and then compared results. Measures of interlibrary loan performance are reviewed; an alternative conceptualization of service quality is discussed; and SERVQUAL, a measure of service quality, is…
Orientation and Functions of Library in Quality Education of College
ERIC Educational Resources Information Center
Yang, Lan
2011-01-01
Quality education is the core of college education. Libraries are the second class for students due to the extremely important position and function in quality education. Libraries are the best place for cultivating students' morals, the important front for improving students' scientific and cultural qualities, and the effective facilities for…
Highly Productive Application Development with ViennaCL for Accelerators
NASA Astrophysics Data System (ADS)
Rupp, K.; Weinbub, J.; Rudolf, F.
2012-12-01
The use of graphics processing units (GPUs) for the acceleration of general purpose computations has become very attractive over the last years, and accelerators based on many integrated CPU cores are about to hit the market. However, there are discussions about the benefit of GPU computing when comparing the reduction of execution times with the increased development effort [1]. To counter these concerns, our open-source linear algebra library ViennaCL [2,3] uses modern programming techniques such as generic programming in order to provide a convenient access layer for accelerator and GPU computing. Other GPU-accelerated libraries are primarily tuned for performance, but less tailored to productivity and portability: MAGMA [4] provides dense linear algebra operations via a LAPACK-comparable interface, but no dedicated matrix and vector types. Cusp [5] is closest in functionality to ViennaCL for sparse matrices, but is based on CUDA and thus restricted to devices from NVIDIA. However, no convenience layer for dense linear algebra is provided with Cusp. ViennaCL is written in C++ and uses OpenCL to access the resources of accelerators, GPUs and multi-core CPUs in a unified way. On the one hand, the library provides iterative solvers from the family of Krylov methods, including various preconditioners, for the solution of linear systems typically obtained from the discretization of partial differential equations. On the other hand, dense linear algebra operations are supported, including algorithms such as QR factorization and singular value decomposition. The user application interface of ViennaCL is compatible to uBLAS [6], which is part of the peer-reviewed Boost C++ libraries [7]. This allows to port existing applications based on uBLAS with a minimum of effort to ViennaCL. Conversely, the interface compatibility allows to use the iterative solvers from ViennaCL with uBLAS types directly, thus enabling code reuse beyond CPU-GPU boundaries. Out-of-the-box support for types from the Eigen library [8] and MTL 4 [9] are provided as well, enabling a seamless transition from single-core CPU to GPU and multi-core CPU computations. Case studies from the numerical solution of PDEs are given and isolated performance benchmarks are discussed. Also, pitfalls in scientific computing with GPUs and accelerators are addressed, allowing for a first evaluation of whether these novel devices can be mapped well to certain applications. References: [1] R. Bordawekar et al., Technical Report, IBM, 2010 [2] ViennaCL library. Online: http://viennacl.sourceforge.net/ [3] K. Rupp et al., GPUScA, 2010 [4] MAGMA library. Online: http://icl.cs.utk.edu/magma/ [5] Cusp library. Online: http://code.google.com/p/cusp-library/ [6] uBLAS library. Online: http://www.boost.org/libs/numeric/ublas/ [7] Boost C++ Libraries. Online: http://www.boost.org/ [8] Eigen library. Online: http://eigen.tuxfamily.org/ [9] MTL 4 Library. Online: http://www.mtl4.org/
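The Krylov methods at the heart of such solver libraries fit in a few lines; conjugate gradients, for instance, needs one matrix-vector product per iteration, which is precisely the operation a GPU backend accelerates. A reference NumPy sketch on a toy 1D Poisson system (a textbook implementation, not ViennaCL's C++/OpenCL code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A. Each iteration
    needs one matrix-vector product -- the step GPU libraries speed up."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson system (tridiagonal SPD), typical of PDE discretizations
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # ~ 0
```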
ERIC Educational Resources Information Center
Masuchika, Glenn; Boldt, Gail
2010-01-01
American graphic novels are increasingly recognized as high-quality literature and an interesting genre for academic study. Graphic novels of Japan, called manga, have established a strong foothold in American culture. This preliminary survey of 44 United States university libraries demonstrates that Japanese manga in translation are consistently…
Library and information services: impact on patient care quality.
Marshall, Joanne Gard; Morgan, Jennifer Craft; Thompson, Cheryl A; Wells, Amber L
2014-01-01
The purpose of this paper is to explore library and information service impact on patient care quality. A large-scale critical incident survey of physicians and residents at 56 library sites serving 118 hospitals in the USA and Canada. Respondents were asked to base their answers on a recent incident in which they had used library resources to search for information related to a specific clinical case. Of 4,520 respondents, 75 percent said that they definitely or probably handled patient care differently using information obtained through the library. In a multivariate analysis, three summary clinical outcome measures were used as value and impact indicators: first, time saved; second, patient care changes; and third, adverse events avoided. The outcomes were examined in relation to four information access methods: first, asking librarian for assistance; second, performing search in a physical library; third, searching library's web site; or fourth, searching library resources on an institutional intranet. All library access methods had consistently positive relationships with the clinical outcomes, providing evidence that library services have a positive impact on patient care quality. Electronic collections and services provided by the library and the librarian contribute to patient care quality.
Scheduling language and algorithm development study. Appendix: Study approach and activity summary
NASA Technical Reports Server (NTRS)
1974-01-01
The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.
A Computer Analysis of Library Postcards. (CALP)
ERIC Educational Resources Information Center
Stevens, Norman D.
1974-01-01
A description of a sophisticated application of computer techniques to the analysis of a collection of picture postcards of library buildings in an attempt to establish the minimum architectural requirements needed to distinguish one style of library building from another. (Author)
ERIC Educational Resources Information Center
Uwaifo, Stephen Osahon
2008-01-01
Purpose: The paper seeks to examine the health risks faced when using computer-based systems by library staff in Nigerian libraries. Design/methodology/approach: The paper uses a survey research approach to carry out this investigation. Findings: The investigation reveals that the perceived health risk does not predict perceived ease of use of…
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
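Since FLANN is incorporated into OpenCV, the randomized k-d forest, one of the two algorithms the paper identifies as most efficient, can be tried in a few lines. A hedged sketch with illustrative parameters and random vectors standing in for real feature descriptors:

    import numpy as np
    import cv2

    train = np.random.rand(1000, 128).astype(np.float32)   # database descriptors
    query = np.random.rand(10, 128).astype(np.float32)     # descriptors to match

    FLANN_INDEX_KDTREE = 1                                 # randomized k-d forest
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)   # leaves to visit; trades accuracy for speed

    matcher = cv2.FlannBasedMatcher(index_params, search_params)
    matches = matcher.knnMatch(query, train, k=2)

    # Keep only distinctive matches via the customary ratio test
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

The checks and trees values are exactly the sort of data-dependent parameters the paper's automated configuration procedure is designed to choose.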
Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.
Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A
2018-03-08
Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low yield libraries. Reamplification of low yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling and without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time with minimal modifications.
ERIC Educational Resources Information Center
McKimmie, Tim; Smith, Jeanette
1994-01-01
Presents an overview of the issues related to extremely low frequency (ELF) radiation from computer video display terminals. Highlights include electromagnetic fields; measuring ELF; computer use in libraries; possible health effects; electromagnetic radiation; litigation and legislation; standards and safety; and what libraries can do. (Contains…
Construction of a scFv Library with Synthetic, Non-combinatorial CDR Diversity.
Bai, Xuelian; Shim, Hyunbo
2017-01-01
Many large synthetic antibody libraries have been designed and constructed, and have successfully generated high-quality antibodies suitable for various demanding applications. While synthetic antibody libraries have many advantages, such as optimized framework sequences and a broader sequence landscape than natural antibodies, their sequence diversities typically are generated by random combinatorial synthetic processes, which cause the incorporation of many undesired CDR sequences. Here, we describe the construction of a synthetic scFv library using oligonucleotide mixtures that contain predefined, non-combinatorially synthesized CDR sequences. Each CDR is first inserted into a master scFv framework sequence and the resulting single-CDR libraries are subjected to a round of proofread panning. The proofread CDR sequences are assembled to produce the final scFv library with six diversified CDRs.
Quality Management and Building Government Information Services.
ERIC Educational Resources Information Center
Farrell, Maggie
1998-01-01
Discusses serving library-patron needs in terms of customer service and quality control. Highlights include tools for measuring the quality of service (e.g., the SERVQUAL survey), advisory boards or focus groups, library "service statements," changing patron needs, new information formats, and justifying depository library services. (JAK)
SOI layout decomposition for double patterning lithography on high-performance computer platforms
NASA Astrophysics Data System (ADS)
Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir
2014-12-01
In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
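Although the paper's own implementation is not shown here, decomposition for double patterning reduces to two-coloring a contradiction (conflict) graph, which a breadth-first search does directly. The Python sketch below is a generic illustration under the assumption that an edge connects any two polygons closer than the minimum same-mask spacing; edges that close an odd cycle are reported as unresolvable conflicts:

    from collections import deque

    def two_color(conflict_graph):
        """Assign each polygon to one of two masks by BFS over the conflict
        graph (dict mapping every polygon to its conflicting neighbours);
        edges closing an odd cycle are reported as unresolvable."""
        colors, conflicts = {}, []
        for start in conflict_graph:
            if start in colors:
                continue
            colors[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in conflict_graph[u]:
                    if v not in colors:
                        colors[v] = 1 - colors[u]
                        queue.append(v)
                    elif colors[v] == colors[u] and (v, u) not in conflicts:
                        conflicts.append((u, v))   # odd cycle: needs a layout fix
        return colors, conflicts

    # Toy graph: polygons A, B, C mutually conflict (odd cycle); D-E are fine
    g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"], "D": ["E"], "E": ["D"]}
    print(two_color(g))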
How enhanced molecular ions in Cold EI improve compound identification by the NIST library.
Alon, Tal; Amirav, Aviv
2015-12-15
Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library search algorithm (such as NIST's) compares EI mass spectra in the library's database with the measured EI mass spectrum, assigning each a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced molecular ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as library identification probabilities for the correct library mass spectra increase despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. This behavior, which was previously observed by analyzing Cold EI mass spectra, can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound more than low-mass ions and therefore carry more weight in library search identification algorithms. These ions are uniquely abundant in Cold EI, which therefore enables enhanced compound characterization along with improved NIST library based identification. Copyright © 2015 John Wiley & Sons, Ltd.
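A hedged sketch of the scoring idea discussed above: an m/z-weighted dot-product match in the general spirit of Stein and Scott's algorithm, where the m/z exponent is what gives molecular ions and high-mass fragments their extra weight. The exponent values below are illustrative tuning parameters, not NIST's actual settings:

    import numpy as np

    def match_factor(spec_a, spec_b, mz_power=2.0, int_power=0.5, max_mz=1000):
        """m/z-weighted squared-cosine match between two stick spectra, each a
        dict {integer m/z: intensity}. Larger mz_power gives heavier ions more
        weight, which is why enhanced molecular ions sharpen discrimination."""
        def vec(spec):
            v = np.zeros(max_mz)
            for mz, inten in spec.items():
                if mz < max_mz:
                    v[mz] = (mz ** mz_power) * (inten ** int_power)
            return v
        a, b = vec(spec_a), vec(spec_b)
        return 999.0 * (a @ b) ** 2 / ((a @ a) * (b @ b))  # scaled like a 'Match'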
Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M
2018-01-09
Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and which has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.
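The BP machinery itself is compact enough to sketch. Below is a generic sum-product loopy BP on a pairwise model in Python (not the authors' code): nodes would correspond to side chains, states to entries of a discretized rotamer library, and the unary and pairwise factors to Boltzmann weights of the precalculated interaction energies:

    import numpy as np

    def loopy_bp(unary, pair, n_iter=50):
        """Minimal sum-product loopy BP on a pairwise model. unary maps node ->
        per-state factors; pair maps (i, j) -> matrix psi[x_i, x_j], one entry
        per undirected edge. Returns approximate per-node marginals (beliefs)."""
        msgs, nbrs = {}, {}
        for i, j in pair:
            nbrs.setdefault(i, []).append(j)
            nbrs.setdefault(j, []).append(i)
            msgs[(i, j)] = np.ones(len(unary[j]))
            msgs[(j, i)] = np.ones(len(unary[i]))
        psi = lambda i, j: pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
        for _ in range(n_iter):
            new = {}
            for i, j in msgs:
                prod = unary[i].copy()
                for k in nbrs[i]:
                    if k != j:
                        prod = prod * msgs[(k, i)]
                m = psi(i, j).T @ prod        # marginalize over x_i
                new[(i, j)] = m / m.sum()     # normalize for stability
            msgs = new
        beliefs = {}
        for i in unary:
            b = unary[i].copy()
            for k in nbrs.get(i, []):
                b = b * msgs[(k, i)]
            beliefs[i] = b / b.sum()
        return beliefs

    # Three side chains with two rotamer states each, coupled in a loopy
    # triangle; factors stand in for Boltzmann weights exp(-E/kT).
    unary = {0: np.array([1.0, 0.5]), 1: np.array([0.7, 1.0]), 2: np.array([1.0, 1.0])}
    coupling = np.array([[1.0, 0.2], [0.2, 1.0]])
    pair = {(0, 1): coupling, (1, 2): coupling, (0, 2): coupling}
    print(loopy_bp(unary, pair))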
ERIC Educational Resources Information Center
Harris-Keith, Colleen Susan
2015-01-01
Though research into academic library director leadership has established leadership skills and qualities required for success, little research has been done to establish where in their career library directors were most likely to acquire those skills and qualities. This research project surveyed academic library directors at Carnegie-designated…
A new strategy for genome assembly using short sequence reads and reduced representation libraries.
Young, Andrew L; Abaan, Hatice Ozel; Zerbino, Daniel; Mullikin, James C; Birney, Ewan; Margulies, Elliott H
2010-02-01
We have developed a novel approach for using massively parallel short-read sequencing to generate fast and inexpensive de novo genomic assemblies comparable to those generated by capillary-based methods. The ultrashort (<100 base) sequences generated by this technology pose specific biological and computational challenges for de novo assembly of large genomes. To account for this, we devised a method for experimentally partitioning the genome using reduced representation (RR) libraries prior to assembly. We use two restriction enzymes independently to create a series of overlapping fragment libraries, each containing a tractable subset of the genome. Together, these libraries allow us to reassemble the entire genome without the need for a reference sequence. As proof of concept, we applied this approach to sequence and assemble the majority of the 125-Mb Drosophila melanogaster genome. We subsequently demonstrate the accuracy of our assembly method with meaningful comparisons against the currently available D. melanogaster reference genome (dm3). The ease of assembly and accuracy for comparative genomics suggest that our approach will scale to future mammalian genome-sequencing efforts, saving both time and money without sacrificing quality.
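The partitioning idea is simple to sketch in silico: digest with one enzyme, keep fragments in a tractable size window, and repeat with a second enzyme so the two libraries overlap. A hedged Python illustration (not the authors' code; the recognition site, size window, and the simplification that cuts fall at the start of the site are all illustrative):

    import re

    def reduced_representation(genome, site="GAATTC", min_len=300, max_len=700):
        """Simulate a restriction digest and keep fragments in a clonable size
        window, giving one reduced-representation subset of the genome."""
        cuts = [m.start() for m in re.finditer(site, genome)]
        bounds = [0] + cuts + [len(genome)]
        frags = [genome[a:b] for a, b in zip(bounds, bounds[1:])]
        return [f for f in frags if min_len <= len(f) <= max_len]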
Advanced Transport Operating System (ATOPS) utility library software description
NASA Technical Reports Server (NTRS)
Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.
1993-01-01
The individual software processes used in the flight computers on board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general use among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.
ERIC Educational Resources Information Center
Lancaster, F. Wilfrid, Ed.
In planning this ninth annual clinic an attempt was made to include papers on a wide range of library applications of on-line computers, as well as to include libraries of various types and various sizes. Two papers deal with on-line circulation control (the Ohio State University system, described by Hugh C. Atkinson, and the Northwestern…
Indoor air pollution and preventions in college libraries
NASA Astrophysics Data System (ADS)
Yang, Zengzhang
2017-05-01
The college library is a place with a comparatively high density of students, who often stay for long periods. Therefore, indoor air quality directly affects the reading experience and physical health of teachers and students in colleges and universities. The paper analyzes the factors influencing indoor air pollution in the library from six aspects: selection of green, environmentally friendly decorating materials and furniture; maintenance of good ventilation; reduction of electromagnetic radiation; regular disinfection; indoor greenery; and strengthening awareness of health and environmental protection. On this basis it puts forward ideas for preventing indoor air pollution and constructing a green, low-carbon library.
NASA Astrophysics Data System (ADS)
Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.
2017-12-01
The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
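As a flavor of how little glue code the declarative interfaces require, the sketch below rasterizes a million synthetic points with Datashader into a fixed-size image (synthetic data; the workflows described above would add GeoViews/Bokeh layers and Dask partitioning on top):

    import numpy as np
    import pandas as pd
    import datashader as ds
    import datashader.transfer_functions as tf

    # One million synthetic lon/lat points standing in for a real dataset
    n = 1_000_000
    df = pd.DataFrame({"lon": np.random.normal(-95, 10, n),
                       "lat": np.random.normal(38, 5, n)})

    canvas = ds.Canvas(plot_width=900, plot_height=500)
    agg = canvas.points(df, "lon", "lat")   # faithful per-pixel counts
    img = tf.shade(agg, how="log")          # log colormapping reveals structure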
NASA Astrophysics Data System (ADS)
Sublet, Jean-Christophe
2008-02-01
ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this event, the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special purpose files, the updates of the processing tools, and the significant progress in computer power today allow far better and leaner integration of Monte Carlo codes with pointwise libraries, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Benchmarks Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deeply thermal solutions and assemblies, and covers a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces the best overall results, but they also clearly suggest that improvements are still needed. This is true in particular in Light Water Reactor applications for thermal and epithermal plutonium data in all libraries, and for fast uranium data in JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics, medicine, and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. In pointing out the impact of the different nuclear data at both the library and isotopic levels, one cannot help noticing the importance of the compensating effects that result from their individual usage. Library differences are still important but tend to diminish due to the ever-increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.
Cerutti, Guillaume; Ali, Olivier; Godin, Christophe
2017-01-01
Context: The shoot apical meristem (SAM), origin of all aerial organs of the plant, is a restricted niche of stem cells whose growth is regulated by a complex network of genetic, hormonal and mechanical interactions. Studying the development of this area at the cell level using 3D microscopy time-lapse imaging is a newly emerging key to understanding the processes controlling plant morphogenesis. Computational models have been proposed to simulate these mechanisms; however, their validation on real-life data is an essential step that requires an adequate representation of the growing tissue. Achievements: The tool we introduce is a two-stage computational pipeline that generates a complete 3D triangular mesh of the tissue volume based on a segmented tissue image stack. DRACO (Dual Reconstruction by Adjacency Complex Optimization) is designed to retrieve the underlying 3D topological structure of the tissue and compute its dual geometry, while STEM (SAM Tissue Enhanced Mesh) returns a faithful triangular mesh optimized along several quality criteria (intrinsic quality, tissue reconstruction, visual adequacy). Quantitative evaluation tools measuring the performance of the method along those different dimensions are also provided. The resulting meshes can be used as input and validation for biomechanical simulations. Availability: DRACO-STEM is supplied as a package of the open-source multi-platform plant modeling library OpenAlea (http://openalea.github.io/) implemented in Python, and is freely distributed on GitHub (https://github.com/VirtualPlants/draco-stem) along with guidelines for installation and use. PMID:28424704
The Student/Library Computer Science Collaborative
ERIC Educational Resources Information Center
Hahn, Jim
2015-01-01
With funding from an Institute of Museum and Library Services demonstration grant, librarians of the Undergraduate Library at the University of Illinois at Urbana-Champaign partnered with students in computer science courses to design and build student-centered mobile apps. The grant work called for demonstration of student collaboration…
Role of Computers in Sci-Tech Libraries.
ERIC Educational Resources Information Center
Bichteler, Julie; And Others
1986-01-01
Articles in this theme issue discuss applications of microcomputers in science/technology libraries, a UNIX-based online catalog, online versus print sources, computer-based statistics, and the applicability and implications of the Matheson-Cooper Report on health science centers for science/technology libraries. A bibliography of new reference…
Macintoshed Libraries 5. Fifth Edition.
ERIC Educational Resources Information Center
Valauskas, Edward J., Ed.; Vaccaro, Bill, Ed.
This annual collection contains 16 papers about the use of Macintosh computers in libraries which include: "New Horizons in Library Training: Using HyperCard for Computer-Based Staff Training" (Pauline S. Bayne and Joe C. Rader); "Get a Closet!" (Ron Berntson); "Current Periodicals: Subject Access the Mac Way"…
ERIC Educational Resources Information Center
Krzywkowski, Valerie I., Ed.
The 15 papers in this collection discuss various aspects of computer use in libraries and several other aspects of library service not directly related to computers. Following an introduction and a list of officers, the papers are: (1) "Criminal Justice and Related Databases" (Kate E. Adams); (2) "Software and Hard Thought:…
ERIC Educational Resources Information Center
Barker, Philip
1986-01-01
Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…
Comfort with Computers in the Library.
ERIC Educational Resources Information Center
Agati, Joseph
2002-01-01
Sets forth a list of do's and don't's when integrating aesthetics, functionality, and technology into college library computer workstation furniture. The article discusses workstation access for both portable computer users and for staff, whose needs involve desktop computers that are possibly networked with printers and other peripherals. (GR)
High Throughput Genotoxicity Profiling of the US EPA ToxCast Chemical Library
A key aim of the ToxCast project is to investigate modern molecular and genetic high content and high throughput screening (HTS) assays, along with various computational tools to supplement and perhaps replace traditional assays for evaluating chemical toxicity. Genotoxicity is a...
Mass Spectral Library Quality Assurance by Inter-Library Comparison
NASA Astrophysics Data System (ADS)
Wallace, William E.; Ji, Weihua; Tchekhovskoi, Dmitrii V.; Phinney, Karen W.; Stein, Stephen E.
2017-04-01
A method to discover and correct errors in mass spectral libraries is described. Comparing across a set of highly curated reference libraries compounds that have the same chemical structure quickly identifies entries that are outliers. In cases where three or more entries for the same compound are compared, the outlier as determined by visual inspection was almost always found to contain the error. These errors were either in the spectrum itself or in the chemical descriptors that accompanied it. The method is demonstrated on finding errors in compounds of forensic interest in the NIST/EPA/NIH Mass Spectral Library. The target list of compounds checked was the Scientific Working Group for the Analysis of Seized Drugs (SWGDRUG) mass spectral library. Some examples of errors found are described. A checklist of errors that curators should look for when performing inter-library comparisons is provided.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and also provide the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from it. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
Picture This... Developing Standards for Electronic Images at the National Library of Medicine
Masys, Daniel R.
1990-01-01
New computer technologies have made it feasible to represent, store, and communicate high resolution biomedical images via electronic means. Traditional two dimensional medical images such as those on printed pages have been supplemented by three dimensional images which can be rendered, rotated, and “dissected” from any point of view. The library of the future will provide electronic access not only to words and numbers, but to pictures, sounds, and other nontextual information. There currently exist few widely-accepted standards for the representation and communication of complex images, yet such standards will be critical to the feasibility and usefulness of digital image collections in the life sciences. The National Library of Medicine is embarked on a project to develop a complete digital volumetric representation of an adult human male and female. This “Visible Human Project” will address the issue of standards for computer representation of biological structure.
Harispe, Sébastien; Ranwez, Sylvie; Janaqi, Stefan; Montmain, Jacky
2014-03-01
The semantic measures library and toolkit are robust, open-source, easy-to-use software solutions dedicated to semantic measures. They can be used for large-scale computations and analyses of semantic similarities between terms/concepts defined in terminologies and ontologies. The comparison of entities (e.g. genes) annotated by concepts is also supported. A large collection of measures is available. Not limited to a specific application context, the library and the toolkit can be used with various controlled vocabularies and ontology specifications (e.g. Open Biomedical Ontology, Resource Description Framework). The project targets both designers and practitioners of semantic measures, providing a Java library as well as a command-line tool that can be used on personal computers or computer clusters. Downloads, documentation, tutorials, evaluation and support are available at http://www.semantic-measures-library.org.
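The library itself is Java, but the kind of measure it computes is easy to show in a language-neutral way. The Python sketch below implements Resnik's classic similarity, the information content (IC) of the most informative common ancestor, over a toy ontology with made-up IC values:

    # Toy ontology (child -> parents) with illustrative information content
    # (IC) values; in practice IC = -log p(concept) estimated from a corpus.
    parents = {"spaniel": ["dog"], "dog": ["mammal"], "cat": ["mammal"],
               "mammal": ["animal"], "animal": []}
    ic = {"spaniel": 4.2, "dog": 3.1, "cat": 3.3, "mammal": 1.8, "animal": 0.4}

    def ancestors(c):
        out = {c}
        for p in parents[c]:
            out |= ancestors(p)
        return out

    def resnik(c1, c2):
        """Resnik similarity: IC of the most informative common ancestor."""
        common = ancestors(c1) & ancestors(c2)
        return max(ic[c] for c in common) if common else 0.0

    print(resnik("spaniel", "cat"))   # IC of 'mammal' -> 1.8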
Measuring Customer Satisfaction and Quality of Service in Special Libraries.
ERIC Educational Resources Information Center
White, Marilyn Domas; Abels, Eileen G.; Nitecki, Danuta
This project tested the appropriateness of SERVQUAL (i.e., an instrument widely used in the service industry for assessing service quality based on repeated service encounters rather than a particular service encounter) to measure service quality in special libraries and developed a modified version for special libraries. SERVQUAL is based on an…
Wireless Computing in the Library: A Successful Model at St. Louis Community College.
ERIC Educational Resources Information Center
Patton, Janice K.
2001-01-01
Describes the St. Louis Community College (Missouri) library's use of laptop computers in the instruction lab as a way to save space and wiring costs. Discusses the pros and cons of wireless library instruction-advantages include its flexibility and its ability to eliminate cabling. (NB)
MISR HDF-to-Binary Converter and Radiance/BRF Calculation Tools
Atmospheric Science Data Center
2013-04-01
... to have the HDF and HDF-EOS libraries for the target computer. The HDF libraries are available from The HDF Group (THG). The ... and the HDF-EOS include and library files on the target computer. The following files are included in the distribution tar file for ...
Personal Computing and Academic Library Design.
ERIC Educational Resources Information Center
Bazillion, Richard J.
1992-01-01
Notebook computers of increasing power and portability offer unique advantages to library users. Connecting easily to a campus data network, they are small silent work stations capable of drawing information from a variety of sources. Designers of new library buildings may assume that users in growing numbers will carry these multipurpose…
ERIC Educational Resources Information Center
Nixon, Carol, Comp.
This book contains the presentations of the 16th annual Computers in Libraries Conference. Contents include: "Creating New Services & Opportunities through Web Databases"; "Influencing Change and Student Learning through Staff Development"; "Top Ten Navigation Tips"; "Library of the Year: Gwinnett County…
RNA-Seq for Bacterial Gene Expression.
Poulsen, Line Dahl; Vinther, Jeppe
2018-06-01
RNA sequencing (RNA-seq) has become the preferred method for global quantification of bacterial gene expression. With the continued improvements in sequencing technology and data analysis tools, the most labor-intensive and expensive part of an RNA-seq experiment is the preparation of sequencing libraries, which is also essential for the quality of the data obtained. Here, we present a straightforward and inexpensive basic protocol for preparation of strand-specific RNA-seq libraries from bacterial RNA as well as a computational pipeline for the data analysis of sequencing reads. The protocol is based on the Illumina platform and allows easy multiplexing of samples and the removal of sequencing reads that are PCR duplicates. © 2018 John Wiley & Sons, Inc.
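The duplicate-removal step mentioned above can be sketched generically: reads sharing a mapping position and molecular identifier are collapsed to the best copy. This is an illustration of the idea only, not the protocol's actual pipeline code, and the field names are assumptions:

    def remove_pcr_duplicates(reads):
        """Collapse reads sharing mapping position, strand, and UMI, keeping
        the highest-quality copy. Each read is a dict with hypothetical keys
        chrom, pos, strand, umi, qual; a real pipeline may key duplicates
        differently (e.g. on alignment coordinates alone)."""
        best = {}
        for r in reads:
            key = (r["chrom"], r["pos"], r["strand"], r["umi"])
            if key not in best or r["qual"] > best[key]["qual"]:
                best[key] = r
        return list(best.values())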
ERIC Educational Resources Information Center
Clyde, Anne
1999-01-01
Discussion of the Year 2000 (Y2K) problem, the computer-code problem that affects computer programs or computer chips, focuses on the impact on teacher-librarians. Topics include automated library systems, access to online information services, library computers and software, and other electronic equipment such as photocopiers and fax machines.…
High-performance computational fluid dynamics: a custom-code approach
NASA Astrophysics Data System (ADS)
Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.
2016-07-01
We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
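The physics of the simplified code's test problem is easy to reproduce on a laptop. The NumPy sketch below (serial, no MPI; all parameters illustrative) marches pressure-driven single-phase channel flow to steady state and checks it against the analytic Poiseuille profile:

    import numpy as np

    # Pressure-driven laminar channel flow: du/dt = G/rho + nu * d2u/dy2 with
    # no-slip walls; the steady solution is u(y) = G/(2 mu) * y * (h - y).
    h, n = 1.0, 101
    G, rho, nu = 1.0, 1.0, 0.1
    mu = rho * nu
    y = np.linspace(0.0, h, n)
    dy = y[1] - y[0]
    dt = 0.4 * dy**2 / nu          # stable explicit diffusion step

    u = np.zeros(n)
    for _ in range(50_000):
        lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dy**2
        u[1:-1] += dt * (G / rho + nu * lap)   # u[0] = u[-1] = 0 (walls)

    exact = G / (2 * mu) * y * (h - y)
    print("max error:", np.abs(u - exact).max())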
Business Students' Perception of University Library Service Quality and Satisfaction
ERIC Educational Resources Information Center
Hsu, Maxwell K.; Cummings, Richard G.; Wang, Stephen W.
2014-01-01
The main purpose of this study is to examine the college students' perception of library services, and to what extent the quality of library services influences students' satisfaction. The findings depict the relationship between academic libraries and their users in today's digital world and identify critical factors that may sustain a viable…
Academic Library Spaces: Advancing Student Success and Helping Students Thrive
ERIC Educational Resources Information Center
Spencer, Mary Ellen; Watstein, Sarah Barbara
2017-01-01
Are today's academic libraries really designed for learning? Do library spaces impact student learning? Intending to spark broader and more informed dialogue about the relationship between the quality of learning and the quality of academic library spaces in higher education, the authors consider the concept of space as service; student learning…
Dasari, Surendra; Chambers, Matthew C.; Martinez, Misti A.; Carpenter, Kristin L.; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo J.; Tabb, David L.
2012-01-01
Spectral libraries have emerged as a viable alternative to protein sequence databases for peptide identification. These libraries contain previously detected peptide sequences and their corresponding tandem mass spectra (MS/MS). Search engines can then identify peptides by comparing experimental MS/MS scans to those in the library. Many of these algorithms employ the dot product score for measuring the quality of a spectrum-spectrum match (SSM). This scoring system does not offer a clear statistical interpretation and ignores fragment ion m/z discrepancies in the scoring. We developed a new spectral library search engine, Pepitome, which employs statistical systems for scoring SSMs. Pepitome outperformed the leading library search tool, SpectraST, when analyzing data sets acquired on three different mass spectrometry platforms. We characterized the reliability of spectral library searches by confirming shotgun proteomics identifications through RNA-Seq data. Applying spectral library and database searches on the same sample revealed their complementary nature. Pepitome identifications enabled the automation of quality analysis and quality control (QA/QC) for shotgun proteomics data acquisition pipelines. PMID:22217208
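For readers unfamiliar with the dot product score discussed above, the sketch below computes it for two stick spectra after binning and a square-root intensity transform (a common variance-stabilizing choice; bin width, mass range, and transform are illustrative, and Pepitome's own statistical scoring is deliberately different):

    import numpy as np

    def ssm_dot(query, library_entry, bin_width=1.0, max_mz=2000.0):
        """Dot-product score of a spectrum-spectrum match (SSM). Spectra are
        lists of (mz, intensity) pairs, binned onto a shared m/z grid."""
        nbins = int(max_mz / bin_width)
        def vec(peaks):
            v = np.zeros(nbins)
            for mz, inten in peaks:
                if mz < max_mz:
                    v[int(mz / bin_width)] += np.sqrt(inten)
            norm = np.linalg.norm(v)
            return v / norm if norm else v
        return float(vec(query) @ vec(library_entry))   # 1.0 = identical

Note how a fragment ion falling one bin away contributes nothing to the score: this is the ignored m/z discrepancy the paper's statistical scoring addresses.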
Software Quality Assurance and Verification for the MPACT Library Generation Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea
This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each of the software used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree to (1) ensure that it can be run without user intervention and (2) ensure that the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library's release is described in detail.
Shinozuka, Hiroshi; Forster, John W
2016-01-01
Background. Multiplexed sequencing is commonly performed on massively parallel short-read sequencing platforms such as Illumina, and the efficiency of library normalisation can affect the quality of the output dataset. Although several library normalisation approaches have been established, none are ideal for highly multiplexed sequencing due to issues of cost and/or processing time. Methods. An inexpensive and high-throughput library quantification method has been developed, based on an adaptation of the melting curve assay. Sequencing libraries were subjected to the assay using the Bio-Rad Laboratories CFX Connect™ Real-Time PCR Detection System. The library quantity was calculated through summation of the reduction of relative fluorescence units between 86 and 95 °C. Results. PCR-enriched sequencing libraries are suitable for this quantification without pre-purification of DNA. Short DNA molecules, which ideally should be eliminated from the library for subsequent processing, were differentiated from the target DNA in a mixture on the basis of differences in melting temperature. Quantification results for long sequences targeted using the melting curve assay were correlated with those from existing methods (R² > 0.77), and with those observed from MiSeq sequencing (R² = 0.82). Discussion. The results of multiplexed sequencing suggested that the normalisation performance of the described method is equivalent to that of another recently reported high-throughput bead-based method, BeNUS. However, costs for the melting curve assay are considerably lower and processing times shorter than those of other existing methods, suggesting greater suitability for highly multiplexed sequencing applications.
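The quantification rule itself is a one-liner over the instrument's melt-curve export. A hedged Python sketch, assuming temperature and RFU arrays as input:

    import numpy as np

    def library_quantity(temps, rfu, lo=86.0, hi=95.0):
        """Proxy for library quantity: summed reduction in relative
        fluorescence units (RFU) between 86 and 95 degrees C, following the
        described assay. temps and rfu are 1-D arrays from the melt step."""
        temps, rfu = np.asarray(temps), np.asarray(rfu)
        mask = (temps >= lo) & (temps <= hi)
        drops = -np.diff(rfu[mask])        # per-step change, positive = reduction
        return float(drops[drops > 0].sum())

Restricting the window to 86-95 °C is what excludes short molecules, whose lower melting temperatures put their fluorescence drop outside the summed range.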
The Integrated Library System Design Concepts for a Complete Serials Control Subsystem.
1984-08-20
Report AD-A149 379. The Integrated Library System: Design Concepts for a Complete Serials Control Subsystem. Presented to: The Pentagon Library, The Pentagon, Washington, DC 20310. Prepared by: Online Computer Systems, Inc., 20251 Century Blvd., Germantown, MD. Contract No. MDA903-82-C-0535.
NASA Astrophysics Data System (ADS)
Jiang, Jingtao; Sui, Rendong; Shi, Yan; Li, Furong; Hu, Caiqi
In this paper, 3-D models of combined fixture elements are designed, classified by function, and saved in the computer as a supporting elements library, jointing elements library, basic elements library, localization elements library, clamping elements library, adjusting elements library, etc. Automatic assembly of a 3-D combined checking fixture for an auto-body part is then presented based on modularization theory. In the virtual auto-body assembly space, a locating-constraint mapping technique and assembly rule-based reasoning are used to calculate the positions of modular elements according to the localization points and clamp points of the auto-body part. The auto-body part model is transformed from its own coordinate system to the virtual assembly space by a homogeneous transformation matrix. Automatic assembly of the different functional fixture elements and the auto-body part is implemented with API functions based on secondary development of UG. Practice has proven that the method presented in this paper is feasible and highly efficient.
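The coordinate transformation mentioned above is standard. A small NumPy sketch with illustrative rotation and translation values:

    import numpy as np

    def homogeneous(R, t):
        """4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Rotate a part 90 degrees about z and move it to its assembly position
    c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T = homogeneous(R, np.array([100.0, 50.0, 0.0]))

    p_part = np.array([10.0, 0.0, 5.0, 1.0])   # point in the part's own frame
    p_assembly = T @ p_part                    # same point in assembly space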
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Computers and Library Management.
ERIC Educational Resources Information Center
Cooke, Deborah M.; And Others
1985-01-01
This five-article section discusses changes in the management of the school library resulting from use of the computer. Topics covered include data management programs (record keeping, word processing, and bibliographies); practical applications of a database; evaluation of "Circulation Plus" software; ergonomics and computers; and…
Implementing Computer-Based Training for Library Staff.
ERIC Educational Resources Information Center
Bayne, Pauline S.; And Others
1994-01-01
Describes a computer-based training program for library staff developed at the University of Tennessee, Knoxville, that used HyperCard stacks on Macintosh computers. Highlights include staff involvement; evaluation of modules; trainee participation and feedback; staff recognition; administrative support; implementation plan; supervisory…
Visualization Software for VisIT Java Client
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billings, Jay Jay; Smith, Robert W
The VisIT Java Client (JVC) library is a lightweight thin client, designed and written purely in the native language of Java (the Python and JavaScript versions of the library use the same concept), that communicates with any new, unmodified standalone version of VisIT, a high-performance computing parallel visualization toolkit, over traditional or web sockets and dynamically determines the capabilities of the running VisIT instance, whether local or remote.
ERIC Educational Resources Information Center
Lourey, Eugene D., Comp.
The Minnesota Computer Aided Library System (MCALS) provides a basis of unification for library service program development in Minnesota for eventual linkage to the national information network. A prototype plan for communications functions is illustrated. A cost/benefits analysis was made to show the cost/effectiveness potential for MCALS. System…
Computer Output Microform Library Catalog: A Survey.
ERIC Educational Resources Information Center
Zink, Steven D.
This discussion of the use of computer output microform (COM) as a feasible alternative to the library card catalog includes a brief history of library catalogs and of microform technology since World War II. It is argued that COM catalogs are to be preferred to card catalogs, online catalogs accessed by terminals, and paper printouts. Advantages…
In-House Automation of a Small Library Using a Mainframe Computer.
ERIC Educational Resources Information Center
Waranius, Frances B.; Tellier, Stephen H.
1986-01-01
An automated library routine management system was developed in-house to create a system unique to the Library and Information Center of the Lunar and Planetary Institute, Houston, Texas. A modular approach was used to allow continuity in operations and services as the system was implemented. Acronyms, computer accounts, and file names are appended.
Microform Catalogs: A Viable Alternative for Texas Libraries.
ERIC Educational Resources Information Center
Cox, Carolyn, M.; Juergens, Bonnie
This project proposed to develop and test the use of microform catalogs produced from computer-generated magnetic tape records in both fiche and film formats. The Computer Output Microform (COM) catalog developed for this purpose is a union list of titles from the five participating libraries--Houston and Dallas Public Libraries, Texas State…
Library and Classroom Use of Copyrighted Videotapes and Computer Software.
ERIC Educational Resources Information Center
Reed, Mary Hutchings; Stanek, Debra
Designed to provide guidance for librarians, this publication expresses the opinion of the legal counsel of the American Library Association (ALA) regarding library and classroom use of copyrighted videotapes and computer programs. A discussion of videotapes considers the impact of the Copyright Revision Act of 1976 on in-classroom use, in-library…
Selecting, Acquiring, and Using Small Molecule Libraries for High-Throughput Screening
Dandapani, Sivaraman; Rosse, Gerard; Southall, Noel; Salvino, Joseph M.; Thomas, Craig J.
2015-01-01
The selection, acquisition, and use of high-quality small molecule libraries for screening is an essential aspect of drug discovery and chemical biology programs. Screening libraries continue to evolve as researchers gain a greater appreciation of the suitability of small molecules for specific biological targets, processes, and environments. The decisions surrounding the make-up of any given small molecule library are informed by a multitude of variables, and opinions vary on best practices. The fitness of any collection relies upon upfront filtering to avoid problematic compounds, assess appropriate physicochemical properties, install the ideal level of structural uniqueness, and determine the desired extent of molecular complexity. These criteria are under constant evaluation and revision as academic and industrial organizations seek out collections that yield ever-improving results from their screening portfolios. Practical questions including cost, compound management, screening sophistication, and assay objective also play a significant role in the choice of library composition. This overview attempts to offer advice to all organizations engaged in small molecule screening based upon current best practices and theoretical considerations in library selection and acquisition. PMID:26705509
DNA-Encoded Solid-Phase Synthesis: Encoding Language Design and Complex Oligomer Library Synthesis.
MacConnell, Andrew B; McEnaney, Patrick J; Cavett, Valerie J; Paegel, Brian M
2015-09-14
The promise of exploiting combinatorial synthesis for small molecule discovery remains unfulfilled due primarily to the "structure elucidation problem": the back-end mass spectrometric analysis that significantly restricts one-bead-one-compound (OBOC) library complexity. The very molecular features that confer binding potency and specificity, such as stereochemistry, regiochemistry, and scaffold rigidity, are conspicuously absent from most libraries because isomerism introduces mass redundancy and diverse scaffolds yield uninterpretable MS fragmentation. Here we present DNA-encoded solid-phase synthesis (DESPS), comprising parallel compound synthesis in organic solvent and aqueous enzymatic ligation of unprotected encoding dsDNA oligonucleotides. Computational encoding language design yielded 148 thermodynamically optimized sequences with Hamming string distance ≥ 3 and total read length <100 bases for facile sequencing. Ligation is efficient (70% yield), specific, and directional over 6 encoding positions. A series of isomers served as a testbed for DESPS's utility in split-and-pool diversification. Single-bead quantitative PCR detected 9 × 10⁴ molecules/bead and sequencing allowed for elucidation of each compound's synthetic history. We applied DESPS to the combinatorial synthesis of a 75,645-member OBOC library containing scaffold, stereochemical and regiochemical diversity using mixed-scale resin (160-μm quality control beads and 10-μm screening beads). Tandem DNA sequencing/MALDI-TOF MS analysis of 19 quality control beads showed excellent agreement (<1 ppt) between DNA sequence-predicted mass and the observed mass. DESPS synergistically unites the advantages of solid-phase synthesis and DNA encoding, enabling single-bead structural elucidation of complex compounds and synthesis using reactions normally considered incompatible with unprotected DNA. The widespread availability of inexpensive oligonucleotide synthesis, enzymes, DNA sequencing, and PCR makes implementation of DESPS straightforward, and may prompt the chemistry community to revisit the synthesis of more complex and diverse libraries.
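The Hamming-distance constraint on the encoding words can be illustrated with a simple greedy filter. This is a sketch of the distance constraint only: the published design also applied thermodynamic optimization, the word length below is illustrative, and a greedy pass may return fewer than the requested number of codes:

    import itertools

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def pick_codes(length=7, min_dist=3, limit=148):
        """Greedily select DNA words whose pairwise Hamming distance is at
        least min_dist, so sequencing errors of fewer than min_dist bases
        cannot turn one valid code word into another."""
        codes = []
        for cand in itertools.product("ACGT", repeat=length):
            word = "".join(cand)
            if all(hamming(word, c) >= min_dist for c in codes):
                codes.append(word)
                if len(codes) == limit:
                    break
        return codes

    codes = pick_codes()
    print(len(codes), codes[:3])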
Evaluating One-Shot Library Sessions: Impact on the Quality and Diversity of Student Source Use
ERIC Educational Resources Information Center
Howard, Kristina; Nicholas, Thomas; Hayes, Tish; Appelt, Christopher W.
2014-01-01
This article examines the presumption that library research workshops will increase the quality, quantity and diversity of sources students use. This study compares bibliographies of research papers written by freshman composition students who received a library research session to those of students who did not receive any library instruction. Our…
As Good as New: Recycled Computers Are a Boon to Cash-Strapped Schools
ERIC Educational Resources Information Center
Minkel, Walter
2004-01-01
There's no such thing as a school or library with too many computers. We're still nowhere near the one-student, one-computer ratio that's ideal for our schools and libraries, especially those in neighborhoods where most students don't have home computers. That's why Computers for Schools (www.pcsforschools.org) is such a great idea. The nonprofit…
Using SERVQUAL in health libraries across Somerset, Devon and Cornwall.
Martin, Susan
2003-03-01
This study provides the results of a survey conducted in the autumn of 2001 by ten NHS library services across Somerset, Devon and Cornwall. The aim of the project was to measure the service quality of each individual library and to provide an overall picture of the quality of library services within the south-west peninsula. The survey was based on SERVQUAL, a diagnostic tool developed in the 1980s, which measures service quality in terms of customer expectations and perceptions of service. The survey results have provided the librarians with a wealth of information about service quality. The service as a whole is perceived to be not only meeting but also exceeding expectations in terms of reliability, responsiveness, empathy and assurance. For the first time, the ten health library services can measure their own service quality as well as benchmark themselves against others.
ODIN system technology module library, 1972 - 1973
NASA Technical Reports Server (NTRS)
Hague, D. S.; Watson, D. A.; Glatt, C. R.; Jones, R. T.; Galipeau, J.; Phoa, Y. T.; White, R. J.
1978-01-01
ODIN/RLV is a digital computing system for the synthesis and optimization of reusable launch vehicle preliminary designs. The system consists of a library of technology modules in the form of independent computer programs and an executive program, ODINEX, which operates on the technology modules. The technology module library contains programs for estimating all major military flight vehicle system characteristics, for example, geometry, aerodynamics, economics, propulsion, inertia and volumetric properties, trajectories and missions, steady state aeroelasticity and flutter, and stability and control. A general system optimization module, a computer graphics module, and a program precompiler are available as user aids in the ODIN/RLV program technology module library.
Comparative Genomics as a Foundation for Evo-Devo Studies in Birds.
Grayson, Phil; Sin, Simon Y W; Sackton, Timothy B; Edwards, Scott V
2017-01-01
Developmental genomics is a rapidly growing field, and high-quality genomes are a useful foundation for comparative developmental studies. A high-quality genome forms an essential reference onto which the data from numerous assays and experiments, including ChIP-seq, ATAC-seq, and RNA-seq, can be mapped. A genome also streamlines and simplifies the development of primers used to amplify putative regulatory regions for enhancer screens, cDNA probes for in situ hybridization, microRNAs (miRNAs) or short hairpin RNAs (shRNA) for RNA interference (RNAi) knockdowns, mRNAs for misexpression studies, and even guide RNAs (gRNAs) for CRISPR knockouts. Finally, much can be gleaned from comparative genomics alone, including the identification of highly conserved putative regulatory regions. This chapter provides an overview of laboratory and bioinformatics protocols for DNA extraction, library preparation, library quantification, and genome assembly, from fresh or frozen tissue to a draft avian genome. Generating a high-quality draft genome can provide a developmental research group with excellent resources for their study organism, opening the doors to many additional assays and experiments.
A Survey of Navigational Computer Program in the Museum Setting.
ERIC Educational Resources Information Center
Sliman, Paula
Patron service is a high priority in the library setting, and alleviating a large percentage of the directional questions would provide librarians with more time to help patrons more thoroughly than they are currently able to. Furthermore, in view of the current economic trend of downsizing, a navigational computer system program has the potential…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
Purpose: Scatter contamination is detrimental to image quality in dedicated cone-beam breast CT (CBBCT), resulting in cupping artifacts and loss of contrast in reconstructed images. Such effects impede visualization of breast lesions and degrade quantitative accuracy. Previously, we proposed a library-based software approach to suppress scatter on CBBCT images. In this work, we quantify the efficacy and stability of this approach using datasets from 15 human subjects. Methods: A pre-computed scatter library is generated using Monte Carlo simulations for semi-ellipsoid breast models and a homogeneous fibroglandular/adipose tissue mixture encompassing the range reported in the literature. Projection datasets from 15 human subjects that cover the 95th percentile of breast dimensions and fibroglandular volume fractions were included in the analysis. Our investigations indicate that it is sufficient to consider the breast dimensions alone; variation in fibroglandular fraction does not significantly affect the scatter-to-primary ratio. The breast diameter is measured from a first-pass reconstruction; the appropriate scatter distribution is selected from the library and deformed to account for the discrepancy in total projection intensity between the clinical dataset and the simulated semi-ellipsoidal breast. The deformed scatter distribution is subtracted from the measured projections for scatter correction. Spatial non-uniformity (SNU) and contrast-to-noise ratio (CNR) were used as quantitative metrics to evaluate the results. Results: On the 15 patient cases, our method reduced the overall image SNU from 7.14%±2.94% (mean ± standard deviation) to 2.47%±0.68% in the coronal view and from 10.14%±4.1% to 3.02%±1.26% in the sagittal view. The average CNR improved by a factor of 1.49±0.40 in the coronal view and by 2.12±1.54 in the sagittal view. Conclusion: We demonstrate the robustness and effectiveness of a library-based scatter correction method using patient datasets with large variability in breast dimensions and composition. The high computational efficiency and simplicity of implementation make this approach attractive for clinical use. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
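The select-deform-subtract step described above is straightforward to prototype. Below is a minimal Python sketch assuming a precomputed library keyed by breast diameter; the data layout, scaling choice, and names are illustrative, not the authors' code.

```python
import numpy as np

def correct_scatter(measured_proj, library, est_diameter_cm):
    """Library-based scatter subtraction (illustrative sketch only)."""
    # 1) Select the scatter map for the closest precomputed breast
    #    diameter, as estimated from a first-pass reconstruction.
    d = min(library, key=lambda k: abs(k - est_diameter_cm))
    entry = library[d]
    # 2) "Deform" (here: globally rescale) the simulated scatter to
    #    compensate for the total-intensity discrepancy between the
    #    clinical projection and the semi-ellipsoid breast model.
    scale = measured_proj.sum() / entry["total_projection"]
    # 3) Subtract and clip so corrected intensities stay physical.
    return np.clip(measured_proj - scale * entry["scatter"], 0.0, None)

# Toy demo with a 4x4 projection and a two-entry library.
proj = np.full((4, 4), 100.0)
library = {
    10.0: {"scatter": np.full((4, 4), 20.0), "total_projection": 1600.0},
    14.0: {"scatter": np.full((4, 4), 35.0), "total_projection": 1600.0},
}
print(correct_scatter(proj, library, est_diameter_cm=13.2))  # ~65 everywhere
```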
[Construction and characterization of a cDNA library from human liver tissue of cirrhosis].
Chen, Xiao-hong; Chen, Zhi; Chen, Feng; Zhu, Hai-hong; Zhou, Hong-juan; Yao, Hang-ping
2005-03-01
To construct a cDNA library from human liver tissue of cirrhosis. The total RNA from human cirrhotic liver tissue was extracted using the Trizol method, and the mRNA was purified using an mRNA purification kit. The SMART technique and a CDSIII/3' primer were used for first-strand cDNA synthesis. Long-distance PCR was then used to synthesize the double-strand cDNA, which was digested by proteinase K and Sfi I and fractionated on a CHROMA SPIN-400 column. The cDNA fragments longer than 0.4 kb were collected and ligated into the lambda TriplEx2 vector. Lambda-phage packaging reactions and library amplification were then performed. The quality of both the unamplified and amplified cDNA libraries was strictly checked by conventional titer determination. Eleven plaques were randomly picked and tested using PCR with universal primers derived from the sequence flanking the vector. The titers of the unamplified and amplified libraries were 1.03 x 10(6) pfu/ml and 1.36 x 10(9) pfu/ml, respectively. The percentages of recombinants were 97.24% in the unamplified library and 99.02% in the amplified library. The average insert length was 1.02 kb (36.36% between 1 and 2 kb, and 63.64% between 0.5 and 1.0 kb). A high-quality cDNA library from human liver tissue of cirrhosis was constructed successfully, which can be used for screening and cloning new genes specifically associated with the occurrence of cirrhosis.
Library based x-ray scatter correction for dedicated cone beam breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.
Library based x-ray scatter correction for dedicated cone beam breast CT
Shi, Linxi; Karellas, Andrew; Zhu, Lei
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
Rating the Quality of Open Textbooks: How Reviewer and Text Characteristics Predict Ratings
ERIC Educational Resources Information Center
Fischer, Lane; Ernst, David; Mason, Stacie
2017-01-01
Using data collected from peer reviews for Open Textbook Library titles, this paper explores questions about rating the quality of open textbooks. The five research questions addressed the relationship between textbook and reviewer characteristics and ratings. Although reviewers gave textbooks high ratings generally, reviewers identified…
Where the Cloud Meets the Commons
ERIC Educational Resources Information Center
Ipri, Tom
2011-01-01
Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…
Quality requirements for EHR archetypes.
Kalra, Dipak; Tapuria, Archana; Austin, Tony; De Moor, Georges
2012-01-01
The realisation of semantic interoperability, in which any EHR data may be communicated between heterogeneous systems and fully understood by computers as well as people on receipt, is a challenging goal. Despite the use of standardised generic models for the EHR and standard terminology systems, too much optionality and variability exist in how particular clinical entries may be represented. Clinical archetypes provide a means of defining how generic models should be shaped and bound to terminology for specific kinds of clinical data. However, these will only contribute to semantic interoperability if libraries of archetypes can be built up consistently. This requires the establishment of design principles, editorial and governance policies, and further research to develop ways for archetype authors to structure clinical data and to use terminology consistently. Drawing on several years of work within communities of practice developing archetypes and implementing systems from them, this paper presents quality requirements for the development of archetypes. Clinical engagement on a wide scale is also needed to help grow libraries of good quality archetypes that can be certified. Vendor and eHealth programme engagement is needed to validate such archetypes and achieve safe, meaningful exchange of EHR data between systems.
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack with the Linux operating system. This manuscript details a cross-platform geospatial library, "arc4nix", to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on a remote server while other functions run in the native Python environment. It uses functional programming and meta-programming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks across multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
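The decoupled pattern described above (client-side code generation, remote execution, result retrieval) can be illustrated with a few lines of standard-library Python. The endpoint, JSON schema, and function names below are hypothetical and are not arc4nix's actual API; this is only a sketch of the architectural idea.

```python
import json
from urllib import request

SERVER = "http://gis-server.example:8000/run"  # hypothetical endpoint

def remote_arcpy_call(func_name, *args, **kwargs):
    """Construct a small arcpy snippet on the client, ship it to a
    server that holds a licensed ArcGIS install, and fetch the result.
    """
    # Meta-programming step: build Python source for the actual
    # geospatial calculation to be executed server-side.
    src = "import arcpy\nresult = arcpy.{f}(*{a!r}, **{k!r})".format(
        f=func_name, a=args, k=kwargs)
    payload = json.dumps({"source": src}).encode()
    req = request.Request(SERVER, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # blocking RPC-style call
        return json.load(resp)["result"]

# e.g., run a slope calculation remotely from a Linux client:
# slope = remote_arcpy_call("Slope_3d", "dem.tif", "slope.tif")
```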
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, Robert
Under this grant, three significant software packages were developed or improved, all with the goal of improving the ease-of-use of HPC libraries. The first component is a Python package, named DistArray (originally named Odin), that provides a high-level interface to distributed array computing. This interface is based on the popular and widely used NumPy package and is integrated with the IPython project for enhanced interactive parallel distributed computing. The second Python package is the Distributed Array Protocol (DAP) that enables separate distributed array libraries to share arrays efficiently without copying or sending messages. If a distributed array library supports the DAP, it is then automatically able to communicate with any other library that also supports the protocol. This protocol allows DistArray to communicate with the Trilinos library via PyTrilinos, which was also enhanced during this project. A third package, PyTrilinos, was extended to support distributed structured arrays (in addition to the unstructured arrays of its original design), allow more flexible distributed arrays (i.e., the restriction to double precision data was lifted), and implement the DAP. DAP support includes both exporting the protocol so that external packages can use distributed Trilinos data structures, and importing the protocol so that PyTrilinos can work with distributed data from external packages.
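The core idea of such a protocol, zero-copy sharing through a standard export hook, can be sketched as follows. The field names here only loosely follow the published Distributed Array Protocol and should be treated as illustrative rather than the actual specification.

```python
import numpy as np

class LocalBlock:
    """Toy distributed-array block exposing a DAP-style export hook.

    Each process exposes its local buffer plus distribution metadata,
    so another library can consume the array without copies or
    messages (single-process stand-in; real use is one block per rank).
    """
    def __init__(self, local, global_shape, offset):
        self.local = np.ascontiguousarray(local)
        self.global_shape = global_shape
        self.offset = offset  # start index of this block per dimension

    def __distarray__(self):
        return {
            "__version__": "0.10",
            "buffer": self.local,  # zero-copy handle to local data
            "dim_data": tuple(
                {"dist_type": "b", "size": s, "start": o, "stop": o + n}
                for s, o, n in zip(self.global_shape, self.offset,
                                   self.local.shape)),
        }

def consume(arr_like):
    """Any library can accept any exporter of the protocol."""
    meta = arr_like.__distarray__()
    return np.asarray(meta["buffer"]).sum()  # operate on the local block

block = LocalBlock(np.ones((2, 3)), global_shape=(4, 3), offset=(2, 0))
print(consume(block))  # 6.0
```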
NASA Technical Reports Server (NTRS)
Blakely, R. L.
1973-01-01
A G189A simulation of the shuttle orbiter EC/LSS was prepared and used to study payload support capabilities. Two master program libraries of the G189A computer program were prepared for the NASA/JSC computer system. Several new component subroutines were added to the G189A program library and many existing subroutines were revised to improve their capabilities. A number of special analyses were performed in support of a NASA/JSC shuttle orbiter EC/LSS payload support capability study.
Introduction to Library Public Services. Sixth Edition. Library and Information Science Text Series.
ERIC Educational Resources Information Center
Evans, G. Edward; Amodeo, Anthony J.; Carter, Thomas L.
This book covers the role, purpose, and philosophy related to each of the major functional areas of library public service. This sixth edition, on the presumption that most people know the basic facts about computer hardware, does not include the previous edition's chapter on computer basics, and instead integrates specific technological…
The Electronic Library: The Student/Scholar Workstation, CD-ROM and Hypertext.
ERIC Educational Resources Information Center
Triebwasser, Marc A.
Predicting that a large component of the library of the not so distant future will be an electronic network of file servers where information is stored for access by personal computer workstations in remote locations as well as the library, this paper discusses innovative computer technologies--particularly CD-ROM (Compact Disk-Read Only Memory)…
Y2K Resources for Public Libraries.
ERIC Educational Resources Information Center
Foster, Janet
1999-01-01
Presents information for public libraries on computer-related vulnerabilities as the century turns from 1999 to 2000. Highlights include: general Y2K information; the Y2K Bug and PCs; Y2K sites for librarians; Online Computer Library Center (OCLC) and USMARC; technological developments in cyberspace; and a list of Web sites and Y2K resources. (AEF)
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
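The same routine family remains in universal use today. As a small illustration (not part of the original 1982 distribution), SciPy exposes the FORTRAN BLAS directly from Python; the snippet below calls the double-precision AXPY kernel.

```python
import numpy as np
from scipy.linalg import blas

# The classic BLAS operation y := a*x + y (AXPY), invoked through
# SciPy's bindings to the FORTRAN routines described above.
x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
y = blas.daxpy(x, y, a=2.0)  # y = 2*x + y
print(y)                     # [12. 24. 36.]
```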
Drawing Blueprints for "Pulsing Content" Libraries
ERIC Educational Resources Information Center
Chudnov, Daniel
2007-01-01
This column presents a continuation of an article published in last month's issue of "Libraries in Computers," in which the author began to consider what it might mean to build dynamic, instantaneous libraries based solely on the materials held on the computers of people in the same space. The goal of this column is to think about what libraries…
Computer Use and Factors Related to Computer Use in Large Independent Secondary School Libraries.
ERIC Educational Resources Information Center
Currier, Heidi F.
Survey results about the use of computers in independent secondary school libraries are reported, and factors related to the presence of computers are identified. Data are from 104 librarians responding to a questionnaire sent to a sample of 136 large (over 400 students) independent secondary schools. Data are analyzed descriptively to show the…
ERIC Educational Resources Information Center
Shaw, David C.; Johnson, Dorothy M.
Complete comprehension of this paper requires a firm grasp of both mathematical demography and FORTRAN programming. The paper aims to establish a language in which complex demographic manipulations can be briefly expressed in a form intelligible both to demographic analysts and to computers. The Demographic Computer Library (DCL)…
2010-01-01
Background Little genomic or transcriptomic information on Ganoderma lucidum (Lingzhi) is known. This study aims to discover the transcripts involved in secondary metabolite biosynthesis and developmental regulation of G. lucidum using an expressed sequence tag (EST) library. Methods A cDNA library was constructed from the G. lucidum fruiting body. Its high-quality ESTs were assembled into unique sequences with contigs and singletons. The unique sequences were annotated according to sequence similarities to genes or proteins available in public databases. The detection of simple sequence repeats (SSRs) was performed by online analysis. Results A total of 1,023 clones were randomly selected from the G. lucidum library and sequenced, yielding 879 high-quality ESTs. These ESTs showed similarities to a diverse range of genes. The sequences encoding squalene epoxidase (SE) and farnesyl-diphosphate synthase (FPS) were identified in this EST collection. Several candidate genes, such as hydrophobin, MOB2, profilin and PHO84, were detected for the first time in G. lucidum. Thirteen (13) potential SSR-motif microsatellite loci were also identified. Conclusion The present study demonstrates a successful application of EST analysis in the discovery of transcripts involved in the secondary metabolite biosynthesis and the developmental regulation of G. lucidum. PMID:20230644
The effects of variable sample biomass on comparative metagenomics.
Chafee, Meghan; Maignien, Loïs; Simmons, Sheri L
2015-07-01
Longitudinal studies that integrate samples with variable biomass are essential to understand microbial community dynamics across space or time. Shotgun metagenomics is widely used to investigate these communities at the functional level, but little is known about the effects of combining low and high biomass samples on downstream analysis. We investigated the interacting effects of DNA input and library amplification by polymerase chain reaction on comparative metagenomic analysis using dilutions of a single complex template from an Arabidopsis thaliana-associated microbial community. We modified the Illumina Nextera kit to generate high-quality large-insert (680 bp) paired-end libraries using a range of 50 pg to 50 ng of input DNA. Using assembly-based metagenomic analysis, we demonstrate that DNA input level has a significant impact on community structure due to overrepresentation of low-GC genomic regions following library amplification. In our system, these differences were largely superseded by variations between biological replicates, but our results advocate verifying the influence of library amplification on a case-by-case basis. Overall, this study provides recommendations for quality filtering and de-replication prior to analysis, as well as a practical framework to address the issue of low biomass or biomass heterogeneity in longitudinal metagenomic surveys. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
Kaga, Chiaki; Okochi, Mina; Tomita, Yasuyuki; Kato, Ryuji; Honda, Hiroyuki
2008-03-01
We developed a method of effective peptide screening that combines experiments and computational analysis. The method is based on the concept that screening efficiency can be enhanced, even from limited data, by deriving a model through computational analysis, using that model to guide screening, and refining it through subsequent rounds of experiments. Here we focus on cell-adhesion peptides as a model application of this peptide-screening strategy. Cell-adhesion peptides were screened by use of a cell-based assay on a peptide array. Starting with the screening data obtained from a limited, random 5-mer library (643 sequences), a rule regarding structural characteristics of cell-adhesion peptides was extracted by fuzzy neural network (FNN) analysis. According to this rule, peptides with unfavored residues in certain positions that led to inefficient binding were eliminated from the random sequences. In the restricted, second random library (273 sequences), the yield of cell-adhesion peptides having an adhesion rate more than 1.5-fold that of the basal array support was significantly higher (31%) than in the unrestricted random library (20%). In the restricted third library (50 sequences), the yield of cell-adhesion peptides increased to 84%. We conclude that a repeated cycle of experiments screening limited numbers of peptides can be assisted by the rule-extracting feature of FNN.
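The screen-model-restrict loop can be sketched generically. In the sketch below, the assay and the extracted rule are stand-ins (a random score and a hand-written position rule) for the cell-based array assay and the FNN analysis; only the overall workflow mirrors the strategy described above.

```python
import random

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def screen(peptides):
    """Placeholder for the cell-adhesion assay on a peptide array."""
    return {p: random.random() for p in peptides}  # fake assay readout

def violates_rule(p, banned):
    """Rule of the extracted kind: reject peptides with unfavored
    residues at specific positions (`banned`: position -> residues)."""
    return any(p[i] in res for i, res in banned.items())

# Round 1: small random 5-mer library, assayed "experimentally".
lib1 = ["".join(random.choices(AMINO, k=5)) for _ in range(600)]
hits = screen(lib1)

# A rule is extracted from the round-1 data (stubbed here); the next
# random library excludes sequences the rule predicts to bind poorly.
banned = {0: "DE", 4: "P"}  # illustrative rule, not from the paper
lib2 = [p for p in ("".join(random.choices(AMINO, k=5))
                    for _ in range(2000))
        if not violates_rule(p, banned)][:270]
hits.update(screen(lib2))  # enriched round; repeat as needed
```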
NASA Astrophysics Data System (ADS)
Burello, E.; Bologa, C.; Frecer, V.; Miertus, S.
Combinatorial chemistry and technologies have been developed to a stage where synthetic schemes are available for generation of a large variety of organic molecules. The innovative concept of combinatorial design assumes that screening of a large and diverse library of compounds will increase the probability of finding an active analogue among the compounds tested. Since the rate at which libraries are screened for activity currently constitutes a limitation to the use of combinatorial technologies, it is important to be selective about the number of compounds to be synthesized. Early experience with combinatorial chemistry indicated that chemical diversity alone did not result in a significant increase in the number of generated lead compounds. Emphasis has therefore been increasingly placed on the use of computer-assisted combinatorial chemical techniques. Computational methods are valuable in the design of virtual libraries of molecular models. Selection strategies based on computed physicochemical properties of the models or of a target compound are introduced to reduce the time and costs of library synthesis and screening. In addition, computational structure-based library focusing methods can be used to perform in silico screening of the activity of compounds against a target receptor by docking the ligands into the receptor model. Three case studies are discussed dealing with the design of targeted combinatorial libraries of inhibitors of HIV-1 protease, P. falciparum plasmepsin and human urokinase as potential antiviral, antimalarial and anticancer drugs. These illustrate library focusing strategies.
Use of Computer-Based Reference Services in Texas Information Exchange Libraries.
ERIC Educational Resources Information Center
Menges, Gary L.
The Texas Information Exchange (TIE) is a state-wide library network organized in 1967 for the purpose of sharing resources among Texas libraries. Its membership includes 37 college and university libraries, the Texas State Library, and ten public libraries that serve as Major Resource Centers in the Texas State Library Communications Network. In…
Froenicke, Lutz; Lavelle, Dean; Martineau, Belinda; Perroud, Bertrand; Michelmore, Richard
2013-01-01
Several applications of high throughput genome and transcriptome sequencing would benefit from a reduction of the high-copy-number sequences in the libraries being sequenced and analyzed, particularly when applied to species with large genomes. We adapted and analyzed the consequences of a method that utilizes a thermostable duplex-specific nuclease for reducing the high-copy components in transcriptomic and genomic libraries prior to sequencing. This reduces the time, cost, and computational effort of obtaining informative transcriptomic and genomic sequence data for both fully sequenced and non-sequenced genomes. It also reduces contamination from organellar DNA in preparations of nuclear DNA. Hybridization in the presence of 3 M tetramethylammonium chloride (TMAC), which equalizes the rates of hybridization of GC and AT nucleotide pairs, reduced the bias against sequences with high GC content. Consequences of this method on the reduction of high-copy and enrichment of low-copy sequences are reported for Arabidopsis and lettuce. PMID:23409088
Matvienko, Marta; Kozik, Alexander; Froenicke, Lutz; Lavelle, Dean; Martineau, Belinda; Perroud, Bertrand; Michelmore, Richard
2013-01-01
Several applications of high throughput genome and transcriptome sequencing would benefit from a reduction of the high-copy-number sequences in the libraries being sequenced and analyzed, particularly when applied to species with large genomes. We adapted and analyzed the consequences of a method that utilizes a thermostable duplex-specific nuclease for reducing the high-copy components in transcriptomic and genomic libraries prior to sequencing. This reduces the time, cost, and computational effort of obtaining informative transcriptomic and genomic sequence data for both fully sequenced and non-sequenced genomes. It also reduces contamination from organellar DNA in preparations of nuclear DNA. Hybridization in the presence of 3 M tetramethylammonium chloride (TMAC), which equalizes the rates of hybridization of GC and AT nucleotide pairs, reduced the bias against sequences with high GC content. Consequences of this method on the reduction of high-copy and enrichment of low-copy sequences are reported for Arabidopsis and lettuce.
The WEIZMASS spectral library for high-confidence metabolite identification
NASA Astrophysics Data System (ADS)
Shahaf, Nir; Rogachev, Ilana; Heinig, Uwe; Meir, Sagit; Malitsky, Sergey; Battat, Maor; Wyner, Hilary; Zheng, Shuning; Wehrens, Ron; Aharoni, Asaph
2016-08-01
Annotation of metabolites is an essential, yet problematic, aspect of mass spectrometry (MS)-based metabolomics assays. The current repertoire of definitive annotations of metabolite spectra in public MS databases is limited and suffers from a lack of chemical and taxonomic diversity. Furthermore, the heterogeneity of the data prevents the development of universally applicable metabolite annotation tools. Here we present a combined experimental and computational platform to address this key issue in metabolomics. WEIZMASS is a unique reference metabolite spectral library developed from high-resolution MS data acquired from a structurally diverse set of 3,540 plant metabolites. We also present MatchWeiz, a multi-module strategy using a probabilistic approach to match library and experimental data. This strategy allows efficient and high-confidence identification of dozens of metabolites in model and exotic plants, including metabolites not previously reported in plants or found in only a few plant species to date.
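Library matching of this kind ultimately reduces to scoring an observed spectrum against reference spectra. The sketch below uses a plain cosine score over tolerance-matched peaks as a simple stand-in for MatchWeiz's probabilistic multi-module strategy; the tolerance and the spectra are invented for illustration.

```python
import numpy as np

def cosine_score(query, reference, tol=0.01):
    """Score two centroided spectra, given as (m/z, intensity) lists,
    by the cosine of their shared-peak intensity vectors."""
    q, r = [], []
    for mz, inten in query:
        # Find the closest reference peak within the m/z tolerance.
        diffs = [abs(mz - m) for m, _ in reference]
        j = int(np.argmin(diffs))
        if diffs[j] <= tol:
            q.append(inten)
            r.append(reference[j][1])
    if not q:
        return 0.0
    q, r = np.sqrt(q), np.sqrt(r)  # soften dominant peaks
    return float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

lib_spec = [(147.0441, 30.0), (175.1190, 100.0), (302.2020, 55.0)]
obs_spec = [(147.0448, 28.0), (175.1185, 90.0), (290.1113, 12.0)]
print(round(cosine_score(obs_spec, lib_spec), 3))
```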
College and University Libraries.
ERIC Educational Resources Information Center
Shubert, Joseph F., Ed.; Josey, E. J., Ed.
1986-01-01
Following an introductory discussion by E. J. Josey that provides a perspective on college and university libraries, the following essays are presented: (1) "Academic Library Planning--Definitions and Early Planning Studies in Academic Libraries" (Stanton F. Biddle); (2) "Academic Libraries and Academic Computing--Rationale for a…
The Application of Computers to Library Technical Processing
ERIC Educational Resources Information Center
Veaner, Allen B.
1970-01-01
Describes computer applications to acquisitions and technical processing and reports in detail on Stanford's development work in automated technical processing. Author is Assistant Director for Bibliographic Operation, Stanford University Libraries. (JB)
Health sciences library outreach to family caregivers: a call to service.
Howrey, Mary M
2018-04-01
This commentary discusses the information needs of family caregivers and care recipients in the United States. Health sciences library services and outreach activities that support family caregivers include: (1) advocacy, (2) resource building, and (3) programming and education. Ethical issues related to the privacy and confidentiality of clients are outlined in the commentary for information service providers. Also, continuing professional education resources are identified to assist librarians in providing high-quality information services for this special family caregiver population, such as those designed by the National Library of Medicine (NLM) through the NLM 4 Caregivers program.
Health sciences library outreach to family caregivers: a call to service
Howrey, Mary M.
2018-01-01
This commentary discusses the information needs of family caregivers and care recipients in the United States. Health sciences library services and outreach activities that support family caregivers include: (1) advocacy, (2) resource building, and (3) programming and education. Ethical issues related to the privacy and confidentiality of clients are outlined in the commentary for information service providers. Also, continuing professional education resources are identified to assist librarians in providing high-quality information services for this special family caregiver population, such as those designed by the National Library of Medicine (NLM) through the NLM 4 Caregivers program. PMID:29632449
The High Level Data Reduction Library
NASA Astrophysics Data System (ADS)
Ballester, P.; Gabasch, A.; Jung, Y.; Modigliani, A.; Taylor, J.; Coccato, L.; Freudling, W.; Neeser, M.; Marchetti, E.
2015-09-01
The European Southern Observatory (ESO) provides pipelines to reduce data for most of the instruments at its Very Large Telescope (VLT). These pipelines are written as part of the development of VLT instruments, and are used both in ESO's operational environment and by science users who receive VLT data. All the pipelines are highly instrument-specific. However, experience has shown that the independently developed pipelines include significant overlap, duplication and slight variations of similar algorithms. In order to reduce the cost of development, verification and maintenance of ESO pipelines, and at the same time improve the scientific quality of pipeline data products, ESO decided to develop a limited set of versatile high-level scientific functions to be used in all future pipelines. These routines are provided by the High-level Data Reduction Library (HDRL). To reach this goal, we first compare several candidate algorithms and verify them during a prototype phase using data sets from several instruments. Once the best algorithm and error model have been chosen, we start a design and implementation phase. The coding of HDRL is done in plain C using the Common Pipeline Library (CPL) functionality. HDRL adopts consistent function naming conventions and a well-defined API to minimise future maintenance costs, implements error propagation, uses pixel quality information, employs OpenMP to take advantage of multi-core processors, and is verified with extensive unit and regression tests. This poster describes the status of the project and the lessons learned during the development of reusable code implementing algorithms of high scientific quality.
ERIC Educational Resources Information Center
Heath, Fred M.; Thompson, Bruce; Cook, Colleen; Thompson, Russel L.; Kyrillidou, Martha
2002-01-01
Includes four articles that discuss LibQUAL+[TM], a collaborative effort of the Association of Research Libraries and Texas A&M University responding to the need for greater accountability in measuring the delivery of library services to research library users. Discusses the reliability of LibQUAL+[TM] scores in measuring perceived library…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
... quality and the metrics the USPTO should use to measure progress. The roundtables are open to the public... will be held at the Los Angeles Public Library--Central Library, which is located at 630 W. 5th Street... USPTO is conducting two roundtables, one at the Los Angeles Public Library--Central Library facility...
Report on the Total System Computer Program for Medical Libraries.
ERIC Educational Resources Information Center
Divett, Robert T.; Jones, W. Wayne
The objective of this project was to develop an integrated computer program for the total operations of a medical library including acquisitions, cataloging, circulation, reference, a computer catalog, serials controls, and current awareness services. The report describes two systems approaches: the batch system and the terminal system. The batch…
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
The report of these two hearings on high definition information systems begins by noting that they are digital, and that they are likely to handle computing, telecommunications, home security, computer imaging, storage, fiber optics networks, multi-dimensional libraries, and many other local, national, and international systems. (It is noted that…
ERIC Educational Resources Information Center
Gude, Gilbert; And Others
This set of papers presented to the General Research Libraries Division of the International Federation of Library Associations (IFLA) during its 47th annual conference (1981) includes: "The Effect of the Introduction of Computers on Library and Research Staff," by Gilbert Gude; "Libraries as Information Service Agencies…
ERIC Educational Resources Information Center
Wood, Richard J.
The computer program described is written in BASIC and, although it was developed for use at Slippery Rock State College, it could be adapted easily for other libraries using Library of Congress classification and cataloging rules. The program uses simple sequences of instructions and explanations followed by questions. Branching is employed to…
Mind Transplants Or: The Role of Computer Assisted Instruction in the Future of the Library.
ERIC Educational Resources Information Center
Lyon, Becky J.
Computer assisted instruction (CAI) may well represent the next phase in the involvement of the library or learning resources center with media and the educational process. The Lister Hill Center Experimental CAI Network was established in July, 1972, on the recommendation of the National Library of Medicine, to test the feasibility of sharing CAI…
Systems Analysis, Machineable Circulation Data and Library Users and Non-Users.
ERIC Educational Resources Information Center
Lubans, John, Jr.
A study to be made with computer-based circulation data of the non-use and use of a large academic library is discussed. A search of the literature reveals that computer-based circulation systems can be, but have not been, utilized to provide data bases for systematic analyses of library users and resources. The data gathered in the circulation…
Computers in Libraries, 2000: Proceedings (15th, Washington, D.C., March 15-17, 2000).
ERIC Educational Resources Information Center
Nixon, Carol, Comp.; Burmood, Jennifer, Comp.
Topics of the Proceedings of the 15th Annual Computers in Libraries Conference (March 15-17, 2000) include: Linux and open source software in an academic library; a Master Trainer Program; what educators need to know about multimedia and copyright; how super searchers find business information online; managing print costs; new technologies in wide…
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
ERIC Educational Resources Information Center
Library Computing, 1985
1985-01-01
Special supplement to "Library Journal" and "School Library Journal" covers topics of interest to school, public, academic, and special libraries planning for automation: microcomputer use, readings in automation, online searching, databases of microcomputer software, public access to microcomputers, circulation, creating a…
Riding a tsunami in ocean science education
NASA Astrophysics Data System (ADS)
Reed, Donald L.
1998-08-01
An experiment began in late 1994 in which the WWW plays a critical role in the instruction of students in an oceanography course for non-majors. The format of the course consists of an equal blend of traditional lectures, tutorial-style exercises delivered from the course WWW site, classroom activities, such as poster presentations and group projects, and field excursions to local marine environments. The driving force behind the technology component of the course is to provide high-quality educational materials that can be accessed at the convenience of the student. These materials include course information and handouts, lecture notes, self-paced exercises, a virtual library of electronic resources, information on newsworthy marine events, and late-breaking oceanographic research that impacts the population of California. The course format was designed to partially meet the demands of today's students, involve students in the learning process, and prepare students for using technology in work following graduation. Students have reacted favorably to the use of the WWW and comments by peers have been equally supportive. Students are more focused in their efforts during the computer-based exercises than while listening to lecture presentations. The implementation of this form of learning, however, has not, as yet, reduced the financial cost of the course or the amount of instructor effort in providing a high quality education. Interactions between the instructor and students have increased significantly as the informality of a computer laboratory promotes individual discussions and electronic communication provides students with easy (and frequent) access to the instructor outside of class.
ERIC Educational Resources Information Center
Havas, George D.
This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…
Salvat, Regina S; Verma, Deeptak; Parker, Andrew S; Kirsch, Jack R; Brooks, Seth A; Bailey-Kellogg, Chris; Griswold, Karl E
2017-06-27
Therapeutic proteins of wide-ranging function hold great promise for treating disease, but immune surveillance of these macromolecules can drive an antidrug immune response that compromises efficacy and even undermines safety. To eliminate widespread T-cell epitopes in any biotherapeutic and thereby mitigate this key source of detrimental immune recognition, we developed a Pareto optimal deimmunization library design algorithm that optimizes protein libraries to account for the simultaneous effects of combinations of mutations on both molecular function and epitope content. Active variants identified by high-throughput screening are thus inherently likely to be deimmunized. Functional screening of an optimized 10-site library (1,536 variants) of P99 β-lactamase (P99βL), a component of ADEPT cancer therapies, revealed that the population possessed high overall fitness, and comprehensive analysis of peptide-MHC II immunoreactivity showed the population possessed lower average immunogenic potential than the wild-type enzyme. Although similar functional screening of an optimized 30-site library (2.15 × 10^9 variants) revealed reduced population-wide fitness, numerous individual variants were found to have activity and stability better than the wild type despite bearing 13 or more deimmunizing mutations per enzyme. The immunogenic potential of one highly active and stable 14-mutation variant was assessed further using ex vivo cellular immunoassays, and the variant was found to silence T-cell activation in seven of the eight blood donors who responded strongly to wild-type P99βL. In summary, our multiobjective library-design process readily identified large and mutually compatible sets of epitope-deleting mutations and produced highly active but aggressively deimmunized constructs in only one round of library screening.
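The Pareto selection at the heart of such a design algorithm is easy to state in code. The sketch below keeps only mutation sets that are non-dominated on a predicted-function score and an epitope count; the scores and mutation names are invented, and the published algorithm optimizes whole libraries rather than single variants.

```python
def pareto_front(designs):
    """Return designs not dominated on (function score, epitope count).

    A design is dominated if some other design is at least as good on
    both objectives and strictly better on one. Scores are assumed
    precomputed (e.g., a sequence potential and MHC II epitope counts).
    """
    front = []
    for d in designs:
        dominated = any(
            o["function"] >= d["function"] and o["epitopes"] <= d["epitopes"]
            and (o["function"] > d["function"] or o["epitopes"] < d["epitopes"])
            for o in designs)
        if not dominated:
            front.append(d)
    return front

designs = [  # hypothetical mutation sets and scores
    {"mutations": ("A49G",), "function": 0.95, "epitopes": 38},
    {"mutations": ("A49G", "L75V"), "function": 0.90, "epitopes": 31},
    {"mutations": ("K12E",), "function": 0.80, "epitopes": 35},  # dominated
]
print([d["mutations"] for d in pareto_front(designs)])
```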
NCSTRL: Design and Deployment of a Globally Distributed Digital Library.
ERIC Educational Resources Information Center
Davies, James R.; Lagoze, Carl
2000-01-01
Discusses the development of a digital library architecture that allows the creation of digital libraries within the World Wide Web. Describes a digital library, NCSTRL (Networked Computer Science Technical Research Library), within which the work has taken place and explains Dienst, a protocol and architecture for distributed digital libraries.…
Getting a Piece of the Pie: R&D at the Apple Library.
ERIC Educational Resources Information Center
Ertel, Monica
1990-01-01
The Apple Library (the library at Apple Computer, Inc.) currently reports to the research and development arm of the company, a relationship that has been mutually advantageous. The library has been involved in research through a library users group, a grant program, and a laboratory within the library. (MES)
ERIC Educational Resources Information Center
International Federation of Library Associations, The Hague (Netherlands).
Papers on university and other research libraries, presented at the 1983 International Federation of Library Associations (IFLA) conference, include: (1) "The Impact of Technology on Users of Academic and Research Libraries," in which C. Lee Jones (United States) focuses on the impact of technical advances in computing and…
Automation, Resource Sharing, and the Small Academic Library.
ERIC Educational Resources Information Center
Miller, Arthur H., Jr.
1983-01-01
Discussion of Illinois experiences in library cooperation and computerization (OCLC, Library Computer System, LIBRAS) describes use of library materials, benefits and drawbacks of online networking, experiences at Lake Forest College (Illinois), and six tasks recommended for small academic libraries as preparation for major changes toward…
Next-generation sequencing library construction on a surface.
Feng, Kuan; Costa, Justin; Edwards, Jeremy S
2018-05-30
Next-generation sequencing (NGS) has revolutionized almost all fields of biology, agriculture and medicine, and is widely utilized to analyse genetic variation. Over the past decade, the NGS pipeline has been steadily improved, and the entire process is currently relatively straightforward. However, NGS instrumentation still requires upfront library preparation, which can be a laborious process requiring significant hands-on time. Herein, we present a simple but robust approach to streamline library preparation by utilizing surface-bound transposases to construct DNA libraries directly on a flowcell surface. The surface-bound transposases directly fragment genomic DNA while simultaneously attaching the library molecules to the flowcell. We sequenced and analysed a Drosophila genome library generated by this surface tagmentation approach, and we showed that our surface-bound library quality was comparable to that of a library prepared with a commercial kit. In addition to the time and cost savings, our approach does not require PCR amplification of the library, which eliminates potential problems associated with PCR duplicates. This is the first reported study to construct libraries directly on a flowcell. We believe our technique could be incorporated into the existing Illumina sequencing pipeline to simplify the workflow, reduce costs, and improve data quality.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions where required. Model uncertainty is reduced by decreasing the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open-source libraries and high-performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period in a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
Benchmarking comparison and validation of MCNP photon interaction data
NASA Astrophysics Data System (ADS)
Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.
2017-09-01
The objective of this research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
Bibliographies, High School Mathematics.
ERIC Educational Resources Information Center
Woods, Paul E.
This annotated bibliography is a compilation of a number of highly regarded book lists consisting of library books and textbooks for grades 7-12. The books in this list are currently in print, and the content is representative of the following areas of mathematics--mathematical recreation, computers, arithmetic, algebra, Euclidean geometry,…
Library design practices for success in lead generation with small molecule libraries.
Goodnow, R A; Guba, W; Haap, W
2003-11-01
The generation of novel structures amenable to rapid and efficient lead optimization comprises an emerging strategy for success in modern drug discovery. Small molecule libraries of sufficient size and diversity to increase the chances of discovery of novel structures make the high throughput synthesis approach the method of choice for lead generation. Despite an industry trend toward smaller, more focused libraries, the need to generate novel lead structures makes larger libraries a necessary strategy. For libraries of several thousand or more members, solid phase synthesis approaches are the most suitable. While the technology and chemistry necessary for small molecule library synthesis continue to advance, success in lead generation requires rigorous consideration in the library design process to ensure the synthesis of molecules possessing the proper characteristics for subsequent lead optimization. Without proper selection of library templates and building blocks, solid phase synthesis methods often generate molecules which are too heavy, too lipophilic and too complex to be useful for lead optimization. The appropriate filtering of virtual library designs with multiple computational tools allows the generation of information-rich libraries within a drug-like molecular property space. An understanding of the hit-to-lead process provides a practical guide to molecular design characteristics. Examples of leads generated from library approaches also provide a benchmarking of successes as well as aspects for continued development of library design practices.
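Property filtering of a virtual library of the kind described above is routinely done with open-source cheminformatics tools. Below is a minimal sketch using RDKit and the conventional rule-of-five cutoffs; this is one common choice, not a toolset or thresholds prescribed by the paper.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def drug_like(smiles):
    """Rule-of-five style filter applied before committing a virtual
    design to solid-phase synthesis (thresholds: Lipinski cutoffs)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparseable structure
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

virtual_library = ["CCO", "c1ccccc1C(=O)O",
                   "CCCCCCCCCCCCCCCCCCCCCCCCCCCC"]  # last: far too lipophilic
print([s for s in virtual_library if drug_like(s)])
```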
High-performance computing on GPUs for resistivity logging of oil and gas wells
NASA Astrophysics Data System (ADS)
Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.
2017-10-01
We developed, and implemented in software, an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm use the NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processor unit (CPU) or the graphics processor unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
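The numerical core is a symmetric positive-definite SLAE solved by Cholesky decomposition. A dense CPU-only sketch with SciPy is shown below purely to illustrate the scheme; the implementation described above is sparse and runs on the GPU via CUDA libraries.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# The FEM stiffness matrix of the 2D forward problem is symmetric
# positive definite, so A x = b is solved via A = L L^T.
rng = np.random.default_rng(0)
m = rng.standard_normal((500, 500))
A = m @ m.T + 500 * np.eye(500)  # SPD stand-in for the FEM matrix
b = rng.standard_normal(500)

c, low = cho_factor(A)           # Cholesky factorization
x = cho_solve((c, low), b)       # forward/back substitution
print(np.allclose(A @ x, b))     # True
```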
Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis
Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.
2016-01-01
Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a MATLAB (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm^2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available. PMID:27487888
Evolutionary concepts in biobanking - the BC BioLibrary
2009-01-01
Background Medical research to improve health care faces a major problem in the relatively limited availability of adequately annotated and collected biospecimens. This limitation is creating a growing gap between the pace of scientific advances and successful exploitation of this knowledge. Biobanks are an important conduit for transfer of biospecimens (tissues, blood, body fluids) and related health data to research. They have evolved outside of the historical source of tissue biospecimens, clinical pathology archives. Research biobanks have developed advanced standards, protocols, databases, and mechanisms to interface with researchers seeking biospecimens. However, biobanks are often limited in their capacity and ability to ensure quality in the face of increasing demand. Our strategy to enhance both capacity and quality in research biobanking is to create a new framework that repatriates the activity of biospecimen accrual for biobanks to clinical pathology. Methods The British Columbia (BC) BioLibrary is a framework to maximize the accrual of high-quality, annotated biospecimens into biobanks. The BC BioLibrary design primarily encompasses: 1) specialized biospecimen collection units embedded within clinical pathology and linked to a biospecimen distribution system that serves biobanks; 2) a systematic process to connect potential donors with biobanks, and to connect biobanks with consented biospecimens; and 3) interdisciplinary governance and oversight informed by public opinion. Results The BC BioLibrary has been embraced by biobanking leaders and translational researchers throughout BC, across multiple health authorities, institutions, and disciplines. An initial pilot network of three Biospecimen Collection Units has been successfully established. In addition, two public deliberation events have been held to obtain input from the public on the BioLibrary and on issues including consent, collection of biospecimens and governance. Conclusion The BC BioLibrary framework addresses common issues for clinical pathology, biobanking, and translational research across multiple institutions and clinical and research domains. We anticipate that our framework will lead to enhanced biospecimen accrual capacity and quality, reduced competition between biobanks, and a transparent process for donors that enhances public trust in biobanking. PMID:19909513
Students' Perceived Quality of Library Facilities and Services in Nigerian Private Universities
ERIC Educational Resources Information Center
Oluwunmi, A. O.; Durodola, O. D.; Ajayi, C. A.
2016-01-01
In a highly competitive academic environment, students are becoming more selective and demanding in their choice of University. Hence, it is essential for educational institutions, particularly privately-owned institutions, to be interested in getting feedback on the quality of their facilities and services. With a focus on four private…
Positioning Open Access Journals in a LIS Journal Ranking
ERIC Educational Resources Information Center
Xia, Jingfeng
2012-01-01
This research uses the h-index to rank the quality of library and information science journals between 2004 and 2008. Selected open access (OA) journals are included in the ranking to assess current OA development in support of scholarly communication. It is found that OA journals have gained momentum supporting high-quality research and…
Neilson, Christine J
2010-01-01
The Saskatchewan Health Information Resources Partnership (SHIRP) provides library instruction to Saskatchewan's health care practitioners and students on placement in health care facilities as part of its mission to provide province-wide access to evidence-based health library resources. A portable computer lab was assembled in 2007 to provide hands-on training in rural health facilities that do not have computer labs of their own. Aside from some minor inconveniences, the introduction and operation of the portable lab has gone smoothly. The lab has been well received by SHIRP patrons and continues to be an essential part of SHIRP outreach.
Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience
Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin
2009-01-01
Objective Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. Conclusion With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community. PMID:20037671
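The tracker component idea can be sketched in a few lines. The snippet below is a language-agnostic illustration only; IGSTK itself is a C++ library, and none of these class or method names belong to its API.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple       # (x, y, z) in tracker coordinates
        quaternion: tuple     # (qx, qy, qz, qw) orientation

    class Tracker(ABC):
        """Common interface so the application stays independent of the device."""
        @abstractmethod
        def open(self): ...
        @abstractmethod
        def get_tool_pose(self, tool_id: str) -> Pose: ...

    class SimulatedTracker(Tracker):
        def open(self):
            pass                                # real drivers open a serial/USB port here
        def get_tool_pose(self, tool_id):
            return Pose((0.0, 0.0, 100.0), (0, 0, 0, 1))

    tracker = SimulatedTracker()
    tracker.open()
    print(tracker.get_tool_pose("pointer"))     # pose to superimpose on the image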
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
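The underlying task is compact enough to sketch with the classic signed-tetrahedron identity; note that VOFTools itself uses a different (quadrilateral-decomposition and 2D-projection) formula, so this illustrates the problem rather than the library's routine. Faces must be consistently oriented.

    import numpy as np

    def polyhedron_volume(vertices, faces):
        """V = (1/6) |sum over triangles of v0 . (v1 x v2)| for a closed polyhedron."""
        v = np.asarray(vertices, dtype=float)
        vol = 0.0
        for face in faces:                       # fan-triangulate each face
            for i in range(1, len(face) - 1):
                a, b, c = v[face[0]], v[face[i]], v[face[i + 1]]
                vol += np.dot(a, np.cross(b, c))
        return abs(vol) / 6.0

    # Unit cube: 8 vertices, 6 quadrilateral faces, outward orientation.
    verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
    faces = [(0,3,2,1),(4,5,6,7),(0,1,5,4),(2,3,7,6),(1,2,6,5),(0,4,7,3)]
    print(polyhedron_volume(verts, faces))       # -> 1.0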
Towards a high performance geometry library for particle-detector simulations
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We will focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. As a result, the experience gained with these developments will be beneficial to other parts of the simulation software, such as the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.
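The single- versus many-particle API distinction can be sketched with NumPy vectorization standing in for SIMD: the same "safety distance" kernel applied to one point or to a whole batch in one call. Shapes and names below are illustrative, not the USolids/Geant interface.

    import numpy as np

    R = 10.0                                    # sphere radius

    def safety_single(point):
        """Distance from one point to the sphere surface (scalar API)."""
        return abs(np.linalg.norm(point) - R)

    def safety_many(points):
        """Same kernel for an (N, 3) batch in one vectorized, SIMD-friendly call."""
        return np.abs(np.linalg.norm(points, axis=1) - R)

    pts = np.random.uniform(-20, 20, size=(1_000_000, 3))
    batch = safety_many(pts)                    # one pass over all particles
    assert np.isclose(batch[0], safety_single(pts[0]))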
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292,050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order-of-magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
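The screening idea generalizes readily: train a classifier on cheap descriptors, then flag only predicted high performers for expensive simulation. A hedged sketch with scikit-learn; the descriptors, labels, and data below are synthetic stand-ins for the paper's MOF features and adsorption threshold.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((5000, 4))        # e.g. void fraction, surface area, pore size, ...
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(5000)) > 0.9   # "high uptake"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    flagged = clf.predict(X_te)      # only these proceed to costly simulation
    print(f"flagged {flagged.mean():.1%} of the library for full screening")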
Chin, Jefferson; Wood, Elizabeth; Peters, Grace S; Drexler, Dieter M
2016-02-01
In the early stages of drug discovery, high-throughput screening (HTS) of compound libraries against pharmaceutical targets is a common method to identify potential lead molecules. For these HTS campaigns to be efficient and successful, continuous quality control of the compound collection is necessary and crucial. However, the large number of compound samples and the limited sample amount pose unique challenges. Presented here is a proof-of-concept study for a novel process flow for the quality control screening of small-molecule compound libraries that consumes only minimal amounts of samples and affords compound-specific molecular data. This process employs an acoustic sample deposition (ASD) technique for the offline sample preparation by depositing nanoliter volumes in an array format onto microscope glass slides followed by matrix-assisted laser desorption/ionization mass spectrometric (MALDI-MS) analysis. An initial study of a 384-compound array employing the ASD-MALDI-MS workflow resulted in a 75% first-pass positive identification rate with an analysis time of <1 s per sample. © 2015 Society for Laboratory Automation and Screening.
Nasajpour, Mohammad Reza; Ashrafi-rizi, Hasan; Soleymani, Mohammad Reza; Shahrzadi, Leila; Hassanzadeh, Akbar
2014-01-01
Introduction: Today, the websites of college and university libraries play an important role in providing the necessary services for clients. These websites not only allow the users to access different collections of library resources, but also provide them with the necessary guidance in order to use the information. The goal of this study is the quality evaluation of the college library websites in Iranian Medical Universities based on the Stover model. Material and Methods: This study uses an analytical survey method and is an applied study. The data gathering tool is the standard checklist provided by Stover, which was modified by the researchers for this study. The statistical population is the college library websites of the Iranian Medical Universities (146 websites) and census method was used for investigation. The data gathering method was a direct access to each website and filling of the checklist was based on the researchers’ observations. Descriptive and analytical statistics (Analysis of Variance (ANOVA)) were used for data analysis with the help of the SPSS software. Findings: The findings showed that in the dimension of the quality of contents, the highest average belonged to type one universities (46.2%) and the lowest average belonged to type three universities (24.8%). In the search and research capabilities, the highest average belonged to type one universities (48.2%) and the lowest average belonged to type three universities. In the dimension of facilities provided for the users, type one universities again had the highest average (37.2%), while type three universities had the lowest average (15%). In general the library websites of type one universities had the highest quality (44.2%), while type three universities had the lowest quality (21.1%). Also the library websites of the College of Rehabilitation and the College of Paramedics, of the Shiraz University of Medical Science, had the highest quality scores. Discussion: The results showed that there was a meaningful difference between the quality of the college library websites and the university types, resulting in college libraries of type one universities having the highest average score and the college libraries of type three universities having the lowest score. PMID:25540794
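The study's core statistical test is easy to reproduce in outline. A minimal sketch with fabricated score vectors; only the one-way ANOVA itself mirrors the reported analysis.

    from scipy.stats import f_oneway

    type_one   = [46.2, 44.8, 47.5, 43.9]   # placeholder quality scores (%)
    type_two   = [33.0, 35.4, 31.8, 34.1]
    type_three = [24.8, 21.1, 23.5, 22.0]

    stat, p = f_oneway(type_one, type_two, type_three)
    print(f"F = {stat:.2f}, p = {p:.4f}")    # small p: quality differs by university type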
Think Quality! The Deming Approach Does Work in Libraries.
ERIC Educational Resources Information Center
Mackey, Terry; Mackey, Kitty
1992-01-01
Presents W. Edwards Deming's Total Quality Management method and advocates its adoption in libraries. The 14 points that form the basis of Deming's philosophy are discussed in the context of the library setting. A flow chart of the reference process and user survey questions are included. (MES)
Library Users' Service Desires: A LibQUAL+ Study
ERIC Educational Resources Information Center
Thompson, Bruce; Kyrillidou, Martha; Cook, Colleen
2008-01-01
The present study was conducted to explore library users' desired service quality levels on the twenty-two core LibQUAL+ items. Specifically, we explored similarities and differences in users' desired library service quality levels across user groups (i.e., undergraduate students, graduate students, and faculty), across geographic locations (i.e.,…
Service Quality: A Concept not Fully Explored.
ERIC Educational Resources Information Center
Hernon, Peter; Nitecki, Danuta A.
2001-01-01
Examines the concept of service quality in libraries. Highlights include assessment; service quality versus user satisfaction; measuring service quality, including SERVQUAL; planning; experiences at Texas A& M University in cooperation with ARL (Association of Research Libraries) that resulted in LibQUAL+; and conceptual issues. (Contains 54…
Portable classroom leads to partnership.
Le Ber, Jeanne Marie; Lombardo, Nancy T; Weber, Alice; Bramble, John
2004-01-01
Library faculty participation on the School of Medicine Curriculum Steering Committee led to a unique opportunity to partner technology and teaching utilizing the library's portable wireless classroom. The pathology lab course master expressed a desire to revise the curriculum using patient cases and direct access to the Web and library resources. Since the pathology lab lacked computers, the library's portable wireless classroom provided a solution. Originally developed to provide maximum portability and flexibility, the wireless classroom consists of ten laptop computers configured with wireless cards and an access point. While the portable wireless classroom led to a partnership with the School of Medicine, there were additional benefits and positive consequences for the library.
Flynn, Allen J; Bahulekar, Namita; Boisvert, Peter; Lagoze, Carl; Meng, George; Rampton, James; Friedman, Charles P
2017-01-01
Throughout the world, biomedical knowledge is routinely generated and shared through primary and secondary scientific publications. However, there is too much latency between publication of knowledge and its routine use in practice. To address this latency, what is actionable in scientific publications can be encoded to make it computable. We have created a purpose-built digital library platform to hold, manage, and share actionable, computable knowledge for health called the Knowledge Grid Library. Here we present it with its system architecture.
ERIC Educational Resources Information Center
Choudhury, Sayeed; Hobbs, Benjamin; Lorie, Mark; Flores, Nicholas; Coleman, Anita; Martin, Mairead; Kuhlman, David L.; McNair, John H.; Rhodes, William A.; Tipton, Ron; Agnew, Grace; Nicholson, Dennis; Macgregor, George
2002-01-01
Includes four articles that address issues related to digital libraries. Highlights include a framework for evaluating digital library services, particularly academic research libraries; interdisciplinary approaches to education about digital libraries that includes library and information science and computing; digital rights management; and the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Eric M.
2004-05-20
The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.
Study of college library appealing information system: A case of Longyan University
NASA Astrophysics Data System (ADS)
Liao, Jin-Hui
2014-10-01
The complaints from readers at university libraries mainly focus on service attitude, quality of service, reading environment, the management system, etc. Librarians should realize that reader complaints can actually improve library service, and should communicate with complaining readers in a friendly manner. In addition, the Longyan University library should establish an internal management system, improve library hardware facilities, improve the quality of librarians, and optimize the knowledge structure of librarians, so as to improve the quality of service for readers and reduce complaints. On this basis, we have designed an appeal (complaint) information system, built on a cryptographic mechanism, that provides readers with online, remote, and anonymous complaint functions.
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data, in which image quality is limited by the lack of detail (maximum, approximately 50 voxels/mm3), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented and real-time volume rendered as a 3D model of high fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high fidelity (approximately 125 voxels/mm3) and true color, 2D, or optional anaglyph stereoscopic 3D was achieved on a standard Core 2 Duo personal computer with a GeForce 8800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm. With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas, some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
The Event Detection and the Apparent Velocity Estimation Based on Computer Vision
NASA Astrophysics Data System (ADS)
Shimojo, M.
2012-08-01
The high spatial and temporal resolution data obtained by the telescopes aboard Hinode have revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to the prominence eruption observed by NoRH and the polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
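Dense optical flow of the kind described is a one-call affair in OpenCV. A minimal sketch; frame1.png/frame2.png are placeholders for consecutive solar images (e.g., NoRH or XRT frames exported as grayscale PNGs), and the parameter values are generic defaults, not the study's settings.

    import cv2
    import numpy as np

    prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    # Farneback dense optical flow: one (dx, dy) displacement vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

    speed = np.linalg.norm(flow, axis=2)   # px/frame; scale by plate scale and
    print(speed.max())                     # cadence to get an apparent velocity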
The Advance of Computing from the Ground to the Cloud
ERIC Educational Resources Information Center
Breeding, Marshall
2009-01-01
A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…
ERIC Educational Resources Information Center
Dillon, Martin; And Others
The Online Computer Library Center Internet Resource project focused on the nature of electronic textual information available through remote access using the Internet and the problems associated with creating machine-readable cataloging (MARC) records for these objects using current USMARC format for computer files and "Anglo-American…
Technical Considerations for Reduced Representation Bisulfite Sequencing with Multiplexed Libraries
Chatterjee, Aniruddha; Rodger, Euan J.; Stockwell, Peter A.; Weeks, Robert J.; Morison, Ian M.
2012-01-01
Reduced representation bisulfite sequencing (RRBS), which couples bisulfite conversion and next generation sequencing, is an innovative method that specifically enriches genomic regions with a high density of potential methylation sites and enables investigation of DNA methylation at single-nucleotide resolution. Recent advances in the Illumina DNA sample preparation protocol and sequencing technology have vastly improved sequencing throughput capacity. Although the new Illumina technology is now widely used, the unique challenges associated with multiplexed RRBS libraries on this platform have not been previously described. We have made modifications to the RRBS library preparation protocol to sequence multiplexed libraries on a single flow cell lane of the Illumina HiSeq 2000. Furthermore, our analysis incorporates a bioinformatics pipeline specifically designed to process bisulfite-converted sequencing reads and evaluate the output and quality of the sequencing data generated from the multiplexed libraries. We obtained an average of 42 million paired-end reads per sample for each flow-cell lane, with a high unique mapping efficiency to the reference human genome. Here we provide a roadmap of modifications, strategies, and troubleshooting approaches we implemented to optimize sequencing of multiplexed libraries on an RRBS background. PMID:23193365
Computation, Mathematics and Logistics Department Report for Fiscal Year 1978.
1980-03-01
A reference library on storage technology and related areas is now composed of two thousand documents. The most comprehensive tools available run at DTNSRDC on the CDC 6000 Computer System for a variety of applications, including Navy logistics, library science (tracking technical documents on advanced ship design), ocean science (monitoring research projects for the University of Virginia at Charlottesville), and contract management.
How to Communicate with a Machine: On Reading a Public Library's OPAC
ERIC Educational Resources Information Center
Saarti, Jarmo; Raivio, Jouko
2011-01-01
This article presents a reading of the user interface in one public library system. Its aim is to find out the frames and competences required and used in the communication between the computer and the patron. The authors see the computer as a text that is to be read by the user who wants to search for information from the library. The transition…
A technical assessment of the porcine ejaculated spermatozoa for a sperm-specific RNA-seq analysis.
Gòdia, Marta; Mayer, Fabiana Quoos; Nafissi, Julieta; Castelló, Anna; Rodríguez-Gil, Joan Enric; Sánchez, Armand; Clop, Alex
2018-04-26
The study of the boar sperm transcriptome by RNA-seq can provide relevant information on sperm quality and fertility and might contribute to animal breeding strategies. However, the analysis of the spermatozoa RNA is challenging, as these cells harbor very low amounts of highly fragmented RNA, and the ejaculates also contain other cell types with larger amounts of non-fragmented RNA. Here, we describe a strategy for successful boar sperm purification, RNA extraction and RNA-seq library preparation. Using these approaches our objectives were: (i) to evaluate the sperm recovery rate (SRR) after boar spermatozoa purification by density centrifugation using the non-porcine-specific commercial reagent BoviPure™; (ii) to assess the correlation between SRR and sperm quality characteristics; (iii) to evaluate the relationship between sperm cell RNA load and sperm quality traits; and (iv) to compare different library preparation kits for both total RNA-seq (SMARTer Universal Low Input RNA and TruSeq RNA Library Prep kit) and small RNA-seq (NEBNext Small RNA and TailorMix miRNA Sample Prep v2) for high-throughput sequencing. Our results show that pig SRR (~22%) is lower than in other mammalian species and that it is not significantly dependent on the sperm quality parameters analyzed in our study. Moreover, no relationship between the RNA yield per sperm cell and sperm phenotypes was found. We compared an RNA-seq library preparation kit optimized for low amounts of fragmented RNA with a standard kit designed for high amounts of high-quality input RNA and found that for sperm, a protocol designed to work on low-quality RNA is essential. We also compared two small RNA-seq kits and did not find substantial differences in their performance. We propose the methodological workflow described for the RNA-seq screening of the boar spermatozoa transcriptome. Abbreviations: FPKM: fragments per kilobase of transcript per million mapped reads; KRT1: keratin 1; miRNA: micro-RNA; miscRNA: miscellaneous RNA; Mt rRNA: mitochondrial ribosomal RNA; Mt tRNA: mitochondrial transfer RNA; OAZ3: ornithine decarboxylase antizyme 3; ORT: osmotic resistance test; piRNA: Piwi-interacting RNA; PRM1: protamine 1; PTPRC: protein tyrosine phosphatase receptor type C; rRNA: ribosomal RNA; snoRNA: small nucleolar RNA; snRNA: small nuclear RNA; SRR: sperm recovery rate; tRNA: transfer RNA.
ERIC Educational Resources Information Center
Manjarrez, Carlos A.; Schoembs, Kyle
2011-01-01
Over the past decade, policy discussions about public access computing in libraries have focused on the role that these institutions play in bridging the digital divide. In these discussions, public access computing services are generally targeted at individuals who either cannot afford a computer and Internet access, or have never received formal…
Library Theory and Research Section. Education and Research Division. Papers.
ERIC Educational Resources Information Center
International Federation of Library Associations, The Hague (Netherlands).
Papers on library/information science theory and research, which were presented at the 1983 International Federation of Library Associations (IFLA) conference, include: (1) "The Role of the Library in Computer-Aided Information and Documentation Systems," in which Wolf D. Rauch (West Germany) asserts that libraries must adapt to the…
Libraries Online!: Microsoft Partnering with American Library Association (ALA).
ERIC Educational Resources Information Center
Machovec, George S., Ed.
1995-01-01
Describes Libraries Online, a pilot project created by Microsoft and the American Library Association to develop ways to provide access to information technologies to underserved populations. Presents the nine public libraries that will receive cash grants, staff training, computer hardware and software, and technical support to help support local…
Electronic Journals in Academic Libraries: A Comparison of ARL and Non-ARL Libraries.
ERIC Educational Resources Information Center
Shemberg, Marian; Grossman, Cheryl
1999-01-01
Describes a survey dealing with academic library provision of electronic journals and other electronic resources that compared ARL (Association of Research Libraries) members to non-ARL members. Highlights include full-text electronic journals; computers in libraries; online public access catalogs; interlibrary loan and electronic reserves; access…
The National Library of Medicine Programs and Services, Fiscal Year 1974.
ERIC Educational Resources Information Center
National Library of Medicine (DHEW), Bethesda, MD.
The activities and projects of the National Library of Medicine are described. New and continuing programs in library services and operations, on-line computer retrieval services, grants for library assistance, audiovisual programs, and health communications research are included. International activities of the Library are outlined. Summary…
Eastman, Peter; Friedrichs, Mark S.; Chodera, John D.; Radmer, Randall J.; Bruns, Christopher M.; Ku, Joy P.; Beauchamp, Kyle A.; Lane, Thomas J.; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R.; Pande, Vijay S.
2012-01-01
OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added. PMID:23316124
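A minimal usage sketch of the layered design described above, written against the current OpenMM Python API (which postdates this abstract); input.pdb is a placeholder for any solvated protein structure. The same script runs unmodified on CUDA, OpenCL, or CPU back ends because the library layer hides all hardware-specific code.

    from openmm import LangevinMiddleIntegrator, unit
    from openmm.app import PDBFile, ForceField, Simulation, PME

    pdb = PDBFile("input.pdb")
    forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
    system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME)

    integrator = LangevinMiddleIntegrator(300 * unit.kelvin,      # temperature
                                          1 / unit.picosecond,    # friction
                                          0.002 * unit.picoseconds)  # step size
    sim = Simulation(pdb.topology, system, integrator)
    sim.context.setPositions(pdb.positions)
    sim.minimizeEnergy()
    sim.step(5000)    # 10 ps of dynamics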
NASA Astrophysics Data System (ADS)
Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano
2017-10-01
One of the most accurate approaches for calculating the lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and lack of automation in the frameworks using this methodology, which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, it solves the Boltzmann transport equation to obtain κ, and it allows fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, including for materials characterized by strong electron localization and correlation. Combining AAPL with the pseudo-hybrid functional ACBN0 makes it possible to improve accuracy without increasing computational requirements.
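For orientation, the quantity AAPL targets has a standard closed form in the single-mode relaxation-time approximation; the textbook expression below is the common starting point for BTE-based κ calculations, not AAPL's full solution.

    % Lattice thermal conductivity tensor from phonon modes \lambda,
    % with N q-points and unit-cell volume V; per mode: heat capacity c,
    % group-velocity components v, and lifetime \tau.
    \kappa^{\alpha\beta} = \frac{1}{N V} \sum_{\lambda}
        c_{\lambda}\, v_{\lambda}^{\alpha}\, v_{\lambda}^{\beta}\, \tau_{\lambda}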
Academic Information Services: A Library Management Perspective.
ERIC Educational Resources Information Center
Allen, Bryce
1995-01-01
Using networked information resources to communicate research results has great potential for academic libraries; this development will require collaboration among libraries, scholars, computing centers, and university presses. Library managers can help overcome collaboration barriers by developing appropriate organizational structures, selecting…
Computers and Libraries-A Reply
ERIC Educational Resources Information Center
Salton, Gerard
1971-01-01
A consideration of the various possibilities which are available for improving library operations and collection control makes it plain that the response of the library community ought to be towards greater cooperation among library centers and more extensive standardization. (31 references) (Author)
Service Quality and Customer Satisfaction: An Assessment and Future Directions.
ERIC Educational Resources Information Center
Hernon, Peter; Nitecki, Danuta A.; Altman, Ellen
1999-01-01
Reviews the literature of library and information science to examine issues related to service quality and customer satisfaction in academic libraries. Discusses assessment, the application of a business model to higher education, a multiple constituency approach, decision areas regarding service quality, resistance to service quality, and future…
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
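The single-detection-limit case of ROS is small enough to sketch directly. The snippet below is a simplified illustration with fabricated data; the R package described above implements the full multiple-detection-limit method.

    import numpy as np
    from scipy.stats import norm

    detects = np.array([0.8, 1.1, 1.6, 2.3, 4.0, 7.5])   # measured values (ug/L)
    n_cens = 4                                           # values reported "<0.5"
    n = len(detects) + n_cens

    # Weibull plotting positions i/(n+1); censored values occupy the lowest ranks.
    pp = np.arange(1, n + 1) / (n + 1)
    pp_cens, pp_det = pp[:n_cens], pp[n_cens:]

    # Regress log-concentration on normal scores, using detected values only.
    slope, intercept = np.polyfit(norm.ppf(pp_det), np.log(np.sort(detects)), 1)

    # Impute censored observations from the fitted line; combine and summarize.
    imputed = np.exp(intercept + slope * norm.ppf(pp_cens))
    full = np.concatenate([imputed, np.sort(detects)])
    print(f"mean = {full.mean():.2f}, sd = {full.std(ddof=1):.2f}")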
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravel, B.; Newville, M.; UC)
2010-07-20
A software package for the analysis of X-ray absorption spectroscopy (XAS) data is presented. This package is based on the IFEFFIT library of numerical and XAS algorithms and is written in the Perl programming language using the Perl/Tk graphics toolkit. The programs described here are: (i) ATHENA, a program for XAS data processing, (ii) ARTEMIS, a program for EXAFS data analysis using theoretical standards from FEFF and (iii) HEPHAESTUS, a collection of beamline utilities based on tables of atomic absorption data. These programs enable high-quality data analysis that is accessible to novices while still powerful enough to meet the demands of an expert practitioner. The programs run on all major computer platforms and are freely available under the terms of a free software license.
Quality Management and Self Assessment Tools for Public Libraries.
ERIC Educational Resources Information Center
Evans, Margaret Kinnell
This paper describes a two-year study by the British Library Research and Innovation Centre that examined the potential of self-assessment for public library services. The approaches that formed the basis for the investigation were the Business Excellence Model, the Quality Framework, and the Democratic Approach. Core values were identified by…
University Rankings: How Well Do They Measure Library Service Quality?
ERIC Educational Resources Information Center
Jackson, Brian
2015-01-01
University rankings play an increasingly large role in shaping the goals of academic institutions and departments, while removing universities themselves from the evaluation process. This study compares the library-related results of two university ranking publications with scores on the LibQUAL+™ survey to identify if library service quality--as…
ERIC Educational Resources Information Center
Cook, Colleen; Thompson, Bruce
2001-01-01
Investigated the psychometric integrity of scores from the LibQUAL+ evaluation of perceived library service quality conducted by ARL (Association of Research Libraries). Examines score structure, score reliability, score correlation and concurrent validity coefficients, scale means, and scale standardized norms, and considers the potential of the…
LISTENing to healthcare students: the impact of new library facilities on the quality of services.
Haldane, Graham C
2003-06-01
Following a low assessment of 'Learning resources' provision by the Quality Assurance Agency, the librarian of Homerton College, School of Health Studies commenced the LISTEN Project, a long-term study to monitor the effects of planned interventions on the quality of library provision. Surveys of entry-to-register student nurses & midwives were conducted in 1999 and 2001 by extensive questionnaires, inviting Likert-scaled and free text responses. Following a college relocation, students made greater than expected use of a new health studies library in Cambridge, and significantly less use of the local teaching hospital library. Using both a satisfaction index and a non-parametric test of mean scores, student evaluation of library services in Cambridge significantly improved following relocation. The physical accommodation and location of library services remain important to healthcare students. Identifiable improvements to the quality of services, however, will overcome initial resistance to change. Education providers must ensure the best mix of physical and electronic services for students who spend much of their time on clinical placement.
The Vendors' Corner: Biblio-Techniques' Library and Information System (BLIS).
ERIC Educational Resources Information Center
Library Software Review, 1984
1984-01-01
Describes online catalog and integrated library computer system designed to enhance Washington Library Network's software. Highlights include system components; implementation options; system features (integrated library functions, database design, system management facilities); support services (installation and training, software maintenance and…
2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries
ERIC Educational Resources Information Center
Colby, Jennifer
2015-01-01
This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…
NASA Astrophysics Data System (ADS)
Ceres, M.; Heselton, L. R., III
1981-11-01
This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), and will assist in the maintenance of these programs. The manual contains detailed flow diagrams and associated descriptions for each computer program routine and subroutine. Complete computer program listings are also included. This information should be used when changes are made in the computer programs. The operating system has been designed to minimize operator intervention.
Technology and the Modern Library.
ERIC Educational Resources Information Center
Boss, Richard W.
1984-01-01
Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…
Modification and identification of a vector for making a large phage antibody library.
Zhang, Guo-min; Chen, Yü-ping; Guan, Yuan-zhi; Wang, Yan; An, Yun-qing
2007-11-20
The large phage antibody library is used to obtain high-affinity human antibodies, and the Loxp/cre site-specific recombination system is a potential method for constructing a large phage antibody library. In the present study, a phage antibody library vector, pDF, was reconstructed to construct diabodies more quickly and conveniently without impairing the homologous recombination and expression functions of the vector, and thus to integrate construction of the large phage antibody library with the preparation of diabodies. scFv was obtained by overlap polymerase chain reaction (PCR) amplification with the newly designed VL and VH extension primers. loxp511 was flanked by VL and VH, and the endonuclease ACC III encoding sequences were introduced on both sides of loxp511. scFv was cloned into the vector pDF to obtain the vector pDscFv. The vector expression function was identified and the feasibility of diabody preparation was evaluated. A large phage antibody library was constructed in pDscFv. Several antigens were used to screen the antibody library and the quality of the antibody library was evaluated. The phage antibody library expression vector pDscFv was successfully constructed and confirmed to express functional scFv. The large phage antibody library constructed using this vector was of high diversity. Screening of the library against 6 antigens confirmed the generation of specific antibodies to these antigens. Two antibodies were subjected to enzymatic digestion and prepared into diabodies with functional expression. The reconstructed vector pDscFv retains its recombination capability and expression function and can be used to construct large phage antibody libraries. It provides a convenient and rapid method for preparing diabodies after simple enzymatic digestion, facilitating clinical trials and the application of antibody therapy.
ERIC Educational Resources Information Center
Lubans, John, Jr.; And Others
Computer-based circulation systems, it is widely believed, can be utilized to provide data for library use studies. The study described in this report involves using such a data base to analyze aspects of library use and non-use and types of users. Another major objective of this research was the testing of machine-readable circulation data…
Close the Gate, Lock the Windows, Bolt the Doors: Securing Library Computers. Online Treasures
ERIC Educational Resources Information Center
Balas, Janet
2005-01-01
This article, written by a systems librarian at the Monroeville Public Library, discusses a major issue affecting all computer users, security. It indicates that while, staying up-to-date on the latest security issues has become essential for all computer users, it's more critical for network managers who are responsible for securing computer…
Touring the Campus Library from the World Wide Web.
ERIC Educational Resources Information Center
Mosley, Pixey Anne; Xiao, Daniel
1996-01-01
The philosophy, design, implementation and evaluation of a World Wide Web-accessible Virtual Library Tour of Texas A & M University's Evans Library is presented. Its design combined technical computer issues and library instruction expertise. The tour can be used to simulate a typical walking tour through the library or heading directly to a…
ERIC Educational Resources Information Center
Smyth, Carol B.; Grannell, Dorothy S.; Moore, Miriam
The Literacy Resource Center project, a program of the Wayne Township Public Library also known as the Morrisson-Reeves Library (Richmond, Indiana), involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer-assisted, other technology, employment oriented,…
Paper Cuts Don't Hurt at the Gerstein Library
ERIC Educational Resources Information Center
Cunningham, Heather; Feder, Elah; Muise, Isaac
2010-01-01
The Gerstein Science Information Centre (Gerstein Library) is one of 40 libraries within the University of Toronto (U of T) and is the largest academic science and health science library in Canada. It offers 109 computers and two networked printers for student, staff, and faculty use. In addition, the library provides patrons' laptops with…
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
Open-Source Syringe Pump Library
Wijnen, Bas; Hunt, Emily J.; Anzalone, Gerald C.; Pearce, Joshua M.
2014-01-01
This article explores a new open-source method for developing and manufacturing high-quality scientific equipment suitable for use in virtually any laboratory. A syringe pump was designed using freely available open-source computer aided design (CAD) software and manufactured using an open-source RepRap 3-D printer and readily available parts. The design, bill of materials and assembly instructions are globally available to anyone wishing to use them. Details are provided covering the use of the CAD software and the RepRap 3-D printer. The use of an open-source Raspberry Pi computer as a wireless control device is also illustrated. Performance of the syringe pump was assessed and the methods used for assessment are detailed. The cost of the entire system, including the controller and web-based control interface, is on the order of 5% or less of what one would expect to pay for a commercial syringe pump having similar performance. The design should suit the needs of any research activity requiring a syringe pump, including carefully controlled dosing of reagents, pharmaceuticals, and delivery of viscous 3-D printer media, among other applications. PMID:25229451
Benchmarking, Total Quality Management, and Libraries.
ERIC Educational Resources Information Center
Shaughnessy, Thomas W.
1993-01-01
Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)
Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.
2017-01-01
Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030
Microcomputers in the Anesthesia Library.
ERIC Educational Resources Information Center
Wright, A. J.
The combination of computer technology and library operation is helping to alleviate such library problems as escalating costs, increasing collection size, deteriorating materials, unwieldy arrangement schemes, poor subject control, and the acquisition and processing of large numbers of rarely used documents. Small special libraries such as…
BioSig: The Free and Open Source Software Library for Biomedical Signal Processing
Vidaurre, Carmen; Sander, Tilmann H.; Schlögl, Alois
2011-01-01
BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals. PMID:21437227
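As an illustration of the kind of preprocessing step described above, the sketch below band-pass filters a multichannel EEG array with SciPy. It deliberately does not use the BioSig API itself; the function name, channel count and 8-30 Hz band are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass_eeg(eeg, fs, lo=8.0, hi=30.0, order=4):
        # Zero-phase band-pass filter, a typical artifact-reduction /
        # feature-extraction step ahead of classification (illustrative
        # SciPy code, not a BioSig routine).
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, eeg, axis=-1)

    fs = 250.0                                # sampling rate in Hz
    eeg = np.random.randn(8, int(10 * fs))    # 8 channels, 10 s of fake data
    filtered = bandpass_eeg(eeg, fs)
    print(filtered.shape)                     # (8, 2500)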
Cheminformatic Analysis of the US EPA ToxCast Chemical Library
The ToxCast project is employing high throughput screening (HTS) technologies, along with chemical descriptors and computational models, to develop approaches for screening and prioritizing environmental chemicals for further toxicity testing. ToxCast Phase I generated HTS data f...
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
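The three components the paper names (a generic collection, generic algorithms over it, and algebraic combining functions) can be sketched in Python with mpi4py, assuming a block distribution across ranks. The class and method names are invented for illustration; the original library is a C++ template library.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    class DistVector:
        # Each rank owns one contiguous block of the logical vector.
        def __init__(self, local_elems):
            self.local = list(local_elems)

        def map(self, f):
            # Generic algorithm applied element-wise to the local block.
            return DistVector(f(x) for x in self.local)

        def total(self):
            # Algebraic combining function (here addition): combine
            # locally, then allreduce across ranks.
            return comm.allreduce(sum(self.local), op=MPI.SUM)

    rank = comm.Get_rank()
    v = DistVector(range(rank * 4, (rank + 1) * 4))   # block distribution
    result = v.map(lambda x: x * x).total()           # global sum of squares
    if rank == 0:
        print("sum of squares:", result)

Run with, e.g., "mpiexec -n 4 python demo.py"; the same source executes unchanged on any MPI installation, which is the portability point the paper makes.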
The Future of School Library Media Centers.
ERIC Educational Resources Information Center
Craver, Kathleen W.
1984-01-01
Examines impact of technology on school library media program development and role of school librarian. Technological trends (computerized record keeping, computer-assisted instruction, networking, home computers, videodiscs), employment and economic trends, education of school librarians, social and behavioral trends, and organizational and…
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koziol, Quincey
The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.
NASA Astrophysics Data System (ADS)
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/~zhangfh.
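The core operation, interpolating spectra at irregularly distributed stellar parameters with an RBF scheme, can be sketched with SciPy's RBFInterpolator (the paper's own code is in MATLAB). The parameter ranges, kernel choice and random stand-in spectra below are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    n_stars, n_wave = 200, 500
    # Irregularly distributed parameters: (log Teff, log g, [Fe/H])
    params = rng.uniform([3.5, 0.0, -2.0], [4.5, 5.0, 0.5], size=(n_stars, 3))
    spectra = rng.random((n_stars, n_wave))     # stand-in library spectra

    interp = RBFInterpolator(params, spectra, kernel="thin_plate_spline",
                             neighbors=50)      # local RBF network
    target = np.array([[3.76, 4.4, 0.0]])       # solar-like star
    flux = interp(target)                       # interpolated spectrum
    print(flux.shape)                           # (1, 500)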
G STL: the geostatistical template library in C++
NASA Astrophysics Data System (ADS)
Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef
2002-10-01
The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Yet, despite their obvious differences, all these algorithms share many commonalities on which to build a geostatistics programming library, lest the resulting library is poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us elegantly to express the commonalities of the geostatistical algorithms into computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable, it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of these concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use G STL to build two sequential simulation programs working on two different types of grids—a surface with faults and an unstructured grid—without requiring any change to the G STL code.
Computer programs: Operational and mathematical, a compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.
Closing the Gap: The Maturing of Quality Assurance in Australian University Libraries
ERIC Educational Resources Information Center
Tang, Karen
2012-01-01
A benchmarking review of the quality assurance practices of the libraries of the Australian Technology Network conducted in 2006 revealed exemplars of best practice, but also sector-wide gaps. A follow-up review in 2010 indicated the best practices that remain relevant. While some gaps persist, there has been improvement across the libraries and…
ERIC Educational Resources Information Center
Jankowska, Maria Anna; Hertel, Karen; Young, Nancy J.
2006-01-01
The LibQUAL+[TM] survey was conducted to determine user satisfaction and expectations concerning library service quality. The results of the "22 items and a box" constituted a rich source of information for the University of Idaho (UI) Library's strategic planning process. Focusing on graduate students, this study used three…
Structure of Perceptions of Service Quality in Libraries: A LibQUAL+ Study.
ERIC Educational Resources Information Center
Thompson, Bruce; Cook, Colleen; Heath, Fred
2003-01-01
Used confirmatory factor analysis to evaluate the score integrity of LibQUALl+, an instrument to measure perceptions of library service quality. Results for 60,027 graduate and undergraduate students suggest that the model implied by LibQUAL is reasonable and invariant across independent samples and fits all three major subgroups of library users.…
SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Dolly, S; Cai, B
Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed with C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots per inch (DPI) settings on modern high-resolution monitors. The industry-standard software design pattern Model-View-ViewModel (MVVM) is chosen as the major architecture of ACE for a clean coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module for simultaneously displaying 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, J; Bradshaw, B; Godette, K
Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded, as were any subsequent modifications that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
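The matching step described above, projecting a registered contour into a library-derived PCA basis and ranking cases by Mahalanobis distance, can be sketched with NumPy. The array shapes and random stand-in contours are illustrative assumptions, not the study's data.

    import numpy as np

    rng = np.random.default_rng(1)
    library = rng.standard_normal((120, 300))   # 120 cases x 300 contour coords
    test = rng.standard_normal(300)             # registered test-case contour

    mean = library.mean(axis=0)
    X = library - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    k = 10
    comps, var = Vt[:k], (s[:k] ** 2) / (len(library) - 1)

    lib_scores = X @ comps.T                    # library cases in PCA space
    test_score = (test - mean) @ comps.T
    # PCA scores are uncorrelated, so the Mahalanobis distance reduces to
    # a variance-weighted Euclidean distance over the component scores.
    d2 = ((lib_scores - test_score) ** 2 / var).sum(axis=1)
    best = int(np.argmin(d2))                   # best-matched library case
    print("best match: case", best)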
NASA Technical Reports Server (NTRS)
Rinker, Nancy A.
1994-01-01
The role of librarians today is drastically influenced by the changing nature of information and library services. The museum-like libraries of yesterday are a thing of the past: today's libraries are bustling with life, activity, and the sounds of new technologies. Libraries are replacing their paper card catalogs with state-of-the-art online systems, which provide faster and more comprehensive search capabilities. Even the resources themselves are changing. New formats for information, such as CD-ROM's, are becoming popular for all types of publications, from bibliographic tools to encyclopedias to electronic journals, even replacing print materials completely in some cases. Today it is almost impossible to walk into a library and find the information you need without coming into contact with at least one computer system. Librarians are not only struggling to keep up with the technological advancements of the day, but they are becoming information intermediaries: they must teach library users how to use all of the new systems and electronic resources. Not surprisingly, bibliographic instruction itself has taken on a new look and feel in these electronically advanced libraries. Many libraries are experimenting with the development of expert systems and other computer aided instruction interfaces for teaching patrons how to use the library and its resources. One popular type of interface in library instruction programs is hypertext, which utilizes 'stacks' or linked pages of information. Hypertext stacks can incorporate color graphics along with text to provide a more interesting interface and entice users into trying out the system. Another advantage of hypertext is that it is generally easy to use, even for those unfamiliar with computers. As such, it lends itself well to application in libraries, which often serve a broad range of clientele. This paper will discuss the design, development, and implementation of a hypertext library tour in a special library setting. The library featured in the electronic library tour is the National Aeronautics and Space Administration's Technical Library at Langley Research Center in Hampton, Virginia.
Interface for the documentation and compilation of a library of computer models in physiology.
Summers, R. L.; Montani, J. P.
1994-01-01
A software interface for the documentation and compilation of a library of computer models in physiology was developed. The interface is an interactive program built within a word processing template in order to provide ease and flexibility of documentation. A model editor within the interface directs the model builder as to standardized requirements for incorporating models into the library and provides the user with an index to the levels of documentation. The interface and accompanying library are intended to facilitate model development, preservation and distribution and will be available for public use. PMID:7950046
QCDLoop: A comprehensive framework for one-loop scalar integrals
NASA Astrophysics Data System (ADS)
Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia
2016-12-01
We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the available new features such as the extension to the complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms to improve the computational speed. We benchmark the performance of the new library, and provide practical examples of phenomenological implementations by interfacing this new library to Monte Carlo programs.
OCTANET--an electronic library network: I. Design and development.
Johnson, M F; Pride, R B
1983-01-01
The design and development of the OCTANET system for networking among medical libraries in the midcontinental region is described. This system's features and configuration may be attributed, at least in part, to normal evolution of technology in library networking, remote access to computers, and development of machine-readable data bases. Current functions and services of the system are outlined and implications for future developments in computer-based networking are discussed. PMID:6860825
Computer Simulation of the Circulation Subsystem of a Library
ERIC Educational Resources Information Center
Shaw, W. M., Jr.
1975-01-01
When circulation data are used as input parameters for a computer simulation of a library's circulation subsystem, the results of the simulation provide information on book availability and delays. The model may be used to simulate alternative loan policies. (Author/LS)
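A toy Monte Carlo version of such a circulation model is sketched below: given a daily request probability and a loan period, it estimates book availability and the mean delay a patron faces, and can be rerun under alternative loan policies. All parameter values are invented for illustration.

    import random

    def simulate(loan_days=21, daily_request_p=0.05, days=3650, seed=0):
        # Single-copy model: the book is either on the shelf or due back
        # on a known day; requests arrive as daily Bernoulli trials.
        random.seed(seed)
        due = 0
        found = missed = wait = 0
        for day in range(days):
            if random.random() < daily_request_p:
                if day >= due:
                    found += 1
                    due = day + loan_days
                else:
                    missed += 1
                    wait += due - day
        avail = found / (found + missed) if (found + missed) else 1.0
        return avail, (wait / missed if missed else 0.0)

    for policy in (28, 21, 14):      # alternative loan policies
        a, w = simulate(loan_days=policy)
        print(f"loan={policy}d  availability={a:.2f}  mean delay={w:.1f}d")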
Free Oscilloscope Web App Using a Computer Mic, Built-In Sound Library, or Your Own Files
ERIC Educational Resources Information Center
Ball, Edward; Ruiz, Frances; Ruiz, Michael J.
2017-01-01
We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make…
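The oscilloscope's core operation, turning audio samples into a calibrated signal-versus-time trace, can be sketched for a local file (the published app runs in the browser on mic input or uploaded files). The filename is a placeholder, and the snippet assumes 16-bit mono PCM.

    import wave
    import numpy as np
    import matplotlib.pyplot as plt

    with wave.open("tone.wav", "rb") as w:       # placeholder filename
        fs = w.getframerate()
        raw = w.readframes(w.getnframes())
    sig = np.frombuffer(raw, dtype=np.int16).astype(float)  # 16-bit mono
    sig /= np.abs(sig).max()                     # normalize amplitude
    t = np.arange(sig.size) / fs                 # calibrated time axis (s)

    plt.plot(t[:2000], sig[:2000])               # first few milliseconds
    plt.xlabel("time (s)")
    plt.ylabel("amplitude")
    plt.show()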
Carmichael, James V
2002-01-01
What do searchers find when they look for literature on homosexuality? This question has profound implications for older as well as younger gays in their coming out, as well as in their subsequent identity development. Library records provide credible data to answer the question, since they represent relatively free sources of information, unlike data from bookstores, publishers, and some World Wide Web sites. The records of WorldCat, the world's largest union database of library records, comprise over 30 million records listed in the Online Computer Library Center. For the purposes of the study, 18,757 records listed under "Homosexuality," "Gay Men," and "Gays" were downloaded; records for "Lesbian" and "Lesbians" were not examined. Findings of the study suggest that while there has indeed been considerable growth in terms of the quantity of gay literature produced since 1969, such gains may be offset by the deteriorating quality of cataloging copy, which makes the experience of browsing records a discouraging and confusing one.
EOSPEC: a complementary toolbox for MODTRAN calculations
NASA Astrophysics Data System (ADS)
Dion, Denis
2016-09-01
For more than a decade, Defence Research and Development Canada (DRDC) has been developing a library of computer models for the calculation of atmospheric effects on EO-IR sensor performance. The library, called EOSPEC-LIB (EO-IR Sensor PErformance Computation LIBrary), has been designed as a complement to MODTRAN, the radiative transfer code developed by the Air Force Research Laboratory and Spectral Science Inc. in the USA. The library comprises modules for the definition of atmospheric conditions, including aerosols, and provides modules for the calculation of turbulence and fine refraction effects. SMART (Suite for Multi-resolution Atmospheric Radiative Transfer), a key component of EOSPEC, allows one to perform fast computations of transmittances and radiances using MODTRAN through a wide-band correlated-k computational approach. In its most recent version, EOSPEC includes a MODTRAN toolbox whose functions help generate, in a format compatible with MODTRAN 5 and 6, atmospheric and aerosol profiles, user-defined refracted optical paths, and inputs for configuring the MODTRAN sea radiance (BRDF) model. The paper gives an overall description of the EOSPEC features and capacities. EOSPEC provides augmented capabilities for computations in the lower atmosphere and for computations in maritime environments.
Chan, Conrad E Z; Chan, Annie H Y; Lim, Angeline P C; Hanson, Brendon J
2011-10-28
Rapid development of diagnostic immunoassays against novel emerging or genetically modified pathogens in an emergency situation is dependent on the timely isolation of specific antibodies. Non-immune antibody phage display libraries are an efficient in vitro method for selecting monoclonal antibodies and hence ideal in these circumstances. Such libraries can be constructed from a variety of sources e.g. B cell cDNA or synthetically generated, and use a variety of antibody formats, typically scFv or Fab. However, antibody source and format can impact on the quality of antibodies generated and hence the effectiveness of this methodology for the timely production of antibodies. We have carried out a comparative screening of two antibody libraries, a semi-synthetic scFv library and a human-derived Fab library against the protective antigen toxin component of Bacillus anthracis and the epsilon toxin of Clostridium botulinum. We have shown that while the synthetic library produced a diverse collection of specific scFv-phage, these contained a high frequency of unnatural amber stops and glycosylation sites which limited their conversion to IgG, and also a high number which lost specificity when expressed as IgG. In contrast, these limitations were overcome by the use of a natural human library. Antibodies from both libraries could be used to develop sandwich ELISA assays with similar sensitivity. However, the ease and speed with which full-length IgG could be generated from the human-derived Fab library makes screening this type of library the preferable method for rapid antibody generation for diagnostic assay development. Copyright © 2011 Elsevier B.V. All rights reserved.
Bell, Andrew S; Bradley, Joseph; Everett, Jeremy R; Knight, Michelle; Loesel, Jens; Mathias, John; McLoughlin, David; Mills, James; Sharp, Robert E; Williams, Christine; Wood, Terence P
2013-05-01
The screening files of many large companies, including Pfizer, have grown considerably due to internal chemistry efforts, company mergers and acquisitions, external contracted synthesis, or compound purchase schemes. In order to screen the targets of interest in a cost-effective fashion, we devised an easy-to-assemble, plate-based diversity subset (PBDS) that represents almost the entire computed chemical space of the screening file whilst comprising only a fraction of the plates in the collection. In order to create this file, we developed new design principles for the quality assessment of screening plates: the Rule of 40 (Ro40) and a plate selection process that ensured excellent coverage of both library chemistry and legacy chemistry space. This paper describes the rationale, design, construction, and performance of the PBDS, which has evolved into the standard paradigm for singleton (one compound per well) high-throughput screening in Pfizer since its introduction in 2006.
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in a heterogeneous, distributed real-world and testbed environment. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
Toward performance portability of the Albany finite element analysis code using the Kokkos library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
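The application-level checkpoint/restart pattern that SCR accelerates is sketched below in plain Python: the state is written atomically to a node-local path and reloaded on restart. This is a generic illustration, not the SCR API (SCR's actual C interface differs), and the path and state contents are placeholders.

    import os, pickle

    CKPT_DIR = "/tmp/ckpt"                 # stand-in for node-local storage

    def save_checkpoint(step, state):
        os.makedirs(CKPT_DIR, exist_ok=True)
        tmp = os.path.join(CKPT_DIR, "ckpt.tmp")
        with open(tmp, "wb") as f:
            pickle.dump({"step": step, "state": state}, f)
        os.replace(tmp, os.path.join(CKPT_DIR, "ckpt.pkl"))  # atomic swap

    def load_checkpoint():
        try:
            with open(os.path.join(CKPT_DIR, "ckpt.pkl"), "rb") as f:
                d = pickle.load(f)
            return d["step"], d["state"]
        except FileNotFoundError:
            return 0, {"x": 0.0}           # fresh start

    step, state = load_checkpoint()        # resume if a checkpoint exists
    for step in range(step, 100):
        state["x"] += 1.0                  # stand-in computation
        if step % 10 == 0:
            save_checkpoint(step, state)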
Raster graphics display library
NASA Technical Reports Server (NTRS)
Grimsrud, Anders; Stephenson, Michael B.
1987-01-01
The Raster Graphics Display Library (RGDL) is a high-level subroutine package that provides the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to be used as stand-alone routines in a black-box type of environment. Six examples are presented which teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference on the include files that are necessary to compile the display library is included; each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general-purpose computer graphics display system that uses RGDL software, is also included.
MyTeachingPartner-Secondary. What Works Clearinghouse Intervention Report [Revised
ERIC Educational Resources Information Center
What Works Clearinghouse, 2015
2015-01-01
MyTeachingPartner-Secondary (MTP-S) is a professional development program that aims to increase student learning and development through improved teacher-student interactions. Through the program, middle and high school teachers access a video library featuring examples of high-quality interactions and receive individualized, web-based coaching…
Computational scalability of large size image dissemination
NASA Astrophysics Data System (ADS)
Kooper, Rob; Bajcsy, Peter
2011-01-01
We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective to mean either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
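A minimal image-pyramid builder of the kind benchmarked above can be written with Pillow: each level halves the previous one until a single pixel remains. The filenames and resampling filter are illustrative choices.

    from PIL import Image

    def build_pyramid(path, out_prefix="level"):
        img = Image.open(path)
        level = 0
        while True:
            img.save(f"{out_prefix}_{level}.jpg")   # one file per level
            if min(img.size) == 1:
                break
            # halve each dimension for the next coarser level
            img = img.resize((max(1, img.width // 2),
                              max(1, img.height // 2)), Image.LANCZOS)
            level += 1
        return level + 1

    # build_pyramid("map_17591x15014.jpg")  # ~15 levels for the test map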
20180312 - Mechanistic Modeling of Developmental Defects through Computational Embryology (SOT)
Significant advances in the genome sciences, in automated high-throughput screening (HTS), and in alternative methods for testing enable rapid profiling of chemical libraries for quantitative effects on diverse cellular activities. While a surfeit of HTS data and information is n...
Defining Usability: How Library Practice Differs from Published Research
ERIC Educational Resources Information Center
Chen, Yu-Hui; Germain, Carol Anne; Rorissa, Abebe
2011-01-01
Library/information science professionals need a clearly articulated definition of usability/Web usability to implement intuitive websites. In this study, the authors analyzed usability definitions provided by the ARL library professionals and those found in the library/information science and computer science-information systems literature.…
Interlibrary Lending with Computerized Union Catalogues.
ERIC Educational Resources Information Center
Lehmann, Klaus-Dieter
Interlibrary loans in the Federal Republic of Germany are facilitated by applying techniques of data processing and computer output microfilm (COM) to the union catalogs of the national library system. The German library system consists of two national libraries, four central specialized libraries of technology, medicine, agriculture, and…
Technostress in Libraries: Causes, Effects and Solutions.
ERIC Educational Resources Information Center
Bichteler, Julie
1987-01-01
Examines some of the fears, frustrations, and misconceptions of library staff and patrons that hamper the effective use of computers in libraries. Strategies that library administrators could use to alleviate stress are outlined, including staff participation in the automation process, well-designed workstations, and adequate training for staff…
The computational structural mechanics testbed procedures manual
NASA Technical Reports Server (NTRS)
Stewart, Caroline B. (Compiler)
1991-01-01
The purpose of this manual is to document the standard high level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure including its function, commands, data interface, and use is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis in the CSM Testbed User's Manual and the CSM Testbed Data Library Description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allada, Veerendra, Benjegerdes, Troy; Bode, Brett
Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance per price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the BAsic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, we study the performance of the memory copies and GEMM subroutines that are critical to port computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results have been compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
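The two measurements the study makes, host-to-device copy cost and GEMM throughput, can be sketched with CuPy (the paper used NetPIPE and CUBLAS directly; the matrix size here is an illustrative assumption, and a CUDA-capable GPU is required).

    import time
    import numpy as np
    import cupy as cp

    n = 4096
    a_host = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    a_dev = cp.asarray(a_host)             # host -> device copy
    cp.cuda.Device().synchronize()
    t_copy = time.perf_counter() - t0

    t0 = time.perf_counter()
    c_dev = cp.matmul(a_dev, a_dev)        # SGEMM on the GPU (cuBLAS-backed)
    cp.cuda.Device().synchronize()
    t_gemm = time.perf_counter() - t0

    gflops = 2 * n**3 / t_gemm / 1e9
    print(f"copy: {t_copy*1e3:.1f} ms  gemm: {t_gemm*1e3:.1f} ms  "
          f"{gflops:.0f} GFLOP/s")

If the copy time dominates the GEMM time for a given problem size, the data transfer cannot be amortized, which is exactly the trade-off the paper quantifies.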
The Future of Catalogers and Cataloging.
ERIC Educational Resources Information Center
Holley, Robert P.
1981-01-01
Future emphasis in cataloging will be on the sharing of high quality bibliographic records through a national network. As original cataloging decreases, catalogers, rather than disappearing, will more likely be managers of the library's bibliographic control system. (Author/RAA)
Produced by the world's largest medical library, MedlinePlus offers high-quality, up-to-date health information on over 1000 different diseases, conditions, and wellness issues. With over 1 million daily visitors, MedlinePlus provides ...
NASA Astrophysics Data System (ADS)
Sands, Michelle M.; Borrego, David; Maynard, Matthew R.; Bahadori, Amir A.; Bolch, Wesley E.
2017-11-01
One of the hazards faced by space crew members in low-Earth orbit or in deep space is exposure to ionizing radiation. It has been shown previously that while differences in organ-specific and whole-body risk estimates due to body size variations are small for highly-penetrating galactic cosmic rays, large differences in these quantities can result from exposure to shorter-range trapped proton or solar particle event radiations. For this reason, it is desirable to use morphometrically accurate computational phantoms representing each astronaut for a risk analysis, especially in the case of a solar particle event. An algorithm was developed to automatically sculpt and scale the UF adult male and adult female hybrid reference phantom to the individual outer body contour of a given astronaut. This process begins with the creation of a laser-measured polygon mesh model of the astronaut's body contour. Using the auto-scaling program and selecting several anatomical landmarks, the UF adult male or female phantom is adjusted to match the laser-measured outer body contour of the astronaut. A dosimetry comparison study was conducted to compare the organ dose accuracy of both the autoscaled phantom and that based upon a height-weight matched phantom from the UF/NCI Computational Phantom Library. Monte Carlo methods were used to simulate the environment of the August 1972 and February 1956 solar particle events. Using a series of individual-specific voxel phantoms as a local benchmark standard, autoscaled phantom organ dose estimates were shown to provide a 1% and 10% improvement in organ dose accuracy for a population of females and males, respectively, as compared to organ doses derived from height-weight matched phantoms from the UF/NCI Computational Phantom Library. In addition, this slight improvement in organ dose accuracy from the autoscaled phantoms is accompanied by reduced computer storage requirements and a more rapid method for individualized phantom generation when compared to the UF/NCI Computational Phantom Library.
A universal phage display system for the seamless construction of Fab libraries.
Nelson, Renae S; Valadon, Philippe
2017-11-01
The construction of Fab phage libraries requires the cloning of domains from both the light and the heavy chain of antibodies. Despite the advent of powerful strategies such as splicing-by-overlap extension PCR, obtaining high quality libraries with excellent coverage remains challenging. Here, we explored the use of type IIS restriction enzymes for the seamless cloning of Fab libraries. We analyzed human, murine and rabbit germline antibody repertoires and identified combinations of restriction enzymes that exhibit very few or no recognition sites in the antibody sequences. We describe three phagemid vectors, pUP-22Hb, pUP-22Mc and pUP-22Rc, which were employed for cloning the Fab repertoire of these hosts using BsmBI and SapI (human) or SapI alone (mouse and rabbit). Using human serum albumin as a model immunization, we built a mouse/human chimeric Fab library and a mouse Fab library in a single step ligation and successfully panned multiple cognate antibodies. The overall process is highly scalable and faster than PCR-based techniques, with a Fab insertion success rate of around 80%. By using carefully chosen overhangs on each end of the antibody domains, this approach paves the way to the universal, sequence- and vector-independent cloning and reformatting of antibody libraries. Copyright © 2017 Elsevier B.V. All rights reserved.
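The site-scanning step behind the enzyme choice, counting type IIS recognition sites on both strands of candidate antibody sequences, is easy to sketch in plain Python. The recognition sequences below (BsmBI: CGTCTC, SapI: GCTCTTC) are as commonly listed for these enzymes, and the sample VH fragment is a made-up stand-in.

    SITES = {"BsmBI": "CGTCTC", "SapI": "GCTCTTC"}
    COMP = str.maketrans("ACGT", "TGCA")

    def revcomp(seq):
        return seq.translate(COMP)[::-1]

    def count_sites(seq, site):
        # Count occurrences of the recognition site on both strands.
        seq = seq.upper()
        return sum(seq.count(s) for s in (site, revcomp(site)))

    vh = "GAGGTGCAGCTGGTGGAGTCTGGGGGAGGCTTGGTACAGCCTGGCAGGTCC"
    for name, site in SITES.items():
        print(name, count_sites(vh, site))   # 0 hits -> safe for cloning

Running such a scan over an entire germline repertoire is what identifies enzyme combinations with few or no internal sites, the paper's selection criterion.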
FRAGSION: ultra-fast protein fragment library generation by IOHMM sampling.
Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-07-01
Speed, accuracy and robustness of building a protein fragment library have important implications in de novo protein structure prediction, since fragment-based methods are one of the most successful approaches in template-free modeling (FM). The majority of existing fragment detection methods rely on database-driven search strategies to identify candidate fragments, which are inherently time-consuming and often hinder the possibility of locating longer fragments due to the limited sizes of databases. Also, it is difficult to alleviate the effect of noisy sequence-based predicted features such as secondary structures on the quality of fragments. Here, we present FRAGSION, a database-free method to efficiently generate a protein fragment library by sampling from an Input-Output Hidden Markov Model. FRAGSION offers some unique features compared to existing approaches in that it (i) is lightning-fast, consuming only a few seconds of CPU time to generate a fragment library for a protein of typical length (300 residues); (ii) can generate dynamic-size fragments of any length (even for the whole protein sequence) and (iii) offers ways to handle noise in predicted secondary structure during fragment sampling. On a FM dataset from the most recent Critical Assessment of Structure Prediction, we demonstrate that FRAGSION provides advantages over the state-of-the-art fragment picking protocol of the ROSETTA suite by speeding up computation by several orders of magnitude while achieving comparable performance in fragment quality. Source code and executable versions of FRAGSION for Linux and MacOS are freely available to non-commercial users at http://sysbio.rnet.missouri.edu/FRAGSION/ It is bundled with a manual and example data. chengji@missouri.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
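The essence of input-output HMM sampling, an input symbol at each position selecting which transition matrix drives the hidden state, can be shown with a heavily simplified toy. Everything below (three hidden states, random transition matrices, H/E/C inputs) is invented for illustration and is not FRAGSION's model.

    import numpy as np

    rng = np.random.default_rng(2)
    n_states = 3                               # toy conformational states
    # one row-stochastic transition matrix per input symbol H/E/C
    T = {ss: rng.dirichlet(np.ones(n_states), size=n_states)
         for ss in "HEC"}

    def sample_fragment(ss_string, start=0):
        # Walk the chain, letting the predicted secondary structure at
        # each residue choose the transition matrix (the "input" in IOHMM).
        state, path = start, []
        for ss in ss_string:
            state = rng.choice(n_states, p=T[ss][state])
            path.append(state)
        return path

    print(sample_fragment("HHHHCCEEE"))        # one sampled 9-residue fragment

Because sampling needs no database lookup, each fragment costs only a chain walk, which is why the database-free approach is so fast.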
Automatic Publishing of Library Bulletins.
ERIC Educational Resources Information Center
Inbal, Moshe
1980-01-01
Describes the use of a computer to publish library bulletins that list recent accessions of technical reports according to the subject classification scheme of NTIS/SRIM (National Technical Information Service's Scientific Reports in Microfiche). The codes file, the four computer program functions, and costs/economy are discussed. (JD)
Protecting Public-Access Computers in Libraries.
ERIC Educational Resources Information Center
King, Monica
1999-01-01
Describes one public library's development of a computer-security plan, along with helpful products used. Discussion includes Internet policy, physical protection of hardware, basic protection of the operating system and software on the network, browser dilemmas and maintenance, creating clear intuitive interface, and administering fair use and…
Quantum supercharger library: hyper-parallelism of the Hartree-Fock method.
Fernandes, Kyle D; Renison, C Alicia; Naidoo, Kevin J
2015-07-05
We present here a set of algorithms that completely rewrites the Hartree-Fock (HF) computations common to many legacy electronic structure packages (such as GAMESS-US, GAMESS-UK, and NWChem) into a massively parallel compute scheme that takes advantage of hardware accelerators such as Graphical Processing Units (GPUs). The HF compute algorithm is core to a library of routines that we name the Quantum Supercharger Library (QSL). We briefly evaluate the QSL's performance and report that it accelerates a HF 6-31G Self-Consistent Field (SCF) computation by up to 20 times for medium sized molecules (such as a buckyball) when compared with mature Central Processing Unit algorithms available in the legacy codes in regular use by researchers. It achieves this acceleration by massive parallelization of the one- and two-electron integrals and optimization of the SCF and Direct Inversion in the Iterative Subspace routines through the use of GPU linear algebra libraries. © 2015 Wiley Periodicals, Inc.
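The control flow being accelerated, the SCF fixed-point iteration, looks like the skeleton below. The matrices here are random symmetric stand-ins rather than real one- and two-electron integrals, and simple damping stands in for DIIS, so this only shows the loop structure, not QSL's implementation.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_occ = 10, 3
    H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # fake core Hamiltonian
    G = rng.standard_normal((n, n)); G = (G + G.T) / 2   # fake two-electron part

    D = np.zeros((n, n))
    converged = False
    for it in range(100):
        F = H + 0.1 * G * np.trace(D)        # toy Fock build; the real 2e-
                                             # step is what the GPU parallelizes
        eps, C = np.linalg.eigh(F)           # dense linear algebra (GPU in QSL)
        D_new = C[:, :n_occ] @ C[:, :n_occ].T
        if np.abs(D_new - D).max() < 1e-8:   # self-consistency check
            converged = True
            break
        D = 0.5 * D + 0.5 * D_new            # damping in lieu of DIIS
    print("converged:", converged, "after", it + 1, "iterations")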
Public Libraries: Responding to Demand.
ERIC Educational Resources Information Center
Annichiarico, Mark; And Others
1993-01-01
Discussion of problems library wholesalers/distributors face trying to fulfill public libraries' needs while adjusting to a changing industry is based on responses by librarians to a survey on library jobbers. Increased services to libraries, electronic ordering, timeliness, stock management, and quality control are addressed; and a chart of…
2011-01-01
Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303
Breaking the computational barriers of pairwise genome comparison.
Torreno, Oscar; Trelles, Oswaldo
2015-08-11
Conventional pairwise sequence comparison software algorithms are being used to process much larger datasets than they were originally designed for. This can result in processing bottlenecks that limit software capabilities or prevent full use of the available hardware resources. Overcoming the barriers that limit the efficient computational analysis of large biological sequence datasets, by retrofitting existing algorithms or by creating new applications, represents a major challenge for the bioinformatics community. We have developed C libraries for pairwise sequence comparison within diverse architectures, ranging from commodity systems to high performance and cloud computing environments. Exhaustive tests were performed using different datasets of closely- and distantly-related sequences that span from small viral genomes to large mammalian chromosomes. The tests demonstrated that our solution is capable of generating high quality results with a linear-time response and controlled memory consumption, being comparable to or faster than the current state-of-the-art methods. We have addressed the problem of pairwise and all-versus-all comparison of large sequences in general, greatly increasing the limits on input data size. The approach described here is based on a modular out-of-core strategy that uses secondary storage to avoid reaching memory limits during the identification of High-scoring Segment Pairs (HSPs) between the sequences under comparison. Software engineering concepts were applied to avoid intermediate result re-calculation, to minimise the performance impact of input/output (I/O) operations and to modularise the process, thus enhancing application flexibility and extendibility. Our computationally-efficient approach allows tasks such as the massive comparison of complete genomes, evolutionary event detection, the identification of conserved synteny blocks and inter-genome distance calculations to be performed more effectively.
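The out-of-core idea can be sketched as follows: stream one sequence in fixed-size chunks, index its k-mers, and spill candidate seed hits to disk instead of holding everything in memory. The chunk size, k, and file layout are illustrative, and the HSP extension/scoring stage of the real pipeline is omitted.

    import pickle
    from collections import defaultdict

    K, CHUNK = 12, 1_000_000

    def index_chunks(path):
        # Yield (offset, kmer -> positions) for CHUNK-sized slices, so
        # memory use is bounded by the chunk, not the genome.
        with open(path) as f:
            seq = f.read().strip()
        for start in range(0, len(seq), CHUNK):
            chunk = seq[start:start + CHUNK]
            idx = defaultdict(list)
            for i in range(len(chunk) - K + 1):
                idx[chunk[i:i + K]].append(start + i)
            yield start, idx

    def find_hits(query_path, subject_path, out_path="hits.bin"):
        with open(query_path) as f:
            q = f.read().strip()
        with open(out_path, "wb") as out:
            for off, idx in index_chunks(subject_path):
                hits = [(i, j) for i in range(len(q) - K + 1)
                        for j in idx.get(q[i:i + K], ())]
                pickle.dump(hits, out)         # spill to secondary storage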
Mora-Castilla, Sergio; To, Cuong; Vaezeslami, Soheila; Morey, Robert; Srinivasan, Srimeenakshi; Dumdie, Jennifer N; Cook-Andersen, Heidi; Jenkins, Joby; Laurent, Louise C
2016-08-01
As the cost of next-generation sequencing has decreased, library preparation costs have become a more significant proportion of the total cost, especially for high-throughput applications such as single-cell RNA profiling. Here, we have applied novel technologies to scale down reaction volumes for library preparation. Our system consisted of in vitro differentiated human embryonic stem cells representing two stages of pancreatic differentiation, for which we prepared multiple biological and technical replicates. We used the Fluidigm (San Francisco, CA) C1 single-cell Autoprep System for single-cell complementary DNA (cDNA) generation and an enzyme-based tagmentation system (Nextera XT; Illumina, San Diego, CA) with a nanoliter liquid handler (mosquito HTS; TTP Labtech, Royston, UK) for library preparation, reducing the reaction volume down to 2 µL and using as little as 20 pg of input cDNA. The resulting sequencing data were bioinformatically analyzed and correlated among the different library reaction volumes. Our results showed that decreasing the reaction volume did not interfere with the quality or the reproducibility of the sequencing data, and the transcriptional data from the scaled-down libraries allowed us to distinguish between single cells. Thus, we have developed a process to enable efficient and cost-effective high-throughput single-cell transcriptome sequencing. © 2016 Society for Laboratory Automation and Screening.
GALARIO: a GPU accelerated library for analysing radio interferometer observations
NASA Astrophysics Data System (ADS)
Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo
2018-06-01
We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array (ALMA) or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular and easy to use and adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, both of which have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be used directly in C or C++. The versatility and speed of GALARIO open new analysis pathways that would otherwise be prohibitively time consuming, e.g. fitting high-resolution observations of large numbers of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open-source GNU Lesser General Public License v3.
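A brief usage sketch of GALARIO's Python bindings, assuming the documented sampleImage entry point in the double-precision module; the model image, pixel scale, and baseline values below are placeholders chosen for illustration.

```python
# Compute synthetic visibilities of a 2D model image at observed (u, v) points.
import numpy as np
from galario.double import sampleImage  # double-precision CPU/GPU backend

nxy, dxy = 1024, 1e-7         # image size (pixels) and pixel scale (rad)
image = np.zeros((nxy, nxy))  # toy model: a single bright central pixel
image[nxy // 2, nxy // 2] = 1.0

u = np.linspace(1e4, 1e6, 500)  # baselines, in units of the observing wavelength
v = np.linspace(1e4, 1e6, 500)

vis = sampleImage(image, dxy, u, v)  # complex visibilities at each (u, v)
print(vis.shape, vis.dtype)
```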
The Case for Quality Book Selection.
ERIC Educational Resources Information Center
Bob, Murray C.
1982-01-01
This essay on library book selection critiques Nora Rawlinson's article on practices at the Baltimore County Public Library, which appeared in Library Journal, November 15, 1981, p. 2188, and discusses library circulation statistics in relation to book selection. (EJS)
Student project of optical system analysis API-library development
NASA Astrophysics Data System (ADS)
Ivanova, Tatiana; Zhukova, Tatiana; Dantcaranov, Ruslan; Romanova, Maria; Zhadin, Alexander; Ivanov, Vyacheslav; Kalinkina, Olga
2017-08-01
In this paper, an API library developed by students of the Applied and Computer Optics Department (ITMO University) for optical system design is presented. The library performs paraxial and real ray tracing; calculates third-order (Seidel) aberrations and real-ray aberrations of axial and off-axis beams (wave, lateral, longitudinal, coma, distortion, etc.); and, finally, approximates the wave aberration by Zernike polynomials. The real aperture can be determined by detecting real-ray tracing failures on each surface. So far we assume the optical system is centered, with spherical or second-order aspherical surfaces. Optical glasses can be specified directly by refractive index or by dispersion coefficients. The library can be used for education or research in optical system design. It provides ready-to-use software functions for optical system simulation and analysis that developers can simply plug into their own software, for example for specific synthesis tasks or for investigating new optimization modes. In the paper we present an example of using the library to develop cemented-doublet synthesis software based on Slusarev's methodology. The library is also used in a course on optical system optimization for in-depth study of the optimization model and its application to optical system design. Developing such software is excellent experience for students and helps them understand optical image modeling and quality analysis. The development is organized as a joint student group project, structured like a real research and development project: each student has his or her own role and later uses the library's full functionality in his or her own master's or bachelor's thesis. Working in such a group gives students useful experience and the opportunity to work as research and development engineers of scientific software in the future.
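The final step the abstract mentions, approximating the wave aberration by Zernike polynomials, reduces to a linear least-squares fit over the pupil. A minimal Python sketch (not the students' library) using a handful of low-order terms:

```python
# Fit Zernike coefficients to a sampled wavefront by linear least squares.
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike terms (unnormalized)."""
    return np.stack([
        np.ones_like(rho),                        # piston
        rho * np.cos(theta),                      # tilt x
        rho * np.sin(theta),                      # tilt y
        2 * rho**2 - 1,                           # defocus
        rho**2 * np.cos(2 * theta),               # astigmatism
        (3 * rho**3 - 2 * rho) * np.cos(theta),   # coma x
    ], axis=-1)

# Sample the pupil on a grid and keep points inside the unit circle.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0

# Toy wavefront: defocus plus a little coma, with measurement noise.
w = 0.5 * (2 * rho**2 - 1) + 0.1 * (3 * rho**3 - 2 * rho) * np.cos(theta)
w += 0.01 * np.random.default_rng(0).standard_normal(w.shape)

A = zernike_basis(rho[mask], theta[mask])            # design matrix
coeffs, *_ = np.linalg.lstsq(A, w[mask], rcond=None)
print(np.round(coeffs, 3))  # recovers ~[0, 0, 0, 0.5, 0, 0.1]
```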
Amber Plug-In for Protein Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliva, Ricardo
2004-05-10
The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute protein energy models, and a module that solves the energy minimization problem using an optimization algorithm from the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library for computing molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", which initializes the internal variables needed to compute the energy, and (2) "eval", which evaluates the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded 1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++, which is open-source software available under the GNU Lesser General Public License. The minimization module currently uses the L-BFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.
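The init/eval interface described above can be sketched in a few lines. The following Python mock (the real AmberEngine is a compiled object-oriented library) shows only the interface shape, with a toy harmonic bond term standing in for the full Amber energy model:

```python
# Mock of a two-method energy interface: init stores topology, eval scores coordinates.
import numpy as np

class ProteinEnergy:
    def init(self, bonds, k=300.0, r0=1.5):
        """Store topology and force-field constants before evaluation."""
        self.bonds = bonds       # list of (atom_i, atom_j) index pairs
        self.k, self.r0 = k, r0  # force constant and equilibrium bond length

    def eval(self, coords):
        """Total energy for an (n_atoms, 3) coordinate array.
        Only the harmonic bond component is modeled in this toy."""
        total = 0.0
        for i, j in self.bonds:
            r = np.linalg.norm(coords[i] - coords[j])
            total += self.k * (r - self.r0) ** 2
        return total

energy = ProteinEnergy()
energy.init(bonds=[(0, 1), (1, 2)])
print(energy.eval(np.array([[0.0, 0, 0], [1.6, 0, 0], [3.1, 0, 0]])))
```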
Quality Assurance in Distance Learning Libraries
ERIC Educational Resources Information Center
Tripathi, Manorama; Jeevan, V. K. J.
2009-01-01
Purpose: The paper aims to study how the present distance learning libraries can improve upon their existing services and introduce new ones to enhance quality of services to distance learners. Design/methodology/approach: The paper includes a review of literature on quality assurance in open and distance education in general and student support…
Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane
2003-01-01
This poster describes the development of user-centered interfaces to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from a library to a web-based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.
Postprocessing of Voxel-Based Topologies for Additive Manufacturing Using the Computational Geometry Algorithms Library (CGAL)
2015-06-01
Postprocessing of 3-dimensional (3-D) topologies that are defined as a set of voxels using the Computational Geometry Algorithms Library (CGAL) is described. CGAL provides computational geometry algorithms, several of which are suited to the task. The work flow described in this report involves first defining a set of…
Bietz, Stefan; Inhester, Therese; Lauck, Florian; Sommer, Kai; von Behren, Mathias M; Fährrolfes, Rainer; Flachsenberg, Florian; Meyder, Agnes; Nittinger, Eva; Otto, Thomas; Hilbig, Matthias; Schomburg, Karen T; Volkamer, Andrea; Rarey, Matthias
2017-11-10
Nowadays, computational approaches are an integral part of life science research. Problems related to interpretation of experimental results, data analysis, or visualization tasks highly benefit from the achievements of the digital era. Simulation methods facilitate predictions of physicochemical properties and can assist in understanding macromolecular phenomena. Here, we will give an overview of the methods developed in our group that aim at supporting researchers from all life science areas. Based on state-of-the-art approaches from structural bioinformatics and cheminformatics, we provide software covering a wide range of research questions. Our all-in-one web service platform ProteinsPlus (http://proteins.plus) offers solutions for pocket and druggability prediction, hydrogen placement, structure quality assessment, ensemble generation, protein-protein interaction classification, and 2D-interaction visualization. Additionally, we provide a software package that contains tools targeting cheminformatics problems like file format conversion, molecule data set processing, SMARTS editing, fragment space enumeration, and ligand-based virtual screening. Furthermore, it also includes structural bioinformatics solutions for inverse screening, binding site alignment, and searching interaction patterns across structure libraries. The software package is available at http://software.zbh.uni-hamburg.de. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Building a Library Network from Scratch: Eric & Veronica's Excellent Adventure.
ERIC Educational Resources Information Center
Sisler, Eric; Smith, Veronica
2000-01-01
Describes library automation issues during the planning and construction of College Hill Library (Colorado), a joint-use facility shared by a community college and a public library. Discusses computer networks; hardware selection; public access to catalogs and electronic resources; classification schemes and bibliographic data; children's…
Small but Pristine--Lessons for Small Library Automation.
ERIC Educational Resources Information Center
Clement, Russell; Robertson, Dane
1990-01-01
Compares the more positive library automation experiences of a small public library with those of a large research library. Topics addressed include collection size; computer size and the need for outside control of a data processing center; staff size; selection process for hardware and software; and accountability. (LRW)
The Library Macintosh at SCIL [Small Computers in Libraries]'88.
ERIC Educational Resources Information Center
Valauskas, Edward J.; And Others
1988-01-01
The first of three papers describes the role of Macintosh workstations in a library. The second paper explains why the Macintosh was selected for end-user searching in an academic library, and the third discusses advantages and disadvantages of desktop publishing for librarians. (8 references) (MES)
Services to Remote Users: Marketing the Library's Role.
ERIC Educational Resources Information Center
Wolpert, Ann
1998-01-01
Discussion of the impact of distance education on academic libraries focuses on marketing aspects. Topics include the rapid expansion of educational computing; the maturing of higher education; the World Wide Web as competitor to academic libraries; business purposes of academic libraries; distance education strategies; servicing market segments;…
The Evolving Virtual Library: Visions and Case Studies.
ERIC Educational Resources Information Center
Saunders, Laverna M., Ed.
This book addresses many of the practical issues involved in developing the virtual library. Seven presentations from the Eighth Annual Computers in Libraries Conference are included in this book in augmented form. The papers are supplemented by "The Evolving Virtual Library: An Overview" (Laverna M. Saunders and Maurice Mitchell), a…
Library Automation: A "First Course" Teaching Syllabus.
ERIC Educational Resources Information Center
Dyson, Sam A.
This syllabus for a basic course in library automation is designed for advanced library students and practicing librarians. It is intended not to make librarians and students qualified programmers, but to give them enough background information for intelligent discussion of library problems with computer personnel. It may also stimulate the…
Are We Ready for the Virtual Library? Technology Push, Market Pull and Organisational Response.
ERIC Educational Resources Information Center
Gilbert, J. D.
1993-01-01
Discusses virtual libraries, i.e., library services available to users via personal computers; considers the issues of technological development, user demands, and organizational response; and describes progress toward virtual libraries in the Netherlands, including networks, online systems, navigation tools, subject classification, coordination…
ERIC Educational Resources Information Center
MacCabe, Bruce
The Literacy Learning Center Project, a project of the Meriden Public Library (Connecticut), targeted the educationally underserved and functionally illiterate, and involved recruitment, retention, space renovation, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted services, and…
Library Media Learning and Play Center.
ERIC Educational Resources Information Center
Faber, Therese; And Others
Preschool educators developed a library media learning and play center to enable children to "experience" a library; establish positive attitudes about the library; and encourage respect for self, others, and property. The center had the following areas: check-in and check-out desk, quiet reading section, computer center, listening center, video…
Macintoshed Libraries 4. Fourth Edition.
ERIC Educational Resources Information Center
Valauskas, Edward J., Ed.; Vaccaro, Bill, Ed.
This annual collection contains the following 14 papers about the use of Macintosh computers in libraries: "Of Mice and Macs: The Integration of the Macintosh into the Operations and Services of the University of Tennessee, Memphis Health Science Library" (Lois M. Bellamy); "Networking Reference CD-Roms in the Apple Library"…
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.
Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools to handle high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which cover the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least-squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high- and low-resolution MS data as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
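The core least-squares resolution step lends itself to a short illustration: express an observed spectrum region as a combination of theoretical isotope distributions from the compound library. The sketch below uses a non-negative least-squares solver, one reasonable choice; the two-species patterns are made-up stand-ins for library entries, not the paper's data.

```python
# Resolve overlapping isotope patterns by (non-negative) least squares.
import numpy as np
from scipy.optimize import nnls

# Theoretical isotope distributions (columns), aligned to the same m/z bins.
library = np.array([
    [0.60, 0.00],
    [0.25, 0.55],   # the second species overlaps the first one's M+1 peak
    [0.10, 0.30],
    [0.05, 0.15],
])

observed = 3.0 * library[:, 0] + 1.5 * library[:, 1]  # mixed spectrum
amounts, residual = nnls(library, observed)
print(amounts)  # ~[3.0, 1.5]: abundance of each candidate lipid species
```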
Biomathematical Description of Synthetic Peptide Libraries
Trepel, Martin
2015-01-01
Libraries of randomised peptides displayed on phages or viral particles are essential tools in a wide spectrum of applications. However, there is only limited understanding of a library's fundamental dynamics and of the influence of encoding schemes and library sizes on quality. Numeric properties of libraries, such as the expected number of different peptides and the library's coverage, have long been in use as measures of a library's quality. Here, we present a graphical framework of these measures, together with a library's relative efficiency, to help describe libraries in enough detail for researchers to plan new experiments in a more informed manner. In particular, these values allow us to answer, in a probabilistic fashion, the question of whether a specific library does indeed contain one of the "best" possible peptides. The framework is implemented in a web interface based on two packages for the statistical software environment R, discreteRV and peptider. We further provide a user-friendly web interface called PeLiCa (Peptide Library Calculator, http://www.pelica.org), allowing scientists to plan and analyse their peptide libraries. PMID:26042419
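Under the simplifying assumption of uniform sampling (real encoding schemes such as NNK are biased, which the framework above is designed to account for), the expected number of distinct peptides in a library of N clones drawn from V possible peptides is V(1 - (1 - 1/V)^N), and coverage is that count divided by V. A small numeric companion:

```python
# Expected distinct peptides and coverage under uniform sampling.
def expected_distinct(v: float, n: float) -> float:
    """Expected number of distinct peptides when n clones are drawn
    uniformly at random from v equally likely peptide sequences."""
    return v * (1.0 - (1.0 - 1.0 / v) ** n)

v = 20 ** 7                  # 7-mer peptides over the 20 amino acids
for n in (1e8, 1e9, 1e10):   # library sizes (number of clones)
    d = expected_distinct(v, n)
    print(f"N={n:.0e}: {d:.3e} distinct peptides, coverage={d / v:.1%}")
```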