Sample records for reduced representation libraries

  1. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles. All the range images are then converted into slice images, and clustering analysis is used to select representative slice images, reducing the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and the recognition result is determined by this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and the slice image representation method is compared with the moment invariants representation method. The experimental results show that, both without noise and in the presence of ladar noise, the system achieves a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  2. Technical Considerations for Reduced Representation Bisulfite Sequencing with Multiplexed Libraries

    PubMed Central

    Chatterjee, Aniruddha; Rodger, Euan J.; Stockwell, Peter A.; Weeks, Robert J.; Morison, Ian M.

    2012-01-01

    Reduced representation bisulfite sequencing (RRBS), which couples bisulfite conversion and next generation sequencing, is an innovative method that specifically enriches genomic regions with a high density of potential methylation sites and enables investigation of DNA methylation at single-nucleotide resolution. Recent advances in the Illumina DNA sample preparation protocol and sequencing technology have vastly improved sequencing throughput capacity. Although the new Illumina technology is now widely used, the unique challenges associated with multiplexed RRBS libraries on this platform have not been previously described. We have made modifications to the RRBS library preparation protocol to sequence multiplexed libraries on a single flow-cell lane of the Illumina HiSeq 2000. Furthermore, our analysis incorporates a bioinformatics pipeline specifically designed to process bisulfite-converted sequencing reads and evaluate the output and quality of the sequencing data generated from the multiplexed libraries. We obtained an average of 42 million paired-end reads per sample for each flow-cell lane, with a high unique mapping efficiency to the reference human genome. Here we provide a roadmap of modifications, strategies, and troubleshooting approaches we implemented to optimize sequencing of multiplexed libraries on an RRBS background. PMID:23193365

  3. A new single-nucleotide polymorphism database for rainbow trout generated through whole genome re-sequencing

    USDA-ARS's Scientific Manuscript database

    Single-nucleotide polymorphisms (SNPs) are highly abundant markers, which are broadly distributed in animal genomes. For rainbow trout, SNP discovery has been done through sequencing of restriction-site associated DNA (RAD) libraries, reduced representation libraries (RRL), RNA sequencing, and whole...

  4. A new single-nucleotide polymorphism database for rainbow trout generated through whole genome resequencing of selected samples

    USDA-ARS's Scientific Manuscript database

    Single-nucleotide polymorphisms (SNPs) are highly abundant markers, which are broadly distributed in animal genomes. For rainbow trout, SNP discovery has been done through sequencing of restriction-site associated DNA (RAD) libraries, reduced representation libraries (RRL), RNA sequencing, and whole...

  5. A new strategy for genome assembly using short sequence reads and reduced representation libraries.

    PubMed

    Young, Andrew L; Abaan, Hatice Ozel; Zerbino, Daniel; Mullikin, James C; Birney, Ewan; Margulies, Elliott H

    2010-02-01

    We have developed a novel approach for using massively parallel short-read sequencing to generate fast and inexpensive de novo genomic assemblies comparable to those generated by capillary-based methods. The ultrashort (<100 base) sequences generated by this technology pose specific biological and computational challenges for de novo assembly of large genomes. To account for this, we devised a method for experimentally partitioning the genome using reduced representation (RR) libraries prior to assembly. We use two restriction enzymes independently to create a series of overlapping fragment libraries, each containing a tractable subset of the genome. Together, these libraries allow us to reassemble the entire genome without the need for a reference sequence. As proof of concept, we applied this approach to sequence and assemble the majority of the 125-Mb Drosophila melanogaster genome. We then demonstrate the accuracy of our assembly method with meaningful comparisons against the currently available D. melanogaster reference genome (dm3). The ease of assembly and accuracy for comparative genomics suggest that our approach will scale to future mammalian genome-sequencing efforts, saving both time and money without sacrificing quality.
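
    The two-enzyme partitioning idea can be sketched in a few lines: digesting the same sequence with two different restriction enzymes yields two fragment sets whose boundaries interleave, so fragments from one digest bridge the cut sites of the other. This is a toy in-silico digest; the abstract does not name the enzymes used, so the HaeIII- and Sau3AI-like choices below are illustrative assumptions.

```python
import re

def digest(seq, site, cut_offset):
    """Split seq at every occurrence of a restriction site.

    cut_offset is the position within the site where the enzyme cuts
    (e.g. a HaeIII-like GG^CC cutter has offset 2).
    """
    cuts = [m.start() + cut_offset for m in re.finditer(site, seq)]
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

# Two enzymes with different recognition sites produce fragment sets
# whose boundaries fall at different positions, so a fragment from one
# digest spans (bridges) the cut points of the other.
seq = "AAGGCCTTGATCAAGGCCTT"
frags_a = digest(seq, "GGCC", 2)   # HaeIII-like blunt cutter (GG^CC)
frags_b = digest(seq, "GATC", 0)   # Sau3AI-like cutter (^GATC), hypothetical choice
```

Here the middle fragment of `frags_a` covers the single GATC cut site of the second digest, which is exactly what lets overlapping RR libraries be stitched back together without a reference.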

  6. Single nucleotide polymorphism discovery in rainbow trout by deep sequencing of a reduced representation library.

    PubMed

    Sánchez, Cecilia Castaño; Smith, Timothy P L; Wiedmann, Ralph T; Vallejo, Roger L; Salem, Mohamed; Yao, Jianbo; Rexroad, Caird E

    2009-11-25

    To enhance capabilities for genomic analyses in rainbow trout, such as genomic selection, a large suite of polymorphic markers that are amenable to high-throughput genotyping protocols must be identified. Expressed Sequence Tags (ESTs) have been used for single nucleotide polymorphism (SNP) discovery in salmonids. In those strategies, the salmonid semi-tetraploid genomes often led to assemblies of paralogous sequences and therefore resulted in a high rate of false-positive SNP identification. Sequencing genomic DNA using primers identified from ESTs proved to be an effective but time-consuming method of SNP identification in rainbow trout, and therefore is not suitable for high-throughput SNP discovery. In this study, we employed a high-throughput strategy that used pyrosequencing technology to generate data from a reduced representation library constructed with genomic DNA pooled from 96 unrelated rainbow trout that represent the National Center for Cool and Cold Water Aquaculture (NCCCWA) broodstock population. The reduced representation library consisted of 440 bp fragments resulting from complete digestion with the restriction enzyme HaeIII; sequencing produced 2,000,000 reads, providing an average 6-fold coverage of the estimated 150,000 unique genomic restriction fragments (300,000 fragment ends). Three independent data analyses identified 22,022 to 47,128 putative SNPs on 13,140 to 24,627 independent contigs. A set of 384 putative SNPs, randomly selected from the sets produced by the three analyses, was genotyped on individual fish to determine the validation rate of putative SNPs among analyses, distinguish apparent SNPs that actually represent paralogous loci in the tetraploid genome, examine Mendelian segregation, and place the validated SNPs on the rainbow trout linkage map. Approximately 48% (183) of the putative SNPs were validated; 167 markers were successfully incorporated into the rainbow trout linkage map. In addition, 2% of the sequences from the validated markers were associated with rainbow trout transcripts. The use of reduced representation libraries and pyrosequencing technology proved to be an effective strategy for the discovery of a high number of putative SNPs in rainbow trout; however, modifications to the technique to decrease the false discovery rate resulting from the evolutionarily recent genome duplication would be desirable.
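
    The coverage figure quoted above can be reproduced from the abstract's own numbers: 2,000,000 reads spread over 300,000 fragment ends gives roughly 6-7x coverage, consistent with the stated 6-fold average. A back-of-the-envelope sketch:

```python
# Numbers taken directly from the abstract.
reads = 2_000_000
unique_fragments = 150_000
fragment_ends = 2 * unique_fragments   # each fragment has two sequenceable ends
coverage = reads / fragment_ends       # reads per fragment end
```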

  7. Vector-Based Ground Surface and Object Representation Using Cameras

    DTIC Science & Technology

    2009-12-01

    representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure...Vision API library, and the OpenCV library. Also, the POSIX thread library was utilized to quickly capture the source images from cameras. Both

  8. Mapping the zebrafish brain methylome using reduced representation bisulfite sequencing

    PubMed Central

    Chatterjee, Aniruddha; Ozaki, Yuichi; Stockwell, Peter A; Horsfield, Julia A; Morison, Ian M; Nakagawa, Shinichi

    2013-01-01

    Reduced representation bisulfite sequencing (RRBS) has been used to profile DNA methylation patterns in mammalian genomes such as human, mouse and rat. The methylome of the zebrafish, an important animal model, has not yet been characterized at base-pair resolution using RRBS. Therefore, we evaluated the technique of RRBS in this model organism by generating four single-nucleotide resolution DNA methylomes of adult zebrafish brain. We performed several simulations to show the distribution of fragments and enrichment of CpGs in different in silico reduced representation genomes of zebrafish. Four RRBS brain libraries generated 98 million sequenced reads and had higher frequencies of multiple mapping than equivalent human RRBS libraries. The zebrafish methylome indicates that global DNA methylation is higher in the zebrafish genome than in the equivalent human methylome. This observation was confirmed by RRBS of zebrafish liver. High-coverage CpG dinucleotides are enriched in CpG island shores more than in the CpG island core. We found that 45% of the mapped CpGs reside in gene bodies, and 7% in gene promoters. This analysis provides a roadmap for generating reproducible base-pair level methylomes for zebrafish using RRBS, and our results provide the first evidence that RRBS is a suitable technique for global methylation analysis in zebrafish. PMID:23975027
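
    The in-silico reduced representation simulation described above can be approximated with a short sketch. The abstract does not name the enzyme or size window, so the MspI site (C^CGG, the enzyme conventionally used for RRBS) and the 40-220 bp selection range below are assumptions for illustration.

```python
import re

def mspi_fragments(seq):
    """In-silico MspI digest: cut at every C^CGG site (offset 1).

    A zero-width lookahead is used so adjacent sites are all found.
    """
    cuts = [m.start() + 1 for m in re.finditer("(?=CCGG)", seq)]
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

def reduced_representation(seq, lo=40, hi=220):
    """Size-select fragments as in an RRBS library prep and report the
    fraction of the sequence's CpGs captured by the selected fragments."""
    selected = [f for f in mspi_fragments(seq) if lo <= len(f) <= hi]
    cpg_total = seq.count("CG")
    cpg_captured = sum(f.count("CG") for f in selected)
    return selected, cpg_captured / max(cpg_total, 1)

# Toy "genome": two MspI sites flanking a CpG-containing middle region.
seq = "TTTT" + "CCGG" + "A" * 50 + "CG" + "A" * 50 + "CCGG" + "TTTT"
selected, cpg_fraction = reduced_representation(seq)
```

Running this over an actual genome sequence would reproduce the kind of fragment-length and CpG-enrichment distributions the authors simulate.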

  9. Three-Dimensional Display Of Document Set

    DOEpatents

    Lantrip, David B.; Pennock, Kelly A.; Pottier, Marc C.; Schur, Anne; Thomas, James J.; Wise, James A.

    2003-06-24

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  10. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA

    2006-09-26

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  11. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA

    2001-10-02

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  12. Three-dimensional display of document set

    DOEpatents

    Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA; York, Jeremy [Bothell, WA

    2009-06-30

    A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.

  13. Perspectives on ... Multiculturalism and Library Exhibits: Sites of Contested Representation

    ERIC Educational Resources Information Center

    Reece, Gwendolyn J.

    2005-01-01

    This article analyzes a multicultural library exhibit presenting the Palestinian/Israeli conflict as a site of contested representation. Qualitative methodology is used to interrogate the exhibit and its audience reception. Drawing on insights from critical pedagogy, implications for libraries arising from this case study are given and suggestions…

  14. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA

    2012-05-15

    A real-time gamma-ray signature/source identification method and system that uses principal components analysis (PCA) to transform and substantially reduce one or more comprehensive spectral libraries of nuclear material types and configurations into concise representative signatures, indexing each predetermined spectrum in principal component (PC) space. An unknown gamma-ray signature may then be compared against these representative signatures to find a match, or at least to characterize the unknown signature among all entries in the library, with a single regression or simple projection into the PC space. This substantially reduces processing time and computing resources and enables real-time characterization and/or identification.
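
    A minimal sketch of the general idea (not the patented system): compress a spectral library with PCA and identify an unknown spectrum by a single projection and nearest-signature lookup. The library data below are synthetic, and the channel count and number of retained components are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical library: 50 reference gamma-ray spectra, 128 channels each.
library = rng.poisson(lam=50, size=(50, 128)).astype(float)

# Build the PC space from the library (a minimal PCA via SVD).
mean = library.mean(axis=0)
centered = library - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10                      # keep the first k principal components
basis = Vt[:k]              # (k, 128) projection basis

# Each library spectrum is indexed by its concise k-dimensional signature.
signatures = centered @ basis.T          # (50, k)

def identify(unknown):
    """Project an unknown spectrum into PC space and return the index of
    the nearest library signature: one projection, no full spectrum-by-
    spectrum regression against the whole library."""
    z = (unknown - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(signatures - z, axis=1)))

# A noisy copy of library entry 7 should still match entry 7.
noisy = library[7] + rng.normal(0.0, 1.0, size=128)
```

The compression is the point: matching happens in the 10-dimensional signature space rather than across full 128-channel spectra, which is what makes the real-time claim plausible.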

  15. Phased genotyping-by-sequencing enhances analysis of genetic diversity and reveals divergent copy number variants in maize

    USDA-ARS's Scientific Manuscript database

    High-throughput sequencing of reduced representation genomic libraries has ushered in an era of genotyping-by-sequencing (GBS), where genome-wide genotype data can be obtained for nearly any species. However, there remains a need for imputation-free GBS methods for genotyping large samples taken fr...

  16. Comparison of large-insert, small-insert and pyrosequencing libraries for metagenomic analysis.

    PubMed

    Danhorn, Thomas; Young, Curtis R; DeLong, Edward F

    2012-11-01

    The development of DNA sequencing methods for characterizing microbial communities has evolved rapidly over the past decades. To evaluate more traditional, as well as newer methodologies for DNA library preparation and sequencing, we compared fosmid, short-insert shotgun and 454 pyrosequencing libraries prepared from the same metagenomic DNA samples. GC content was elevated in all fosmid libraries, compared with shotgun and 454 libraries. Taxonomic composition of the different libraries suggested that this was caused by a relative underrepresentation of dominant taxonomic groups with low GC content, notably Prochlorales and the SAR11 cluster, in fosmid libraries. While these abundant taxa had a large impact on library representation, we also observed a positive correlation between taxon GC content and fosmid library representation in other low-GC taxa, suggesting a general trend. Analysis of gene category representation in different libraries indicated that the functional composition of a library was largely a reflection of its taxonomic composition, and no additional systematic biases against particular functional categories were detected at the level of sequencing depth in our samples. Another important but less predictable factor influencing the apparent taxonomic and functional library composition was the read length afforded by the different sequencing technologies. Our comparisons and analyses provide a detailed perspective on the influence of library type on the recovery of microbial taxa in metagenomic libraries and underscore the different uses and utilities of more traditional, as well as contemporary 'next-generation' DNA library construction and sequencing technologies for exploring the genomics of the natural microbial world.
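
    The GC-content/representation trend the authors report can be checked with elementary tools. The sketch below computes GC content and a Pearson correlation over illustrative (made-up) per-taxon data; the numbers are not from the study.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-taxon data: GC content vs relative representation
# in a fosmid library (low-GC taxa underrepresented, as in the study).
taxon_gc = [0.31, 0.34, 0.45, 0.52, 0.60]
fosmid_rep = [0.20, 0.55, 0.95, 1.10, 1.40]
r = pearson(taxon_gc, fosmid_rep)
```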

  17. High-Throughput SNP Discovery through Deep Resequencing of a Reduced Representation Library to Anchor and Orient Scaffolds in the Soybean Whole Genome Sequence

    USDA-ARS's Scientific Manuscript database

    The soybean Consensus Map 4.0 facilitated the anchoring of 95.6% of the soybean whole genome sequence developed by the Joint Genome Institute, Department of Energy but only properly oriented 66% of the sequence scaffolds. To find additional single nucleotide polymorphism (SNP) markers for additiona...

  18. libdrdc: software standards library

    NASA Astrophysics Data System (ADS)

    Erickson, David; Peng, Tie

    2008-04-01

    This paper presents the libdrdc software standards library, including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable, C-function-wrapped C++/Object-Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and to port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and the Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.

  19. Probe-Directed Degradation (PDD) for Flexible Removal of Unwanted cDNA Sequences from RNA-Seq Libraries.

    PubMed

    Archer, Stuart K; Shirokikh, Nikolay E; Preiss, Thomas

    2015-04-01

    Most applications for RNA-seq require the depletion of abundant transcripts to gain greater coverage of the underlying transcriptome. The sequences to be targeted for depletion depend on application and species and in many cases may not be supported by commercial depletion kits. This unit describes a method for generating RNA-seq libraries that incorporates probe-directed degradation (PDD), which can deplete any unwanted sequence set, with the low-bias split-adapter method of library generation (although many other library generation methods are in principle compatible). The overall strategy is suitable for applications requiring customized sequence depletion or where faithful representation of fragment ends and lack of sequence bias is paramount. We provide guidelines to rapidly design specific probes against the target sequence, and a detailed protocol for library generation using the split-adapter method including several strategies for streamlining the technique and reducing adapter dimer content. Copyright © 2015 John Wiley & Sons, Inc.

  20. Israeli Special Libraries

    ERIC Educational Resources Information Center

    Foster, Barbara

    1974-01-01

    Israel is sprinkled with a noteworthy representation of special libraries which run the gamut from modest kibbutz efforts to highly technical scientific and humanities libraries. A few examples are discussed here. (Author/CH)

  1. Phage display peptide libraries: deviations from randomness and correctives

    PubMed Central

    Ryvkin, Arie; Ashkenazy, Haim; Weiss-Ottolenghi, Yael; Piller, Chen; Pupko, Tal; Gershoni, Jonathan M

    2018-01-01

    Peptide-expressing phage display libraries are widely used for the interrogation of antibodies. Affinity-selected peptides are then analyzed to discover epitope mimetics, or are subjected to computational algorithms for epitope prediction. A critical assumption for these applications is the random representation of amino acids in the initial naïve peptide library. In a previous study, we implemented next generation sequencing to evaluate a naïve library and discovered severe deviations from randomness in UAG codon over-representation as well as in high G phosphoramidite abundance causing amino acid distribution biases. In this study, we demonstrate that the UAG over-representation can be attributed to the burden imposed on the phage upon the assembly of the recombinant Protein 8 subunits. This was corrected by constructing the libraries using supE44-containing bacteria, which suppress the UAG-driven abortive termination. We also demonstrate that the overabundance of G stems from variation in synthesis efficiency and can be corrected using compensating oligonucleotide mixtures calibrated by mass spectrometry. Construction of libraries implementing these correctives results in markedly improved libraries that display a random distribution of amino acids, thus ensuring that enriched peptides obtained in biopanning represent a genuine selection event, a fundamental assumption for phage display applications. PMID:29420788
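
    Why UAG specifically? A toy computation (not the authors' pipeline) makes it clear: enumerating the 32 codons of the NNK degeneracy scheme commonly used in such libraries shows that TAG (UAG at the RNA level) is the only stop codon NNK can produce, so any stop-codon effect in the naïve library concentrates on UAG. The use of NNK here is a standard-practice assumption, not stated in the abstract.

```python
from itertools import product

# Standard genetic code in TCAG ordering ('*' marks stop codons).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

# NNK degeneracy: N = any base, K = G or T (DNA sense strand).
nnk_codons = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
stops = [c for c in nnk_codons if CODON_TABLE[c] == "*"]
# Only TAG survives the K constraint (TAA and TGA end in A), so TAG/UAG
# is the single stop codon an NNK library can encode.
```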

  2. Design of a Digital Library for Human Movement.

    ERIC Educational Resources Information Center

    Ben-Arie, Jezekiel; Pandit, Purvin; Rajaram, ShyamSundar

    This paper focuses on a central aspect of the design of a planned digital library for human movement: the representation and recognition of human activity from video data. The method of representation is important since it has a major impact on the design of all the other building blocks of the system such as the user…

  3. The Legacy of the Baroque in Virtual Representations of Library Space

    ERIC Educational Resources Information Center

    Garrett, Jeffrey

    2004-01-01

    Library home pages and digital library sites have many properties and purposes in common with the Baroque wall-system libraries of seventeenth- and eighteenth-century Europe. Like their Baroque antecedents, contemporary library Web sites exploit the moment of entrance and the experience of the threshold to create and sustain the illusion of a…

  4. Characteristics of knowledge content in a curated online evidence library.

    PubMed

    Varada, Sowmya; Lacson, Ronilda; Raja, Ali S; Ip, Ivan K; Schneider, Louise; Osterbur, David; Bain, Paul; Vetrano, Nicole; Cellini, Jacqueline; Mita, Carol; Coletti, Margaret; Whelan, Julia; Khorasani, Ramin

    2018-05-01

    To describe types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess underlying knowledge representation. The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded based on a standard methodology and compared using chi-square test. Strength of evidence ranged from grade 1 (systematic review) through grade 5 (recommendations based on expert opinion). Finally, variations in the underlying representation of these recommendations were identified. The library contains 546 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5 vs 83% (526/636) from professional society practice guidelines and local best practice guidelines that cited grade 5 studies (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were topic areas supported by the highest quality of evidence. Three main variations in underlying representations of recommendations were "single-decision," "branching," and "score-based." Most recommendations were grade 5, largely because studies to test and validate many recommendations were absent. Recommendation types vary in amount and complexity and, accordingly, the structure and syntax of statements they generate. However, they can be represented in single-decision, branching, and score-based representations. In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.

  5. Using geographic information systems to identify prospective marketing areas for a special library.

    PubMed

    McConnaughy, Rozalynd P; Wilson, Steven P

    2006-05-04

    The Center for Disability Resources (CDR) Library is the largest collection of its kind in the Southeastern United States, consisting of over 5,200 books, videos/DVDs, brochures, and audiotapes covering a variety of disability-related topics, from autism to transition resources. The purpose of the library is to support the information needs of families, faculty, students, staff, and other professionals in South Carolina working with individuals with disabilities. The CDR Library is funded on a yearly basis; therefore, maintaining high usage is crucial. A variety of promotional efforts have been used to attract new patrons to the library. Anyone in South Carolina can check out materials from the library, and most of the patrons use the library remotely by requesting materials, which are then mailed to them. The goal of this project was to identify areas of low geographic usage as a means of identifying locations for future library marketing efforts. Nearly four years' worth of library statistics were compiled in a spreadsheet that provided information per county on the number of checkouts, the number of renewals, and the population. Five maps were created using ArcView GIS software to create visual representations of patron checkout and renewal behavior per county. Out of the 46 counties in South Carolina, eight counties never checked out materials from the library. As expected, urban areas and counties near the library's physical location have high usage totals. The visual representation of the data made identification of low-usage regions easier than using a stand-alone database with no visual-spatial component. The low-usage counties will be the focus of future Center for Disability Resources Library marketing efforts. Because the visual-spatial representations created with Geographic Information Systems communicate information more efficiently than stand-alone database records can, librarians may benefit from the software's use as a supplemental tool for tracking library usage and planning promotional efforts.

  6. What Is Library "Use"? Facets of Concept and a Typology of Its Application in the Literature of Library and Information Science

    ERIC Educational Resources Information Center

    Fleming-May, Rachel A.

    2011-01-01

    The "use" of library resources and services is frequently presented in library and information science (LIS) literature as a primitive concept: an idea that need not be defined when it is being measured as an operational variable in empirical research. This project considered representations of library "use" through the…

  7. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multi-temperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries, and between deterministic and Monte Carlo codes, straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). An expanded suite of tests was then used for additional verification and, finally, the libraries were validated using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  8. [Physiology in the mirror of systematic catalogue of Russian Academy of Sciences Library].

    PubMed

    Orlov, I V; Lazurkina, V B

    2011-07-01

    The representation of general human and animal physiology publications in the systematic catalogue of the Library of the Russian Academy of Sciences is considered. The organization of the catalogue as applied to the problems of physiology, built on the basis of the library-bibliographic classification used in Russian universal scientific libraries, is described. The card files of the systematic catalogue of the Library contain about 8 million cards. Topics that reflect the problems of general physiology comprise 39 headings. For the full range of sciences, including physiology, tables of general types of divisions were developed; they are marked by indexes using lower-case letters of the Russian alphabet, and these indexes are further subdivided using decimal symbols. The indexes are attached directly to the field-of-knowledge index. With the current relatively easy availability of network resources, the value and relevance of any catalogue are reduced; however, this applies far more to journal articles than to reference books, proceedings of various conferences, bibliographies, personalia, and especially the monographs contained in the systematic catalogue. The card systematic catalogue of the Library remains an important source of information on general physiology, as well as on its principal narrower sections.

  9. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators of existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aimed at developing, for image and sound documents, an experimental counterpart to this library's digitized text reading workstation.

  10. LBMD : a layer-based mesh data structure tailored for generic API infrastructures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeida, Mohamed S.; Knupp, Patrick Michael

    2010-11-01

    A new mesh data structure is introduced for the purpose of mesh processing in Application Programming Interface (API) infrastructures. This data structure utilizes a reduced mesh representation to increase its ability to handle significantly larger meshes compared to a full mesh representation. In spite of the reduced representation, each mesh entity (vertex, edge, face, and region) is represented using a unique handle, with no extra storage cost, which is a crucial requirement in most API libraries. The concept of mesh layers makes the data structure more flexible for mesh generation and mesh modification operations. This flexibility can have a favorable impact in solver-based queries of finite volume and multigrid methods. The capabilities of LBMD make it even more attractive for parallel implementations using the Message Passing Interface (MPI) or Graphics Processing Units (GPUs). The data structure is associated with a new classification method to relate mesh entities to their corresponding geometrical entities. The classification technique stores the related information at the node level without introducing any ambiguities. Several examples are presented to illustrate the strength of this new data structure.

  11. Authorship in "College & Research Libraries" Revisited: Gender, Institutional Affiliation, Collaboration.

    ERIC Educational Resources Information Center

    Terry, James L.

    1996-01-01

    Updates earlier studies on the characteristics of authorship of articles published in "College & Research Libraries", focusing on gender, institutional affiliation, and extent of collaboration. Results show representation by academic librarians and authors affiliated with library schools increased, collaboration predominated, and…

  12. Virtual Ligand Screening Using PL-PatchSurfer2, a Molecular Surface-Based Protein-Ligand Docking Method.

    PubMed

    Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Virtual screening is a computational technique for predicting a potent binding compound for a receptor protein from a ligand library. It has been widely used in the drug discovery field to reduce the effort required of medicinal chemists to find hit compounds by experiment. Here, we introduce our novel structure-based virtual screening program, PL-PatchSurfer, which uses a molecular surface representation with three-dimensional Zernike descriptors, an effective mathematical representation for identifying physicochemical complementarities between local surfaces of a target protein and a ligand. The advantage of the surface-patch description is its tolerance of receptor and compound structure variation. PL-PatchSurfer2 achieves higher accuracy on apo-form and computationally modeled receptor structures than conventional structure-based virtual screening programs. Thus, PL-PatchSurfer2 opens up an opportunity for targets that do not have crystal structures. The program is provided as a stand-alone program at http://kiharalab.org/plps2 . We also provide files for two ligand libraries, ChEMBL and ZINC Drug-like.

  13. A Use of Space: The Unintended Messages of Academic Library Web Sites

    ERIC Educational Resources Information Center

    Kasperek, Sheila; Dorney, Erin; Williams, Beth; O'Brien, Michael

    2011-01-01

    Academic library home pages are not only access points to the resources and services of a library, they are virtual representations of the library itself. The content placed on the page, where it is placed, and the amount of space allotted are all choices that send a message about the character of the library, the resources a user should start…

  14. Picture This... Developing Standards for Electronic Images at the National Library of Medicine

    PubMed Central

    Masys, Daniel R.

    1990-01-01

    New computer technologies have made it feasible to represent, store, and communicate high resolution biomedical images via electronic means. Traditional two dimensional medical images such as those on printed pages have been supplemented by three dimensional images which can be rendered, rotated, and “dissected” from any point of view. The library of the future will provide electronic access not only to words and numbers, but to pictures, sounds, and other nontextual information. There currently exist few widely-accepted standards for the representation and communication of complex images, yet such standards will be critical to the feasibility and usefulness of digital image collections in the life sciences. The National Library of Medicine is embarked on a project to develop a complete digital volumetric representation of an adult human male and female. This “Visible Human Project” will address the issue of standards for computer representation of biological structure.

  15. Using graph approach for managing connectivity in integrative landscape modelling

    NASA Astrophysics Data System (ADS)

    Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger

    2013-04-01

    In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variabilities and connectivities of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modifications of connections or elements). This graph approach gives greater genericity to the landscape representation, allows the management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. The graph representation can be managed i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format or ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (the ogr vector, geos topological vector and gdal raster libraries). The OpenFLUID-landr library has been developed in order i) to be usable without GIS expert skills (common GIS formats can be read and simplified spatial management is provided), ii) to make it easy to develop adapted rules of landscape discretization and graph creation that follow spatialized model requirements, and iii) to allow model developers to manage dynamic and complex spatial topology. Graph management in OpenFLUID is demonstrated with i) examples of hydrological modelling on complex farmed landscapes and ii) the new implementation of the Geo-MHYDAS tool, based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.
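The node/connection scheme described above can be sketched with a toy graph type holding the two connection kinds the abstract names: parent-child containment and up/downstream flux links. This is a minimal illustration, not the OpenFLUID API; class and unit names are invented.

```python
# Toy landscape graph: spatial units are nodes; edges carry either
# containment (parent/child) or flux direction (up/downstream).

from collections import defaultdict

class LandscapeGraph:
    def __init__(self):
        self.downstream = defaultdict(list)  # unit -> units receiving its flux
        self.children = defaultdict(list)    # unit -> contained sub-units

    def connect_flow(self, up: str, down: str) -> None:
        self.downstream[up].append(down)

    def add_child(self, parent: str, child: str) -> None:
        self.children[parent].append(child)

    def traverse_downstream(self, start: str) -> list:
        """Depth-first traversal along flux connections from a starting unit."""
        seen, stack, order = set(), [start], []
        while stack:
            unit = stack.pop()
            if unit in seen:
                continue
            seen.add(unit)
            order.append(unit)
            stack.extend(self.downstream[unit])
        return order

g = LandscapeGraph()
g.connect_flow("field_A", "ditch_1")   # field drains into a ditch
g.connect_flow("ditch_1", "reach_1")   # ditch drains into a stream reach
g.add_child("catchment", "field_A")    # hierarchical containment
assert g.traverse_downstream("field_A") == ["field_A", "ditch_1", "reach_1"]
```

Keeping the two edge types in separate adjacency maps is one simple way to let a solver follow flux paths while a pre-processor walks the containment hierarchy independently.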

  16. Library Services in Institutions for Mentally and Developmentally Disabled Adults.

    ERIC Educational Resources Information Center

    Ensor, Pat

    To improve the quality of life of institutionalized individuals, libraries can serve as a constructive escape mechanism for dealing with stress, a representation of external reality, and a therapeutic agent, in addition to offering bibliotherapy. Ideally, the library should be an integral part of the institution and provide a user-appropriate…

  17. ToxCast Chemical Landscape: Paving the Road to 21st Century Toxicology.

    PubMed

    Richard, Ann M; Judson, Richard S; Houck, Keith A; Grulke, Christopher M; Volarath, Patra; Thillainadarajah, Inthirany; Yang, Chihae; Rathman, James; Martin, Matthew T; Wambaugh, John F; Knudsen, Thomas B; Kancherla, Jayaram; Mansouri, Kamel; Patlewicz, Grace; Williams, Antony J; Little, Stephen B; Crofton, Kevin M; Thomas, Russell S

    2016-08-15

    The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as that used in EPA's Endocrine Disruptor Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA. Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.

  18. Bacterial Artificial Chromosome Libraries for Mouse Sequencing and Functional Analysis

    PubMed Central

    Osoegawa, Kazutoyo; Tateno, Minako; Woon, Peng Yeong; Frengen, Eirik; Mammoser, Aaron G.; Catanese, Joseph J.; Hayashizaki, Yoshihide; de Jong, Pieter J.

    2000-01-01

    Bacterial artificial chromosome (BAC) and P1-derived artificial chromosome (PAC) libraries providing a combined 33-fold representation of the murine genome have been constructed using two different restriction enzymes for genomic digestion. A large-insert PAC library was prepared from the 129S6/SvEvTac strain in a bacterial/mammalian shuttle vector to facilitate functional gene studies. For genome mapping and sequencing, we prepared BAC libraries from the 129S6/SvEvTac and the C57BL/6J strains. The average insert sizes for the three libraries range between 130 kb and 200 kb. Based on the numbers of clones and the observed average insert sizes, we estimate each library to have slightly in excess of 10-fold genome representation. The average number of clones found after hybridization screening with 28 probes was in the range of 9–14 clones per marker. To explore the fidelity of the genomic representation in the three libraries, we analyzed three contigs, each established after screening with a single unique marker. New markers were established from the end sequences and screened against all the contig members to determine if any of the BACs and PACs are chimeric or rearranged. Only one chimeric clone and six potential deletions have been observed after extensive analysis of 113 PAC and BAC clones. Seventy-one of the 113 clones were conclusively nonchimeric because both end markers or sequences were mapped to the other confirmed contig members. We could not exclude chimerism for the remaining 41 clones because one or both of the insert termini did not contain unique sequence to design markers. The low rate of chimerism, ∼1%, and the low level of detected rearrangements support the anticipated usefulness of the BAC libraries for genome research. [The sequence data described in this paper have been submitted to the GenBank data library under accession numbers AQ797173–AQ797398.] PMID:10645956
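The "10-fold genome representation" estimates above follow from simple coverage arithmetic: fold coverage = number of clones × mean insert size / genome size. A sketch, using a hypothetical clone count and an assumed mouse genome size of ~2.7 Gb (the abstract does not give either number):

```python
# Coverage arithmetic for a clone library: how many genome-equivalents
# the combined inserts represent.

def fold_coverage(n_clones: int, insert_bp: float, genome_bp: float = 2.7e9) -> float:
    """Fold genome representation of a library of n_clones average-size inserts."""
    return n_clones * insert_bp / genome_bp

# e.g. a hypothetical 200,000-clone library with ~150 kb inserts:
assert round(fold_coverage(200_000, 150_000), 1) == 11.1
```

The same arithmetic also predicts the expected hits per hybridization probe (roughly equal to the fold coverage), consistent with the 9-14 clones per marker reported above for ~10-fold libraries.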

  19. The Victor C++ library for protein representation and advanced manipulation.

    PubMed

    Hirsh, Layla; Piovesan, Damiano; Giollo, Manuel; Ferrari, Carlo; Tosatto, Silvio C E

    2015-04-01

    Protein sequence and structure representation and manipulation require dedicated software libraries to support methods of increasing complexity. Here, we describe the VIrtual Construction TOol for pRoteins (Victor) C++ library, an open-source platform dedicated to enabling inexperienced users to develop advanced tools and to gathering contributions from the community. The provided application examples cover statistical energy potentials, profile-profile sequence alignments and ab initio loop modeling. Victor was used over the last 15 years in several publications and optimized for efficiency. It is provided as a GitHub repository with source files and unit tests, plus extensive online documentation, including a Wiki with help files and tutorials, examples and Doxygen documentation. The C++ library and online documentation, distributed under a GPL license, are available from URL: http://protein.bio.unipd.it/victor/.

  20. Subject and Citation Indexing. Part I: The Clustering Structure of Composite Representations in the Cystic Fibrosis Document Collection. Part II: The Optimal, Cluster-Based Retrieval Performance of Composite Representations.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.

    1991-01-01

    Two articles discuss the clustering of composite representations in the Cystic Fibrosis Document Collection from the National Library of Medicine's MEDLINE file. Clustering is evaluated as a function of the exhaustivity of composite representations based on Medical Subject Headings (MeSH) and citation indexes, and evaluation of retrieval…

  1. What Makes the Digital "Special"? The Research Program in Digital Collections at the National Library of Wales

    ERIC Educational Resources Information Center

    Cusworth, Andrew; Hughes, Lorna M.; James, Rhian; Roberts, Owain; Roderick, Gareth Lloyd

    2015-01-01

    This article introduces some of the digital projects currently in development at the National Library of Wales as part of its Research Program in Digital Collections. These projects include the digital representation of the Library's Kyffin Willams art collection, musical collections, and probate collection, and of materials collected by the…

  2. Development of US EPA's Ecological Production Function Library

    EPA Science Inventory

    US EPA is developing a library of ecological production functions (EPFs) to help communities plan for sustainable access to ecosystem goods and services (EGS). Several databases already compile information about the value of EGS. However, they focus on static representations of...

  3. Plans and progress for building a Great Lakes fauna DNA barcode reference library

    EPA Science Inventory

    DNA reference libraries provide researchers with an important tool for assessing regional biodiversity by allowing unknown genetic sequences to be assigned identities, while also providing a means for taxonomists to validate identifications. Expanding the representation of Great...

  4. The Digital electronic Guideline Library (DeGeL): a hybrid framework for representation and use of clinical guidelines.

    PubMed

    Shahar, Yuval; Young, Ohad; Shalom, Erez; Mayaffit, Alon; Moskovitch, Robert; Hessing, Alon; Galperin, Maya

    2004-01-01

    We propose to present a poster (and potentially also a demonstration of the implemented system) summarizing the current state of our work on a hybrid, multiple-format representation of clinical guidelines that facilitates conversion of guidelines from free text to a formal representation. We describe a distributed Web-based architecture (DeGeL) and a set of tools using the hybrid representation. The tools enable performing tasks such as guideline specification, semantic markup, search, retrieval, visualization, eligibility determination, runtime application and retrospective quality assessment. The representation includes four parallel formats: Free text (one or more original sources); semistructured text (labeled by the target guideline-ontology semantic labels); semiformal text (which includes some control specification); and a formal, machine-executable representation. The specification, indexing, search, retrieval, and browsing tools are essentially independent of the ontology chosen for guideline representation, but editing the semi-formal and formal formats requires ontology-specific tools, which we have developed in the case of the Asbru guideline-specification language. The four formats support increasingly sophisticated computational tasks. The hybrid guidelines are stored in a Web-based library. All tools, such as for runtime guideline application or retrospective quality assessment, are designed to operate on all representations. We demonstrate the hybrid framework by providing examples from the semantic markup and search tools.

  5. Performances of Different Fragment Sizes for Reduced Representation Bisulfite Sequencing in Pigs.

    PubMed

    Yuan, Xiao-Long; Zhang, Zhe; Pan, Rong-Yang; Gao, Ning; Deng, Xi; Li, Bin; Zhang, Hao; Sangild, Per Torp; Li, Jia-Qi

    2017-01-01

    Reduced representation bisulfite sequencing (RRBS) has been widely used to profile genome-scale DNA methylation in mammalian genomes. However, the applications and technical performances of RRBS with different fragment sizes have not been systematically reported in pigs, which serve as one of the important biomedical models for humans. The aim of this study was to evaluate the capacities of RRBS libraries with different fragment sizes to characterize the porcine genome. We found that the Msp I-digested segments between 40 and 220 bp had a high distribution peak at 74 bp; these segments overlapped substantially with repetitive elements, which might reduce unique mapping alignment. The RRBS library of 110-220 bp fragment size had the highest unique mapping alignment and the lowest multiple alignment. The cost-effectiveness of the 40-110 bp, 110-220 bp and 40-220 bp fragment sizes might decrease when the dataset size exceeded 70, 50 and 110 million reads, respectively. Given a 50-million-read dataset, the average sequencing depth of the detected CpG sites in the 110-220 bp fragment size was deeper than in the 40-110 bp and 40-220 bp fragment sizes, and the detected CpG sites were differently located in gene- and CpG island-related regions. Our results demonstrate that the selection of fragment size can affect the number and sequencing depth of detected CpG sites as well as the cost-efficiency. No single RRBS design is optimal in all circumstances for investigating genome-scale DNA methylation. This work provides useful knowledge for designing and executing RRBS studies of genome-wide DNA methylation in pig tissues.
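The fragment-size selection discussed above can be mimicked in silico: Msp I cuts at C^CGG, and a size window is then applied to the resulting fragments. A toy sketch (the sequence and window are illustrative only, not the study's pipeline):

```python
# In silico Msp I digestion and size selection. Msp I recognizes CCGG and
# cuts between the first C and CGG, so each cut point is one base into
# every CCGG occurrence.

import re

def mspi_digest(seq: str) -> list:
    """Split a sequence at every Msp I cut site (C^CGG)."""
    cuts = [m.start() + 1 for m in re.finditer("CCGG", seq)]
    frags, prev = [], 0
    for c in cuts:
        frags.append(seq[prev:c])
        prev = c
    frags.append(seq[prev:])
    return frags

def size_select(frags: list, lo: int, hi: int) -> list:
    """Keep only fragments whose length falls inside [lo, hi]."""
    return [f for f in frags if lo <= len(f) <= hi]

frags = mspi_digest("ATCCGGTTCCGGAA")
assert frags == ["ATC", "CGGTTC", "CGGAA"]
assert size_select(frags, 4, 6) == ["CGGTTC", "CGGAA"]
```

In a real RRBS design the window (e.g. 40-220 bp) trades off CpG coverage against the repetitive-element overlap and mapping behaviour described above.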

  6. Digital Libraries: The Next Generation in File System Technology.

    ERIC Educational Resources Information Center

    Bowman, Mic; Camargo, Bill

    1998-01-01

    Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…

  7. On Being a Client: What Every Library Director Should Know about Lawyers.

    ERIC Educational Resources Information Center

    Peat, W. Leslie

    1981-01-01

    Argues that the establishment of a solid working relationship with a competent lawyer is a regular part of the business of running a library, and provides practical advice on lawyer selection, fee arrangements, and the ground rules of legal representation. (RAA)

  8. Nuclear Data Online Services at Peking University

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, T.S.; Guo, Z.Y.; Ye, W.G.

    2005-05-24

    The Institute of Heavy Ion Physics at Peking University has developed a new nuclear data online services software package. Through the web site (http://ndos.nst.pku.edu.cn), it offers online access to main relational nuclear databases: five evaluated neutron libraries (BROND, CENDL, ENDF, JEF, JENDL), the ENSDF library, the EXFOR library, the IAEA photonuclear library and the charged particle data of the FENDL library. This software allows the comparison and graphic representations of the different data sets. The computer programs of this package are based on the Linux implementation of PHP and the MySQL software.

  9. Nuclear Data Online Services at Peking University

    NASA Astrophysics Data System (ADS)

    Fan, T. S.; Guo, Z. Y.; Ye, W. G.; Liu, W. L.; Liu, T. J.; Liu, C. X.; Chen, J. X.; Tang, G. Y.; Shi, Z. M.; Huang, X. L.; Chen, J. E.

    2005-05-01

    The Institute of Heavy Ion Physics at Peking University has developed a new nuclear data online services software package. Through the web site (http://ndos.nst.pku.edu.cn), it offers online access to main relational nuclear databases: five evaluated neutron libraries (BROND, CENDL, ENDF, JEF, JENDL), the ENSDF library, the EXFOR library, the IAEA photonuclear library and the charged particle data of the FENDL library. This software allows the comparison and graphic representations of the different data sets. The computer programs of this package are based on the Linux implementation of PHP and the MySQL software.

  10. Universal Design for Learning and School Libraries: A Logical Partnership

    ERIC Educational Resources Information Center

    Robinson, David E.

    2017-01-01

    This article will explore the basic tenets of Universal Design for Learning (UDL) in relation to collaborative curriculum development and implementation; provide a case study examination of UDL principles in action; and suggest school library curricular activities that provide opportunities for multiple means of representation, action, and…

  11. Icons as Visual Forum of Knowledge Representation on the World Wide Web: A Semiotic Analysis.

    ERIC Educational Resources Information Center

    Ma, Yan; Diodato, Virgil

    1999-01-01

    Compares the indexing structure of icons with principles used for traditional indexing. A sample of 15 library homepages was drawn from the total population of the United States library homepages. Semiotics theory was used to study the icons. Analysis and results are outlined. (AEF)

  12. DNA methylation assessment from human slow- and fast-twitch skeletal muscle fibers

    PubMed Central

    Begue, Gwénaëlle; Raue, Ulrika; Jemiolo, Bozena

    2017-01-01

    A new application of the reduced representation bisulfite sequencing method was developed using low-DNA input to investigate the epigenetic profile of human slow- and fast-twitch skeletal muscle fibers. Successful library construction was completed with as little as 15 ng of DNA, and high-quality sequencing data were obtained with 32 ng of DNA. Analysis identified 143,160 differentially methylated CpG sites across 14,046 genes. In both fiber types, selected genes predominantly expressed in slow or fast fibers were hypomethylated, which was supported by the RNA-sequencing analysis. These are the first fiber type-specific methylation data from human skeletal muscle and provide a unique platform for future research. NEW & NOTEWORTHY This study validates a low-DNA input reduced representation bisulfite sequencing method for human muscle biopsy samples to investigate the methylation patterns at a fiber type-specific level. These are the first fiber type-specific methylation data reported from human skeletal muscle and thus provide initial insight into basal state differences in myosin heavy chain I and IIa muscle fibers among young, healthy men. PMID:28057818

  13. Capturing the 'ome': the expanding molecular toolbox for RNA and DNA library construction.

    PubMed

    Boone, Morgane; De Koker, Andries; Callewaert, Nico

    2018-04-06

    All sequencing experiments and most functional genomics screens rely on the generation of libraries to comprehensively capture pools of targeted sequences. In the past decade especially, driven by the progress in the field of massively parallel sequencing, numerous studies have comprehensively assessed the impact of particular manipulations on library complexity and quality, and characterized the activities and specificities of several key enzymes used in library construction. Fortunately, careful protocol design and reagent choice can substantially mitigate many of these biases, and enable reliable representation of sequences in libraries. This review aims to guide the reader through the vast expanse of literature on the subject to promote informed library generation, independent of the application.

  14. Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification

    NASA Astrophysics Data System (ADS)

    Drees, L.; Roscher, R.

    2017-05-01

    This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the spectral library with respect to the mentioned criteria.
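The core model above, a pixel expressed as a non-negative weighted combination of library spectra whose normalized weights are read as class fractions, can be sketched as follows. The two-band library is a toy example; the paper's EnMAP spectra and archetype selection are not reproduced, and projected gradient descent stands in for whatever solver the authors used:

```python
# Toy spectral unmixing: solve min ||E w - pixel||^2 with w >= 0, then
# normalize w to obtain land cover fractions.

import numpy as np

def unmix(pixel: np.ndarray, library: np.ndarray, iters: int = 500) -> np.ndarray:
    """Non-negative least squares via projected gradient descent."""
    m = library.shape[1]
    w = np.full(m, 1.0 / m)
    step = 1.0 / np.linalg.norm(library.T @ library, 2)  # 1/Lipschitz constant
    for _ in range(iters):
        grad = library.T @ (library @ w - pixel)
        w = np.maximum(w - step * grad, 0.0)  # gradient step + projection
    return w / w.sum()  # fractions summing to one

# Two-band toy library: columns are "vegetation" and "soil" spectra.
E = np.array([[0.9, 0.2],
              [0.1, 0.8]])
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]  # a 70/30 mixed pixel
fractions = unmix(pixel, E)
assert np.allclose(fractions, [0.7, 0.3], atol=1e-2)
```

Sparsity in the paper's sense comes from the library itself: with a compact set of archetypes, few weights are active per pixel even without an explicit sparsity penalty.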

  15. A streamlined method for analysing genome-wide DNA methylation patterns from low amounts of FFPE DNA.

    PubMed

    Ludgate, Jackie L; Wright, James; Stockwell, Peter A; Morison, Ian M; Eccles, Michael R; Chatterjee, Aniruddha

    2017-08-31

    Formalin fixed paraffin embedded (FFPE) tumor samples are a major source of DNA from patients in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries relying on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis. Here we describe a complete and streamlined workflow for preparing next generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks and extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR) based test, preparing reduced representation bisulfite sequencing libraries and massively parallel sequencing. The main features and advantages of this protocol are: an optimized method for extracting good-quality DNA from FFPE tissues; an efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng of DNA from FFPE tissue; and incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing. We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome scale, and we believe this will facilitate clinical epigenetic research that involves the use of FFPE tissue.

  16. Journey Mapping the User Experience

    ERIC Educational Resources Information Center

    Samson, Sue; Granath, Kim; Alger, Adrienne

    2017-01-01

    This journey-mapping pilot study was designed to determine whether journey mapping is an effective method to enhance the student experience of using the library by assessing our services from their point of view. Journey mapping plots a process or service to produce a visual representation of a library transaction--from the point at which the…

  17. An Integrated System for Managing the Andalusian Parliament's Digital Library

    ERIC Educational Resources Information Center

    de Campos, Luis M.; Fernandez-Luna, Juan M.; Huete, Juan F.; Martin-Dancausa, Carlos J.; Tagua-Jimenez, Antonio; Tur-Vigil, Carmen

    2009-01-01

    Purpose: The purpose of this paper is to present an overview of the reorganisation of the Andalusian Parliament's digital library to improve the electronic representation and access of its official corpus by taking advantage of a document's internal organisation. Video recordings of the parliamentary sessions have also been integrated with their…

  18. "Bloodline Is All I Need": Defiant Indigeneity and Hawaiian Hip-Hop

    ERIC Educational Resources Information Center

    Teves, Stephanie Nohelani

    2011-01-01

    During the late twentieth century, Kanaka Maoli have struggled to push back against these representations, offering a rewriting of Hawaiian history, quite literally. Infused by Hawaiian nationalism and a growing library of works that investigate the naturalization of American colonialism in Hawai'i, innovative Kanaka Maoli representations in the…

  19. Increased Diversity of Libraries from Libraries: Chemoinformatic Analysis of Bis-Diazacyclic Libraries

    PubMed Central

    López-Vallejo, Fabian; Nefzi, Adel; Bender, Andreas; Owen, John R.; Nabney, Ian T.; Houghten, Richard A.; Medina-Franco, Jose L.

    2011-01-01

    Combinatorial libraries continue to play a key role in drug discovery. To increase structural diversity, several experimental methods have been developed. However, limited effort has so far been devoted to quantifying the diversity of the broadly used diversity-oriented synthesis (DOS) libraries. Herein we report a comprehensive characterization of 15 bis-diazacyclic combinatorial libraries obtained through libraries from libraries, which is a DOS approach. Using MACCS keys, radial and different pharmacophoric fingerprints, as well as six molecular properties, the increased structural and property diversity of the libraries from libraries over the individual libraries was demonstrated. Comparison of the libraries to existing drugs, the NCI Diversity set and the Molecular Libraries Small Molecule Repository revealed the structural uniqueness of the combinatorial libraries (mean similarity < 0.5 for any fingerprint representation). In particular, bis-cyclic thiourea libraries were the most structurally dissimilar to drugs while retaining drug-like character in property space. This study represents the first comprehensive quantification of the diversity of libraries from libraries, providing a solid quantitative approach to compare and contrast the diversity of DOS libraries with existing drugs or any other compound collection. PMID:21294850
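The "mean similarity < 0.5" figures above rest on pairwise fingerprint comparisons; the standard measure for binary fingerprints such as MACCS keys is the Tanimoto (Jaccard) coefficient. A minimal sketch on toy on-bit sets (not actual MACCS keys):

```python
# Tanimoto similarity of two binary fingerprints, represented here as the
# sets of their "on" bit positions.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Shared on-bits divided by total distinct on-bits."""
    if not fp_a and not fp_b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)

lib_compound = {1, 5, 9, 12, 20}   # toy library-member fingerprint
drug = {1, 5, 30, 41}              # toy drug fingerprint
sim = tanimoto(lib_compound, drug)
assert abs(sim - 2 / 7) < 1e-9     # 2 shared bits out of 7 in the union
```

Averaging this coefficient over all library-versus-reference pairs gives the kind of mean-similarity statistic the abstract reports.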

  20. A strategy for rapid production and screening of yeast artificial chromosome libraries.

    PubMed

    Strauss, W M; Jaenisch, E; Jaenisch, R

    1992-01-01

    We describe methods for rapid production and screening of yeast artificial chromosome (YAC) libraries. Utilizing complete restriction digests of mouse genomic DNA for ligations in agarose, a 32,000-clone library was produced and screened in seven weeks. Screening was accomplished by subdividing primary transformation plates into pools of approximately 100 clones, which were transferred into a master glycerol stock. These master stocks were used to inoculate liquid cultures to produce culture "pools," and ten pools of 100 clones were then combined to yield superpools of 1,000 clones. Both pool and superpool DNA were screened by polymerase chain reaction (PCR), and positive pools representing 100 clones were then plated on selective medium and screened by in situ hybridization. Screening by the two-tiered PCR assay and by in situ hybridization was completed in 4-5 days. Utilizing this methodology we have isolated a 150 kb clone spanning the alpha 1(I) collagen (Col1a1) gene as well as 40 kb clones from the Hox-2 locus. To characterize the representation of the YAC library, the size distribution of genomic Sal I fragments was compared to that of clones picked at random from the library. The results demonstrate significant biasing of the cloned fragment distribution, resulting in a loss of representation for larger fragments.
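
    The two-tiered pooling arithmetic described above can be sketched as follows; the assay counts are an illustrative worst case derived from the numbers in the abstract, not the authors' exact protocol.

```python
# Sketch of the two-tiered pooled PCR screening arithmetic: 32,000 clones,
# pools of ~100 clones, superpools of 1,000 clones (numbers from the text).
CLONES = 32_000
POOL_SIZE = 100
SUPERPOOL_SIZE = 1_000

n_pools = CLONES // POOL_SIZE                        # 320 pools of ~100 clones
pools_per_superpool = SUPERPOOL_SIZE // POOL_SIZE    # 10 pools per superpool
n_superpools = n_pools // pools_per_superpool        # 32 superpools

# Worst case to localize one positive clone: screen every superpool, then the
# pools inside the positive superpool, then plate the positive pool and
# resolve individual clones by in situ hybridization.
pcr_assays = n_superpools + pools_per_superpool      # 42 PCR assays total

print(n_pools, n_superpools, pcr_assays)
```

    Two tiers thus reduce the PCR workload from hundreds of pool assays to a few dozen, which is what makes a seven-week library-to-clone turnaround plausible.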

  1. The Development of Digital Resources by Library and Information Professionals and Historians: Two Case Studies from Northern Ireland

    ERIC Educational Resources Information Center

    White, Andy

    2005-01-01

    Purpose: This paper aims to use two case studies of digital archives designed by library and information professionals and historians to highlight the twin issues of academic authenticity and accuracy of digital representations. Design/methodology/approach: Using secondary literature, the author established a hypothesis about the way in which…

  2. An efficient and numerically stable procedure for generating sextic force fields in normal mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz

    In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low-order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3-mode-representation quartic force fields in curvilinear normal mode coordinates to 4-mode-representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm⁻¹ in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.

  3. Subject Indexing and Citation Indexing--Part I: Clustering Structure in the Cystic Fibrosis Document Collection [and] Part II: An Evaluation and Comparison.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.

    1990-01-01

    These two articles discuss clustering structure in the Cystic Fibrosis Document Collection, which is derived from the National Library of Medicine's MEDLINE file. The exhaustivity of four subject representations and two citation representations is examined, and descriptor-weight thresholds and similarity thresholds are used to compute…

  4. Representations of Technology in the "Technical Stories" for Children of Otto Witt, Early 20th Century Swedish Technology Educator

    ERIC Educational Resources Information Center

    Axell, Cecilia; Hallström, Jonas

    2013-01-01

    Children's fiction in school libraries has played and still plays a role in mediating representations of technology and attitudes towards technology to schoolchildren. In early 20th century Sweden, elementary education, including textbooks and literature that were used in teaching, accounted for the main mediation of technological knowledge to…

  5. On the suitability of different representations of solid catalysts for combinatorial library design by genetic algorithms.

    PubMed

    Gobin, Oliver C; Schüth, Ferdi

    2008-01-01

    Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied to library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms, with 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene was used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.

  6. Converting point-wise nuclear cross sections to pole representation using regularized vector fitting

    NASA Astrophysics Data System (ADS)

    Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord

    2018-03-01

    Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluation nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach however needs to specify ahead of time the number of poles. This article addresses this issue by adding a poles and residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
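
    As a schematic illustration of the pole representation itself, a cross section can be evaluated as a sum of residue-over-pole terms. The poles and residues below are made-up values, not nuclear data, and production pole representations are typically formulated in momentum (square-root-of-energy) space rather than directly in energy.

```python
# Schematic pole-representation evaluation: sigma(E) ~ sum_j Re(r_j / (E - p_j)).
# Poles and residues are illustrative complex-conjugate pairs, not ENDF data.

def sigma(E, poles, residues):
    """Evaluate the rational (pole) approximation at energy E."""
    return sum((r / (E - p)).real for p, r in zip(poles, residues))

# Two conjugate pole pairs produce two resonance-like peaks near E = 1 and E = 3.
poles = [1.0 + 0.05j, 1.0 - 0.05j, 3.0 + 0.1j, 3.0 - 0.1j]
residues = [-0.5j, 0.5j, -1.0j, 1.0j]

peak = sigma(1.0, poles, residues)    # on resonance: large
tail = sigma(10.0, poles, residues)   # far from both poles: near zero
```

    The fitting problem the article addresses is the inverse of this evaluation: given point-wise sigma(E) data, find a small set of poles and residues; the regularization step prunes the superfluous poles that plain vector fitting tends to introduce.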

  7. Optimizing ROOT’s Performance Using C++ Modules

    NASA Astrophysics Data System (ADS)

    Vassilev, Vassil

    2017-10-01

    ROOT comes with a C++-compliant interpreter, cling. Cling needs to understand the content of the libraries in order to interact with them. Exposing the full shared library descriptors to the interpreter at runtime translates into an increased memory footprint. ROOT’s exploratory programming concepts allow implicit and explicit runtime shared library loading, which requires the interpreter to load the library descriptor. Re-parsing of descriptors’ content has a noticeable effect on runtime performance. The present state-of-the-art lazy parsing technique brings runtime performance to reasonable levels but proves to be fragile and can introduce correctness issues. An elegant solution is to load information from the descriptor lazily and in a non-recursive way. The LLVM community advances its C++ Modules technology, providing an I/O-efficient, on-disk representation capable of reducing build times and peak memory usage. The feature is standardized as a C++ technical specification. C++ Modules are a flexible concept, which can be employed to match CMS and other experiments’ requirements for ROOT: to optimize both runtime memory usage and performance. Cling technically “inherits” the feature; however, tweaking it to ROOT scale and beyond is a complex endeavor. The paper discusses the status of C++ Modules in the context of ROOT, supported by a few preliminary performance results. It shows a step-by-step migration plan and describes potential challenges which could appear.

  8. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC), which reduces the impact of reading time and speeds up access to the data; the Image Processing Component (IPC), which breaks the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
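
    The Overview + Detail idea can be sketched in a few lines: a coarse, strided overview of the cube plus a full-resolution subcube of the region of interest. Plain nested lists stand in here for the memory-mapped file access AstroVis actually performs.

```python
# Toy 3-D cube where cube[z][y][x] = x + 10*y + 100*z, so values are traceable.
def make_cube(n):
    return [[[x + 10 * y + 100 * z for x in range(n)]
             for y in range(n)] for z in range(n)]

def overview(cube, stride):
    """Downsample by taking every `stride`-th voxel along each axis."""
    return [[[row[x] for x in range(0, len(row), stride)]
             for row in plane[::stride]]
            for plane in cube[::stride]]

def detail(cube, z0, y0, x0, size):
    """Extract a full-resolution subcube at the requested corner."""
    return [[row[x0:x0 + size] for row in plane[y0:y0 + size]]
            for plane in cube[z0:z0 + size]]

cube = make_cube(8)             # 8 x 8 x 8 = 512 voxels
ov = overview(cube, 4)          # coarse 2 x 2 x 2 view of the whole cube
dt = detail(cube, 2, 2, 2, 3)   # 3 x 3 x 3 region at full resolution
```

    The overview's memory footprint shrinks by the cube of the stride, while the detail view stays exact for the small region currently inspected.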

  9. Low abundance of microsatellite repeats in the genome of the Brown-headed Cowbird (Molothrus ater)

    USGS Publications Warehouse

    Longmire, Jonathan L.; Hahn, D.C.; Roach, J.L.

    1999-01-01

    A cosmid library made from brown-headed cowbird (Molothrus ater) DNA was examined for representation of 17 distinct microsatellite motifs including all possible mono-, di-, and trinucleotide microsatellites, and the tetranucleotide repeat (GATA)n. The overall density of microsatellites within cowbird DNA was found to be one repeat per 89 kb and the frequency of the most abundant motif, (AGC)n, was once every 382 kb. The abundance of microsatellites within the cowbird genome is estimated to be reduced approximately 15-fold compared to humans. The reduced frequency of microsatellites seen in this study is consistent with previous observations indicating reduced numbers of microsatellites and other interspersed repeats in avian DNA. In addition to providing new information concerning the abundance of microsatellites within an avian genome, these results provide useful insights for selecting cloning strategies that might be used in the development of locus-specific microsatellite markers for avian studies.

  10. Agreement between State of California and California State Employees' Association Covering Bargaining Unit 3, Education and Library, July 1, 1985 through June 30, 1987.

    ERIC Educational Resources Information Center

    California State Employees' Association, Sacramento.

    The collective bargaining agreement between the State of California and California State Employees' Association (CSEA) Bargaining Unit 3, representing all employees in education and library services, is presented covering the period July 1, 1985 through June 30, 1987. The 23 articles cover the following: recognition; CSEA representation rights;…

  11. Visualizing Subject Access for 21st Century Information Resources. Papers Presented at the Clinic on Library Applications of Data Processing (34th, Urbana, Illinois, March 2-4, 1997).

    ERIC Educational Resources Information Center

    Cochrane, Pauline Atherton, Ed.; Johnson, Eric H., Ed.

    This proceedings represents and documents in part the 16 presentations made at the 34th Annual Clinic on Library Applications of Data Processing. World Wide Web URLs that provide insight into each presentation are included. Presentations include: (1) "Hypostatizing Data Collections, Especially Bibliographic: Abstractions, Representations,…

  12. GRASP/Ada: Graphical Representations of Algorithms, Structures, and Processes for Ada. The development of a program analysis environment for Ada: Reverse engineering tools for Ada, task 2, phase 3

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1991-01-01

    The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.

  13. Error Tolerant Plan Recognition: An Empirical Investigation

    DTIC Science & Technology

    2015-05-01

    structure can differ drastically in semantics. For instance, a plan to travel to a grocery store to buy milk might coincidentally be structurally...algorithm for its ability to tolerate input errors, and that storing and leveraging state information in its plan representation substantially...proposed a novel representation for storing and organizing plans in a plan library, based on action-state pairs and abstract states. It counts the

  14. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    interface with the AAARF main process . The source code for the AAARF class-common library is in the common subdi- rectory and consists of the following files... for public release; distribution unlimited AFIT/GCE/ENG/90D-07 Graphical Representation of Parallel Algorithmic Processes THESIS Presented to the...goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor

  15. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    PubMed

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

    Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
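
    The Bloom-filter prefiltering idea can be sketched as a two-stage counter in which a k-mer enters the exact count table only after it has been seen once, so most singleton (error) k-mers never consume table space. This toy version is neither pattern-blocked nor cache-optimized, and the reads are invented.

```python
# Minimal Bloom-filter prefilter for k-mer counting (the general idea behind
# Turtle; not its actual implementation).
import hashlib

class Bloom:
    def __init__(self, m_bits=1 << 16, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k_hashes independent positions via salted BLAKE2b digests.
        for i in range(self.k):
            h = hashlib.blake2b(item.encode(), salt=bytes([i]) * 8).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        """Insert item; return True iff it was (probably) already present."""
        seen = True
        for pos in self._positions(item):
            byte, bit = divmod(pos, 8)
            if not self.bits[byte] >> bit & 1:
                seen = False
                self.bits[byte] |= 1 << bit
        return seen

def count_frequent_kmers(reads, k):
    bloom, counts = Bloom(), {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bloom.add(kmer):  # second (or later) sighting: count exactly
                counts[kmer] = counts.get(kmer, 0) + 1
    return counts  # each count is (occurrences - 1), modulo rare false positives

reads = ["ACGTACGT", "ACGTACGA", "TTTTTTTT"]
freq = count_frequent_kmers(reads, 4)
```

    Memory for the exact table then scales with the number of repeated k-mers rather than with all k-mers, which is the property the abstract calls for.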

  16. Optimization of a metatranscriptomic approach to study the lignocellulolytic potential of the higher termite gut microbiome.

    PubMed

    Marynowska, Martyna; Goux, Xavier; Sillam-Dussès, David; Rouland-Lefèvre, Corinne; Roisin, Yves; Delfosse, Philippe; Calusinska, Magdalena

    2017-09-01

    Thanks to specific adaptations developed over millions of years, the efficiency of lignin, cellulose and hemicellulose decomposition by the higher termite symbiotic system exceeds that of many other lignocellulose-utilizing environments. In particular, examination of its symbiotic microbes should reveal interesting carbohydrate-active enzymes, which are of primary interest for industry. Previous metatranscriptomic reports (high-throughput mRNA sequencing) highlight the high representation and overexpression of cellulose- and hemicellulose-degrading genes in termite hindgut digestomes, indicating the potential of this technology in the search for new enzymes. Nevertheless, several factors associated with the material sampling and library preparation steps make metatranscriptomic studies of termite gut prokaryotic symbionts challenging. In this study, we first examined the influence of the sampling strategy, including the whole termite gut and luminal fluid, on the diversity and the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. Secondly, we evaluated different commercially available kits, combined in two library preparative pipelines, for the best bacterial mRNA enrichment strategy. We showed that the sampling strategy did not significantly impact the generated results, both in terms of the representation of the microbes and their transcriptomic profiles. Nevertheless, collecting luminal fluid reduces the co-amplification of unwanted RNA species of host origin. Furthermore, for the four studied higher termite species, the library preparative pipeline employing the Ribo-Zero Gold rRNA Removal Kit "Epidemiology" in combination with the Poly(A) Purist MAG kit resulted in more efficient rRNA and poly(A) mRNA depletion (up to 98.44% rRNA removed) than the pipeline utilizing the MICROBExpress and MICROBEnrich kits.
A high correlation of both Ribo-Zero- and MICROBExpress-depleted gene expression profiles with total non-depleted RNA-seq data was shown for all studied samples, indicating no systematic skewing by the studied pipelines. We have extensively evaluated the impact of the sampling strategy and library preparation steps on the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. The presented methodological approach has great potential to enhance metatranscriptomic studies of the higher termite intestinal flora and to unravel novel carbohydrate-active enzymes.

  17. Mining the metagenome of activated biomass of an industrial wastewater treatment plant by a novel method.

    PubMed

    Sharma, Nandita; Tanksale, Himgouri; Kapley, Atya; Purohit, Hemant J

    2012-12-01

    Metagenomic libraries herald the era of magnifying the microbial world, tapping into the vast metabolic potential of uncultivated microbes, and enhancing the rate of discovery of novel genes and pathways. In this paper, we describe a method that facilitates the extraction of metagenomic DNA from activated sludge of an industrial wastewater treatment plant and its use in mining the metagenome via library construction. The efficiency of this method was demonstrated by the large representation of the bacterial genome in the constructed metagenomic libraries and by the functional clones obtained. The BAC library represented 95.6 times the bacterial genome, while the pUC library represented 41.7 times the bacterial genome. Twelve clones in the BAC library demonstrated lipolytic activity, while four clones demonstrated dioxygenase activity. Four clones in the pUC library tested positive for cellulase activity. This method, using FTA cards, not only can be used for library construction, but can also store the metagenome at room temperature.

  18. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. 
This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. The standard distribution media for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
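
    The separation the description above draws between a representation-independent GA kernel, an interchangeable representation library, and a user-supplied fitness module can be sketched as follows; the names and the one-max fitness function are illustrative, not SPLICER's actual C API.

```python
# Toy sketch of SPLICER's architecture: the kernel manipulates populations
# through a pluggable representation and a pluggable fitness function.
import random

random.seed(1)

binary_rep = {  # a minimal "representation library" for fixed-length bit strings
    "create": lambda n: [random.randint(0, 1) for _ in range(n)],
    "crossover": lambda a, b: a[: len(a) // 2] + b[len(b) // 2 :],
    "mutate": lambda g: [bit ^ (random.random() < 0.05) for bit in g],
}

def ga_kernel(rep, fitness, genome_len=20, pop_size=30, generations=40):
    """Iterate selection, crossover and mutation; knows nothing about encoding."""
    pop = [rep["create"](genome_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection, parents kept
        children = [rep["mutate"](rep["crossover"](random.choice(parents),
                                                   random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# "Fitness module": one-max, i.e. maximize the number of 1 bits.
best = ga_kernel(binary_rep, fitness=sum)
```

    Swapping `binary_rep` for, say, a permutation representation changes nothing in `ga_kernel`, which mirrors how SPLICER's kernel stays independent of the problem encoding.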

  19. Proceedings of Military Librarians' Workshop. The User and the Library (11th, Wright-Patterson Air Force Base, October 31-November 2, 1967).

    ERIC Educational Resources Information Center

    Air Force Inst. of Tech., Wright-Patterson AFB, OH.

    The purpose of this workshop was to investigate the problems the user faces in using a military library and to develop some methods of overcoming these problems. To provide balanced representation of both users and librarians, outside speakers contributed ideas that were discussed in group meetings. Each group discussed the same major topics,…

  20. A combination of LongSAGE with Solexa sequencing is well suited to explore the depth and the complexity of transcriptome

    PubMed Central

    Hanriot, Lucie; Keime, Céline; Gay, Nadine; Faure, Claudine; Dossat, Carole; Wincker, Patrick; Scoté-Blachon, Céline; Peyron, Christelle; Gandrillon, Olivier

    2008-01-01

    Background "Open" transcriptome analysis methods allow the study of gene expression without a priori knowledge of the transcript sequences. As of now, SAGE (Serial Analysis of Gene Expression), LongSAGE and MPSS (Massively Parallel Signature Sequencing) are the most widely used methods for "open" transcriptome analysis. Both LongSAGE and MPSS rely on the isolation of 21 bp tag sequences from each transcript. In contrast to LongSAGE, the high-throughput sequencing method used in MPSS enables the rapid sequencing of very large libraries containing several million tags, allowing deep transcriptome analysis. However, a bias in the complexity of the transcriptome representation obtained by MPSS was recently uncovered. Results In order to perform a deep analysis of the mouse hypothalamus transcriptome while avoiding the limitation introduced by MPSS, we combined LongSAGE with the Solexa sequencing technology and obtained a library of more than 11 million tags. We then compared it to a LongSAGE library of mouse hypothalamus sequenced with the Sanger method. Conclusion We found that the Solexa sequencing technology combined with LongSAGE is perfectly suited for deep transcriptome analysis. In contrast to MPSS, it gives a representation of the transcriptome as complex and reliable as a LongSAGE library sequenced by the Sanger method. PMID:18796152

  1. pyomocontrib_simplemodel v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William

    2017-03-02

    Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This library extends the API of Pyomo to include a simple modeling representation: a list of objectives and constraints.

  2. Polio Pictures

    MedlinePlus

    ... dimensional representation of poliovirus. A few examples from public health professionals Child in Nigeria with a leg partly ... for these sites, which offer more images/photos. Public Health Image Library (PHIL) Immunization Action Coalition Polio Eradication ...

  3. Probabilistic representation of gene regulatory networks.

    PubMed

    Mao, Linyong; Resat, Haluk

    2004-09-22

    Recent experiments have established unambiguously that biological systems can have significant cell-to-cell variations in gene expression levels, even in isogenic populations. Computational approaches to studying gene expression in cellular systems should capture such biological variations for a more realistic representation. In this paper, we present a new, fully probabilistic approach to the modeling of gene regulatory networks that allows for fluctuations in gene expression levels. The new algorithm uses a very simple representation for the genes, and simultaneously accounts for the repression or induction of the genes and for the biological variations among isogenic populations. Because of its simplicity, the introduced algorithm is a very promising approach to modeling large-scale gene regulatory networks. We have tested the new algorithm on the recently bioengineered synthetic gene network library. The good agreement between the computed and the experimental results for this library of networks, and additional tests, demonstrate that the new algorithm is robust and very successful in explaining the experimental data. The simulation software is available upon request. Supplementary material will be made available on the OUP server.
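
    The flavor of such a fully probabilistic representation can be sketched with a toy two-gene network in which each gene is reduced to an expression probability and a repressor lowers its target's firing probability; all probabilities and cell counts below are illustrative, not the paper's model.

```python
# Toy probabilistic gene pair: a repressor gene and its target.
# Cell-to-cell variation arises naturally from the per-cell random draws.
import random

random.seed(42)

def simulate(p_repressor=0.7, p_target_free=0.9, p_target_repressed=0.2,
             cells=10_000):
    """Return the fraction of cells in which the target gene is expressed."""
    expressed = 0
    for _ in range(cells):
        repressor_on = random.random() < p_repressor
        p_target = p_target_repressed if repressor_on else p_target_free
        expressed += random.random() < p_target
    return expressed / cells

frac = simulate()
# Expected fraction: 0.7 * 0.2 + 0.3 * 0.9 = 0.41, with sampling noise on top.
```

    Even this toy model reproduces the qualitative point of the abstract: isogenic "cells" given identical parameters still show a distribution of expression states rather than a single deterministic outcome.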

  4. Construction of CRISPR Libraries for Functional Screening.

    PubMed

    Carstens, Carsten P; Felts, Katherine A; Johns, Sarah E

    2018-01-01

    Identification of gene function has been aided by the ability to generate targeted gene knockouts or transcriptional repression using the CRISPR/CAS9 system. Using pooled libraries of guide RNA expression vectors that direct CAS9 to a specific genomic site allows identification of genes that are either enriched or depleted in response to a selection scheme, thus linking the affected gene to the chosen phenotype. The quality of the data generated by the screening depends on the quality of the guide RNA delivery library with regard to error rates and, especially, evenness of guide distribution. Here, we describe a method for constructing complex plasmid libraries based on pooled designed oligomers with high representation and tight distributions. The procedure allows construction of plasmid libraries of >60,000 members with a 95th/5th percentile ratio of less than 3.5.
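
    The evenness metric quoted above, the ratio of the 95th to the 5th percentile of per-guide read counts, can be computed as follows; the counts and the nearest-rank percentile convention are illustrative.

```python
# Compute the 95th/5th percentile ratio of per-guide counts, the library
# evenness criterion cited in the abstract (< 3.5 passes).

def percentile(sorted_vals, q):
    """Nearest-rank percentile on a pre-sorted list (simple convention)."""
    idx = min(len(sorted_vals) - 1, max(0, round(q * (len(sorted_vals) - 1))))
    return sorted_vals[idx]

def evenness_ratio(counts):
    s = sorted(counts)
    return percentile(s, 0.95) / percentile(s, 0.05)

# An even library, with counts clustered around 100, easily passes.
even = [90, 95, 100, 100, 105, 110, 100, 98, 102, 101] * 10
ratio = evenness_ratio(even)  # 110 / 90, about 1.22
```

    Skewed libraries push this ratio up quickly: a handful of guides at near-zero counts inflates the denominator side, which is exactly what the metric is designed to flag.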

  5. Development of an ATR Workbench for SAR Imagery

    DTIC Science & Technology

    2002-12-01

    containing a representation of the object. Each image representation contains only a subset of information about t, and that information is often...GUI. In the case of HNeT under Windows, the native COM/ ActiveX automation interface is used. This provides Python with direct access to the many...that can contain other objects such as a menu bar, buttons, an image display area, text box etc. The library also provides an event handling mechanism

  6. [Visual representation of biological structures in teaching material].

    PubMed

    Morato, M A; Struchiner, M; Bordoni, E; Ricciardi, R M

    1998-01-01

    Parameters must be defined for presenting and handling scientific information presented in the form of teaching materials. Through library research and consultations with specialists in the health sciences and in graphic arts and design, this study undertook a comparative description of the first examples of scientific illustrations of anatomy and the evolution of visual representations of knowledge on the cell. The study includes significant examples of illustrations which served as elements of analysis.

  7. A method for high-throughput production of sequence-verified DNA libraries and strain collections.

    PubMed

    Smith, Justin D; Schlecht, Ulrich; Xu, Weihong; Suresh, Sundari; Horecka, Joe; Proctor, Michael J; Aiyar, Raeka S; Bennett, Richard A O; Chu, Angela; Li, Yong Fuga; Roy, Kevin; Davis, Ronald W; Steinmetz, Lars M; Hyman, Richard W; Levy, Sasha F; St Onge, Robert P

    2017-02-13

    The low costs of array-synthesized oligonucleotide libraries are empowering rapid advances in quantitative and synthetic biology. However, high synthesis error rates, uneven representation, and lack of access to individual oligonucleotides limit the true potential of these libraries. We have developed a cost-effective method called Recombinase Directed Indexing (REDI), which involves integration of a complex library into yeast, site-specific recombination to index library DNA, and next-generation sequencing to identify desired clones. We used REDI to generate a library of ~3,300 DNA probes that exhibited > 96% purity and remarkable uniformity (> 95% of probes within twofold of the median abundance). Additionally, we created a collection of ~9,000 individually accessible CRISPR interference yeast strains for > 99% of genes required for either fermentative or respiratory growth, demonstrating the utility of REDI for rapid and cost-effective creation of strain collections from oligonucleotide pools. Our approach is adaptable to any complex DNA library, and fundamentally changes how these libraries can be parsed, maintained, propagated, and characterized. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.

  8. Reuse of the Cloud Analytics and Collaboration Environment within Tactical Applications (TacApps): A Feasibility Analysis

    DTIC Science & Technology

    2016-03-01

    Representational state transfer  Java messaging service  Java application programming interface (API)  Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL ... Java library called direct web remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been

  9. A Library of Phosphoproteomic and Chromatin Signatures for Characterizing Cellular Responses to Drug Perturbations.

    PubMed

    Litichevskiy, Lev; Peckner, Ryan; Abelin, Jennifer G; Asiedu, Jacob K; Creech, Amanda L; Davis, John F; Davison, Desiree; Dunning, Caitlin M; Egertson, Jarrett D; Egri, Shawn; Gould, Joshua; Ko, Tak; Johnson, Sarah A; Lahr, David L; Lam, Daniel; Liu, Zihan; Lyons, Nicholas J; Lu, Xiaodong; MacLean, Brendan X; Mungenast, Alison E; Officer, Adam; Natoli, Ted E; Papanastasiou, Malvina; Patel, Jinal; Sharma, Vagisha; Toder, Courtney; Tubelli, Andrew A; Young, Jennie Z; Carr, Steven A; Golub, Todd R; Subramanian, Aravind; MacCoss, Michael J; Tsai, Li-Huei; Jaffe, Jacob D

    2018-04-25

    Although the value of proteomics has been demonstrated, cost and scale are typically prohibitive, and gene expression profiling remains dominant for characterizing cellular responses to perturbations. However, high-throughput sentinel assays provide an opportunity for proteomics to contribute at a meaningful scale. We present a systematic library resource (90 drugs × 6 cell lines) of proteomic signatures that measure changes in the reduced-representation phosphoproteome (P100) and changes in epigenetic marks on histones (GCP). A majority of these drugs elicited reproducible signatures, but notable cell line- and assay-specific differences were observed. Using the "connectivity" framework, we compared signatures across cell types and integrated data across assays, including a transcriptional assay (L1000). Consistent connectivity among cell types revealed cellular responses that transcended lineage, and consistent connectivity among assays revealed unexpected associations between drugs. We further leveraged the resource against public data to formulate hypotheses for treatment of multiple myeloma and acute lymphocytic leukemia. This resource is publicly available at https://clue.io/proteomics. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
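
    The "connectivity" comparisons described above reduce, at their core, to scoring the similarity of signature vectors across assays or cell types. The actual Connectivity Map framework uses rank-based enrichment statistics; as a deliberately simplified stand-in, the cosine similarity between two signed signature vectors conveys the idea (the vectors below are invented):

```python
import math

def cosine_connectivity(sig_a, sig_b):
    """Toy connectivity score: cosine similarity between two signatures,
    i.e. vectors of signed feature changes such as phosphosite or histone
    mark log-fold-changes. +1 = concordant response, -1 = opposed."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return dot / (norm_a * norm_b)
```

Two drugs with concordant signatures across cell lines score near +1; an opposed perturbation scores near -1.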

  10. Health figures: an open source JavaScript library for health data visualization.

    PubMed

    Ledesma, Andres; Al-Musawi, Mohammed; Nieminen, Hannu

    2016-03-22

    The way we look at data has a great impact on how we can understand it, particularly when the data are related to health and wellness. Due to the increased use of self-tracking devices and the ongoing shift towards preventive medicine, better understanding of our health data is an important part of improving the general welfare of citizens. Electronic Health Records, self-tracking devices and mobile applications provide a rich variety of data, but the data often become difficult to understand. We implemented the hFigures library, inspired by the hGraph visualization with additional improvements. The purpose of the library is to provide a visual representation of the evolution of health measurements in a complete and useful manner. We investigated the usefulness and usability of the library by building an application for health data visualization in a health coaching program. We performed a user evaluation with Heuristic Evaluation, Controlled User Testing and Usability Questionnaires. In the Heuristic Evaluation the average response was 6.3 out of 7 points, and the Cognitive Walkthrough done by usability experts indicated no design or mismatch errors. In the CSUQ usability test the system obtained an average score of 6.13 out of 7, and in the ASQ usability test the overall satisfaction score was 6.64 out of 7. We developed hFigures, an open source library for visualizing a complete, accurate and normalized graphical representation of health data. The idea is based on the concept of the hGraph but it provides additional key features, including a comparison of multiple health measurements over time. We conducted a usability evaluation of the library as a key component of an application for health and wellness monitoring. The results indicate that the data visualization library was helpful in assisting users in understanding health data and its evolution over time.

  11. LIS Professionals as Knowledge Engineers.

    ERIC Educational Resources Information Center

    Poulter, Alan; And Others

    1994-01-01

    Considers the role of library and information science professionals as knowledge engineers. Highlights include knowledge acquisition, including personal experience, interviews, protocol analysis, observation, multidimensional sorting, printed sources, and machine learning; knowledge representation, including production rules and semantic nets;…

  12. Resource representation in COMPASS

    NASA Technical Reports Server (NTRS)

    Fox, Barry R.

    1991-01-01

    A set of viewgraphs on resource representation in COMPASS is given. COMPASS is an incremental, interactive, non-chronological scheduler written in Ada with an X-windows user interface. Beginning with an empty schedule, activities are added to the schedule one at a time, taking into consideration the placement of the activities already on the timeline and the resources that have been reserved for them. The order in which activities are added to the timeline and their location on the timeline are controlled by selection and placement commands invoked by the user; the order and the location are independent of each other. The COMPASS code library is a cost-effective platform for the development of new scheduling applications. It can be used off the shelf for compatible scheduling applications or as a parts library for the development of custom scheduling systems.

  13. Information retrieval algorithms: A survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raghavan, P.

    We give an overview of some algorithmic problems arising in the representation of text/image/multimedia objects in a form amenable to automated searching, and in conducting these searches efficiently. These operations are central to information retrieval and digital library systems.

  14. Fatty acid-oxidizing consortia along a nutrient gradient in the Florida Everglades.

    PubMed

    Chauhan, Ashvini; Ogram, Andrew

    2006-04-01

    The Florida Everglades is one of the largest freshwater marshes in North America and has been subject to eutrophication for decades. A gradient in P concentrations extends for several kilometers into the interior of the northern regions of the marsh, and the structure and function of soil microbial communities vary along the gradient. In this study, stable isotope probing was employed to investigate the fate of carbon from the fermentation products propionate and butyrate in soils from three sites along the nutrient gradient. For propionate microcosms, 16S rRNA gene clone libraries from eutrophic and transition sites were dominated by sequences related to previously described propionate oxidizers, such as Pelotomaculum spp. and Syntrophobacter spp. Significant representation was also observed for sequences related to Smithella propionica, which dismutates propionate to butyrate. Sequences of dominant phylotypes from oligotrophic samples did not cluster with known syntrophs but with sulfate-reducing prokaryotes (SRP) and Pelobacter spp. In butyrate microcosms, sequences clustering with Syntrophospora spp. and Syntrophomonas spp. dominated eutrophic microcosms, and sequences related to Pelospora dominated the transition microcosm. Sequences related to Pelospora spp. and SRP dominated clone libraries from oligotrophic microcosms. Sequences from diverse bacterial phyla and primary fermenters were also present in most libraries. Archaeal sequences from eutrophic microcosms included sequences characteristic of Methanomicrobiaceae, Methanospirillaceae, and Methanosaetaceae. Oligotrophic microcosms were dominated by acetotrophs, including sequences related to Methanosarcina, suggesting accumulation of acetate.

  15. Lamar and Me.

    ERIC Educational Resources Information Center

    Breinin, Charles M.

    1992-01-01

    Schools are operated at two levels: practicing teachers and administrators and nonteaching educationists. U.S. Education Secretary Lamar Alexander has no real relationship with teachers (school assembly-line workers) or parents more concerned about sex education, ethnic representation, and library contents than "world class" standards.…

  16. Determination of a Screening Metric for High Diversity DNA Libraries.

    PubMed

    Guido, Nicholas J; Handerson, Steven; Joseph, Elaine M; Leake, Devin; Kung, Li A

    2016-01-01

    The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximal sequence space and therefore more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule of thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on "coupon collector" probability theory to construct a curve of upper-bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
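
    For the idealized case of n equally abundant variants, the coupon-collector reasoning above has a closed form: after m uniform random picks each variant is missed with probability (1 - 1/n)^m, so the expected covered fraction is 1 - (1 - 1/n)^m, and solving for m gives the required sampling size. This sketch deliberately ignores the fidelity and abundance-skew terms the authors fold into their metric; the function names are illustrative:

```python
import math

def draws_for_coverage(n_variants, coverage):
    """Smallest number of uniform random draws m such that the *expected*
    fraction of n_variants seen at least once reaches `coverage`.
    Derived from E[fraction covered] = 1 - (1 - 1/n)^m."""
    return math.ceil(math.log(1.0 - coverage) / math.log(1.0 - 1.0 / n_variants))

def oversampling_factor(n_variants, coverage):
    """Required draws expressed as a multiple of the library size."""
    return draws_for_coverage(n_variants, coverage) / n_variants
```

For large libraries the factor approaches -ln(1 - coverage): roughly 3x oversampling for 95% expected coverage and about 4.6x for 99%.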

  17. Plans and progress for building a Great Lakes fauna DNA ...

    EPA Pesticide Factsheets

    DNA reference libraries provide researchers with an important tool for assessing regional biodiversity by allowing unknown genetic sequences to be assigned identities, while also providing a means for taxonomists to validate identifications. Expanding the representation of Great Lakes species in such reference libraries is an explicit component of research at EPA’s Mid-Continent Ecology Division. Our DNA reference library building efforts began in 2012 with the goal of providing barcodes for at least 5 specimens of each native and nonindigenous fish and aquatic invertebrate species currently present in the Great Lakes. The approach is to pull taxonomically validated specimens for sequencing from EPA-led sampling efforts of adult/juvenile fish, larval fish, benthic macroinvertebrates, and zooplankton, while also soliciting aid from state and federal agencies for tissue from “shopping list” organisms. The barcodes we generate are made available through the publicly accessible BOLD (Barcode of Life) database, and help inform a planned Great Lakes biodiversity inventory. To date, our submissions to BOLD are limited to fishes; of the 88 fish species listed as being present within Lake Superior, roughly half were successfully barcoded, while only 22 species met the desired quota of 5 barcoded specimens per species. As we continue to generate genomic information from our collections and the taxonomic representations become more complete, we will continue to

  18. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block, applying semantic technologies to enable knowledge discovery, sharing and integration. OceanLink is testing ontology design patterns that link together: two data repositories, Rolling Deck to Repository (R2R), Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR) Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources, mapping of DSpace resources to the OceanLink ontology, and a DSpace "data" add-on to provide resolvable linked open data representations of DSpace resources.

  19. Exploring Representations of Characters with Disabilities in Library Books

    ERIC Educational Resources Information Center

    Price, Charis Lauren; Ostrosky, Michaelene M.; Mouzourou, Chryso

    2016-01-01

    Early literacy experiences are critical for young children's development. More specifically, quality literacy experiences are beneficial to children's understanding of their world. Ensuring that early childhood literature appropriately reflects the diversity of children's life experiences can support their sense of belonging within an early…

  20. Grounded Classification: Grounded Theory and Faceted Classification.

    ERIC Educational Resources Information Center

    Star, Susan Leigh

    1998-01-01

    Compares the qualitative method of grounded theory (GT) with Ranganathan's construction of faceted classifications (FC) in library and information science. Both struggle with a core problem--the representation of vernacular words and processes, empirically discovered, which will, although ethnographically faithful, be powerful beyond the single…

  1. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as magnetic resonance imaging (MRI), confocal microscopy, electron microscopy (EM) or selective plane illumination microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details lets software development efforts concentrate on the algorithm implementation. Our framework enables biomedical image analysis software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  2. New Technology for Libraries. A Layman's Guide to Reducing Public Library Costs and Improving Services through Scientific Methods and Tools. A Background Paper for the White House Conference on Library and Information Services.

    ERIC Educational Resources Information Center

    Weisbrod, David L.

    This booklet, one of a series of background papers for the White House Conference, explores the potential of new technologies to improve library services while reducing library costs. Separate subsections describe the application of technology to the following library functions: acquisitions, catalogs and cataloging, serials control, circulation…

  3. Efficient and simpler method to construct normalized cDNA libraries with improved representations of full-length cDNAs

    DOEpatents

    Soares, Marcelo Bento; Bonaldo, Maria de Fatima

    1998-01-01

    This invention provides a method to normalize a cDNA library comprising: (a) constructing a directionally cloned library containing cDNA inserts wherein the insert is capable of being amplified by polymerase chain reaction; (b) converting a double-stranded cDNA library into single-stranded DNA circles; (c) generating single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) by polymerase chain reaction with appropriate primers; (d) hybridizing the single-stranded DNA circles converted in step (b) with the complementary single-stranded nucleic acid molecules generated in step (c) to produce partial duplexes to an appropriate Cot; and (e) separating the unhybridized single-stranded DNA circles from the hybridized DNA circles, thereby generating a normalized cDNA library. This invention also provides a method to normalize a cDNA library wherein the generating of single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) is by excising cDNA inserts from the double-stranded cDNA library; purifying the cDNA inserts from cloning vectors; and digesting the cDNA inserts with an exonuclease. This invention further provides a method to construct a subtractive cDNA library following the steps described above. This invention further provides normalized and/or subtractive cDNA libraries generated by the above methods.

  4. Efficient and simpler method to construct normalized cDNA libraries with improved representations of full-length cDNAs

    DOEpatents

    Soares, M.B.; Fatima Bonaldo, M. de

    1998-12-08

    This invention provides a method to normalize a cDNA library comprising: (a) constructing a directionally cloned library containing cDNA inserts wherein the insert is capable of being amplified by polymerase chain reaction; (b) converting a double-stranded cDNA library into single-stranded DNA circles; (c) generating single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) by polymerase chain reaction with appropriate primers; (d) hybridizing the single-stranded DNA circles converted in step (b) with the complementary single-stranded nucleic acid molecules generated in step (c) to produce partial duplexes to an appropriate Cot; and (e) separating the unhybridized single-stranded DNA circles from the hybridized DNA circles, thereby generating a normalized cDNA library. This invention also provides a method to normalize a cDNA library wherein the generating of single-stranded nucleic acid molecules complementary to the single-stranded DNA circles converted in step (b) is by excising cDNA inserts from the double-stranded cDNA library; purifying the cDNA inserts from cloning vectors; and digesting the cDNA inserts with an exonuclease. This invention further provides a method to construct a subtractive cDNA library following the steps described above. This invention further provides normalized and/or subtractive cDNA libraries generated by the above methods. 25 figs.

  5. Artificial Intelligence and Expert Systems Research and Their Possible Impact on Information Science.

    ERIC Educational Resources Information Center

    Borko, Harold

    1985-01-01

    Defines artificial intelligence (AI) and expert systems; describes library applications utilizing AI to automate creation of document representations, request formulations, and design and modify search strategies for information retrieval systems; discusses expert system development for information services; and reviews impact of these…

  6. Dimensions of Drug Information

    ERIC Educational Resources Information Center

    Sharp, Mark E.

    2011-01-01

    The high number, heterogeneity, and inadequate integration of drug information resources constitute barriers to many drug information usage scenarios. In the biomedical domain there is a rich legacy of knowledge representation in ontology-like structures that allows us to connect this problem both to the very mature field of library and…

  7. PyBioMed: a python library for various molecular representations of chemicals, proteins and DNAs and their interactions.

    PubMed

    Dong, Jie; Yao, Zhi-Jiang; Zhang, Lin; Luo, Feijun; Lin, Qinlu; Lu, Ai-Ping; Chen, Alex F; Cao, Dong-Sheng

    2018-03-20

    With the increasing development of biotechnology and informatics technology, publicly available data in chemistry and biology are undergoing explosive growth. This wealth of information needs to be extracted and transformed into useful knowledge by various data mining methods. Considering the rate at which data accumulate in chemistry and biology, new tools that process and interpret large and complex interaction data are increasingly important. So far, there are no suitable toolkits that effectively link chemical and biological space in terms of molecular representation. To further explore these complex data, an integrated toolkit for various molecular representations is urgently needed, one that can be easily combined with data mining algorithms to form a full data analysis pipeline. Herein, the python library PyBioMed is presented, which comprises functionality for downloading various molecular objects by ID, pretreating molecular structures, and computing various molecular descriptors for chemicals, proteins, DNAs and their interactions. PyBioMed is a feature-rich and highly customizable python library for characterizing various complex chemical and biological molecules and interaction samples. The current version of PyBioMed can calculate 775 chemical descriptors and 19 kinds of chemical fingerprints, 9920 protein descriptors based on protein sequences, more than 6000 DNA descriptors from nucleotide sequences, and interaction descriptors from pairwise samples using three different combining strategies. Several examples and five real-life applications are provided to guide users in using PyBioMed as an integral part of data analysis projects. Using PyBioMed, users can conveniently run a full pipeline from retrieving molecular data, through pretreatment and molecular representation, to constructing machine learning models. PyBioMed provides user-friendly and highly customizable APIs to calculate various features of biological molecules and complex interaction samples, aiming at integrated analysis pipelines from data acquisition, data checking, and descriptor calculation to modeling. PyBioMed is freely available at http://projects.scbdd.com/pybiomed.html.
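
    Sequence-based descriptors of the kind PyBioMed computes can be as simple as composition fractions. The toy sketch below illustrates one such descriptor in plain Python; it is deliberately generic and is not PyBioMed's actual API (see the project page for that):

```python
def aa_composition(seq):
    """Amino acid composition: the fraction of each of the 20 standard
    residues in a protein sequence, one of the simplest protein descriptors."""
    seq = seq.upper()
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    n = len(seq)
    return {aa: seq.count(aa) / n for aa in alphabet}
```

A real toolkit layers hundreds of further descriptors (autocorrelation, CTD, pseudo-amino-acid composition, and so on) on top of primitives like this before handing the feature vectors to a machine learning model.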

  8. A scalable architecture for incremental specification and maintenance of procedural and declarative clinical decision-support knowledge.

    PubMed

    Hatsek, Avner; Shahar, Yuval; Taieb-Maimon, Meirav; Shalom, Erez; Klimov, Denis; Lunenfeld, Eitan

    2010-01-01

    Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented Gesher, a graphical framework for specification of declarative and procedural clinical knowledge. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (overall, 27 participants). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams including a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians' assessment was significantly lower than that of the knowledge engineers.

  9. SNP discovery by high-throughput sequencing in soybean

    PubMed Central

    2010-01-01

    Background With the advance of new massively parallel genotyping technologies, quantitative trait loci (QTL) fine mapping and map-based cloning have become more achievable in identifying genes for important and complex traits. Development of high-density genetic markers in the QTL regions of specific mapping populations is essential for fine-mapping and map-based cloning of economically important genes. Single nucleotide polymorphisms (SNPs) are the most abundant form of genetic variation existing between the diverse genotypes that are usually used for QTL mapping studies. The massively parallel sequencing technologies (Roche GS/454, Illumina GA/Solexa, and ABI/SOLiD) have been widely applied to identify genome-wide sequence variations. However, it remains unclear whether sequence data at low sequencing depth are sufficient to detect the variations existing in any QTL regions of interest in a crop genome, and how to prepare sequencing samples for a complex genome such as soybean. Therefore, with the aims of identifying SNP markers in a cost-effective way for fine-mapping several QTL regions, and testing the validation rate of the putative SNPs predicted with Solexa short sequence reads at low sequencing depth, we evaluated a pooled DNA fragment reduced representation library and SNP detection methods applied to short read sequences generated by Solexa high-throughput sequencing technology. Results A total of 39,022 putative SNPs were identified by the Illumina/Solexa sequencing system using a reduced representation DNA library of two parental lines of a mapping population. The validation rates of these putative SNPs predicted with low and high stringency were 72% and 85%, respectively. One hundred sixty-four SNP markers resulting from this validation were selectively chosen to target a known QTL, thereby increasing the marker density of the targeted region to one marker per 42 kb.
Conclusions We have demonstrated how to quickly identify large numbers of SNPs for fine mapping of QTL regions by applying massively parallel sequencing combined with genome complexity reduction techniques. This SNP discovery approach is more efficient for targeting multiple QTL regions in the same genetic population, and can be applied to other crops. PMID:20701770

  10. Characteristics of Resources Represented in the OCLC CORC Database.

    ERIC Educational Resources Information Center

    Connell, Tschera Harkness; Prabha, Chandra

    2002-01-01

    Examines the characteristics of Web resources in Online Computer Library Center's (OCLC) Cooperative Online Resource Catalog (CORC) in terms of subject matter, source of content, publication patterns, and units of information chosen for representation in the database. Suggests that the ability to successfully use a database depends on…

  11. The Continuity Project. Spring/Summer 1998 Report.

    ERIC Educational Resources Information Center

    Wasilko, Peter J.

    The Continuity Project is a research, development, and technology transfer initiative aimed at creating a Library of the Future by combining features of an online public access catalog (OPAC) and a campuswide information system (CWIS) with advanced facilities drawn from such areas as artificial intelligence (AI), knowledge representation (KR),…

  12. Modeling the missile-launch tube problem in DYSCO

    NASA Technical Reports Server (NTRS)

    Berman, Alex; Gustavson, Bruce A.

    1989-01-01

    DYSCO is a versatile, general purpose dynamic analysis program which assembles equations and solves dynamics problems. The executive manages a library of technology modules which contain routines that compute the matrix coefficients of the second order ordinary differential equations of the components. The executive performs the coupling of the equations of the components and manages the solution of the coupled equations. Any new component representation may be added to the library if, given the state vector, a FORTRAN program can be written to compute M, C, K, and F. The problem described demonstrates the generality of this statement.
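
    The component contract described above (given the state vector, compute M, C, K, and F for the second-order system M·xddot + C·xdot + K·x = F) can be mirrored in a toy one-degree-of-freedom example. DYSCO itself takes FORTRAN modules; this Python sketch only illustrates the interface idea, and all names and parameter values are invented:

```python
def spring_mass_damper(state, m=1.0, c=0.2, k=10.0, f_ext=0.0):
    """Toy DYSCO-style component: given the state vector (x, xdot),
    return the coefficients of  M*xddot + C*xdot + K*x = F."""
    return m, c, k, f_ext

def acceleration(state, **params):
    """What an executive would do after assembling the equations:
    solve the (here scalar) equation of motion for the acceleration."""
    x, xdot = state
    M, C, K, F = spring_mass_damper(state, **params)
    return (F - C * xdot - K * x) / M
```

An executive coupling several such components would stack their M, C, K contributions into system matrices before solving; the scalar case above shows the per-component contract only.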

  13. Increasing the Coverage of Medicinal Chemistry-Relevant Space in Commercial Fragments Screening

    PubMed Central

    2014-01-01

Analyzing the chemical space coverage in commercial fragment screening collections revealed that the overlap between bioactive medicinal chemistry substructures and rule-of-three-compliant fragments is only ∼25%. We recommend including these fragments in fragment screening libraries to maximize confidence in discovering hit matter within known bioactive chemical space, while incorporation of nonoverlapping substructures could offer novel hits in screening libraries. Using principal component analysis, we find that polar and three-dimensional substructures display a higher-than-average enrichment of bioactive compounds, indicating that increasing the representation of these substructures may be beneficial in fragment screening. PMID:24405118

  14. Discovery of SNPs for individual identification by reduced representation sequencing of moose (Alces alces).

    PubMed

    Blåhed, Ida-Maria; Königsson, Helena; Ericsson, Göran; Spong, Göran

    2018-01-01

Monitoring of wild animal populations is challenging, yet reliable information about population processes is important for both management and conservation efforts. Access to molecular markers, such as SNPs, enables population monitoring through genotyping of various DNA sources. We have developed 96 high-quality SNP markers for individual identification of moose (Alces alces), an economically and ecologically important top herbivore in boreal regions. Reduced representation libraries constructed from 34 moose were high-throughput de novo sequenced, generating nearly 50 million read pairs. About 50 000 stacks of aligned reads containing one or more SNPs were discovered with the Stacks pipeline. Several quality criteria were applied to the candidate SNPs to find markers informative at the individual level and well representative of the population. An empirical validation by genotyping of sequenced individuals and additional moose resulted in the selection of a final panel of 86 high-quality autosomal SNPs. Additionally, five sex-specific SNPs and five SNPs for sympatric species diagnostics are included in the panel. The genotyping error rate was 0.002 for the total panel, and probabilities of identity were low enough to separate individuals with high confidence. Moreover, the autosomal SNPs were also highly informative for population-level analyses. The potential applications of this SNP panel are thus many, including investigations of population size, sex ratios, relatedness, reproductive success and population structure. Ideally, SNP-based studies could improve today's population monitoring and increase our knowledge about moose population dynamics.
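
The "probability of identity" the panel is judged on is a standard population-genetics statistic (the chance that two unrelated individuals share a multilocus genotype). A minimal sketch of the textbook formula, not the paper's exact pipeline, with invented allele frequencies:

```python
# Probability of identity for unrelated individuals:
# per locus, PI = sum_i p_i^4 + sum_{i<j} (2 p_i p_j)^2,
# multiplied across independent loci.
from itertools import combinations

def pi_locus(freqs):
    """P(two random unrelated genotypes match) at one locus."""
    return (sum(p**4 for p in freqs)
            + sum((2 * p * q)**2 for p, q in combinations(freqs, 2)))

def pi_panel(loci):
    """Multiply per-locus values across independent loci."""
    pi = 1.0
    for freqs in loci:
        pi *= pi_locus(freqs)
    return pi

print(pi_locus([0.5, 0.5]))             # 0.375 for a balanced biallelic SNP
print(pi_panel([[0.5, 0.5]] * 86))      # vanishingly small across 86 SNPs
```

Even with each SNP alone being uninformative (PI = 0.375), 86 such loci drive the panel-wide PI far below any realistic population size, which is why individuals can be separated with high confidence.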

  15. bioWeb3D: an online webGL 3D data visualisation tool.

    PubMed

    Pettit, Jean-Baptiste; Marioni, John C

    2013-06-07

Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser based), cross platform and accessible to non-expert users. An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets.

  16. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used to represent the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and sampling algorithms in MCNP and Geant4 are also reviewed.

  17. lsjk—a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions

    NASA Astrophysics Data System (ADS)

    Kalmykov, M. Yu.; Sheplyakov, A.

    2005-10-01

Generalized log-sine functions Lsj(k)(θ) appear in higher-order ɛ-expansions of different Feynman diagrams. We present an algorithm for the numerical evaluation of these functions for real arguments. This algorithm is implemented as a C++ library with arbitrary-precision arithmetic for integer 0⩽k⩽9 and j⩾2. Some new relations and representations of the generalized log-sine functions are given. Program summary: Title of program: lsjk. Catalogue number: ADVS. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVS. Program obtained from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing terms: GNU General Public License. Computers: all. Operating systems: POSIX. Programming language: C++. Memory required to execute: depending on the complexity of the problem; at least 32 MB RAM recommended. No. of lines in distributed program, including testing data, etc.: 41 975. No. of bytes in distributed program, including testing data, etc.: 309 156. Distribution format: tar.gz. Other programs called: the CLN library for arbitrary-precision arithmetic, version 1.1.5 or greater. External files needed: none. Nature of the physical problem: numerical evaluation of the generalized log-sine functions for real argument in the region 0<θ<π; these functions appear in Feynman integrals. Method of solution: series representation for real argument in the region 0<θ<π. Restrictions on the complexity of the problem: limited up to Lsj(9)(θ), where j is an arbitrary integer; thus all functions up to weight 12 in the region 0<θ<π can be evaluated. The algorithm can be extended to higher values of k (k>9) without modification. Typical running time: depends on the complexity of the problem; see text below.
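
For orientation, the function being evaluated is Ls_j(θ) = -∫₀^θ ln^(j-1)|2 sin(x/2)| dx. The sketch below checks that definition with a crude fixed-precision midpoint rule; it is emphatically not the lsjk algorithm, which uses arbitrary-precision series representations instead.

```python
# Brute-force check of the defining integral of the (k = 0) log-sine function
#   Ls_j(theta) = -integral_0^theta ln^(j-1)|2 sin(x/2)| dx
# via the midpoint rule; midpoints never touch x = 0, so the integrable
# logarithmic singularity there causes no trouble.
import math

def log_sine(j, theta, n=100_000):
    h = theta / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += math.log(abs(2.0 * math.sin(x / 2.0))) ** (j - 1)
    return -h * total

# Ls_2 is the Clausen function Cl_2, and Cl_2(pi/2) is Catalan's constant.
print(log_sine(2, math.pi / 2))  # ≈ 0.9159655941...
```

A few digits of agreement with Catalan's constant is all fixed-precision quadrature can offer; the whole point of lsjk is to push such evaluations to arbitrary precision.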

  18. Imaginary Indians: Representations of Native Americans in Scholastic Reading Club

    ERIC Educational Resources Information Center

    Chaudhri, Amina; Schau, Nicole

    2016-01-01

    Scholastic Reading Clubs are a popular and inexpensive way for teachers to build classroom libraries and for parents to purchase books for their children. The books made accessible to children through the order forms are assumed to be suitable for young readers in terms of their content, popularity, currency, and curricular relevance.…

  19. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    ERIC Educational Resources Information Center

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  20. TORPEDO: Networked Access to Full-Text and Page-Image Representations of Physics Journals and Technical Reports.

    ERIC Educational Resources Information Center

    Atkinson, Roderick D.; Stackpole, Laurie E.

    1995-01-01

    The Naval Research Laboratory (NRL) Library and the American Physical Society (APS) are experimenting with electronically disseminating journals and reports in a project called TORPEDO (The Optical Retrieval Project: Electronic Documents Online). Scanned journals and reports are converted to ASCII, then attached to bibliographic information, and…

  1. Multifractal Characterization of Geologic Noise for Improved UXO Detection and Discrimination

    DTIC Science & Technology

    2008-03-01

    12 Recovery of the Universal Multifractal Parameters ...dipole-model to each magnetic anomaly and compares the extracted model parameters with a library of UXO items. They found that remnant magnetization...the survey parameters , and the geologic environment. In this pilot study we have focused on the multifractal representation of natural variations

  2. The Representation of Library Value in Extra-Institutional Evaluations of University Quality

    ERIC Educational Resources Information Center

    Jackson, Brian

    2017-01-01

    The ways in which university quality assessments are developed reveal a great deal about value constructs surrounding higher education. Measures developed and consumed by external stakeholders, in particular, indicate which elements of academia are broadly perceived to be most reflective of quality. This paper examines the historical context of…

  3. A python tool for the implementation of domain-specific languages

    NASA Astrophysics Data System (ADS)

    Dejanović, Igor; Vaderna, Renata; Milosavljević, Gordana; Simić, Miloš; Vuković, Željko

    2017-07-01

    In this paper we describe textX, a meta-language and a tool for building Domain-Specific Languages. It is implemented in Python using Arpeggio PEG (Parsing Expression Grammar) parser library. From a single language description (grammar) textX will build a parser and a meta-model (a.k.a. abstract syntax) of the language. The parser is used to parse textual representations of models conforming to the meta-model. As a result of parsing, a Python object graph will be automatically created. The structure of the object graph will conform to the meta-model defined by the grammar. This approach frees a developer from the need to manually analyse a parse tree and transform it to other suitable representation. The textX library is independent of any integrated development environment and can be easily integrated in any Python project. The textX tool works as a grammar interpreter. The parser is configured at run-time using the grammar. The textX tool is a free and open-source project available at GitHub.
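
The grammar-to-object-graph workflow the abstract describes (parse a textual model, get back Python objects conforming to the meta-model) can be illustrated without textX itself. The stdlib-only stand-in below hand-parses one invented toy DSL; the DSL syntax and all names are made up for illustration, and textX derives the equivalent parser automatically from a grammar.

```python
# Stand-in for the textX workflow: text in one tiny invented DSL goes in,
# a Python object graph comes out. textX would generate this parser from a
# grammar string; here it is hand-written with a regular expression.
import re
from types import SimpleNamespace

def parse_model(text):
    """Parse 'entity Name { field field ... }' declarations into objects."""
    model = SimpleNamespace(entities=[])
    for name, body in re.findall(r"entity\s+(\w+)\s*\{([^}]*)\}", text):
        model.entities.append(SimpleNamespace(name=name, fields=body.split()))
    return model

m = parse_model("entity Person { name age } entity Dog { owner }")
print([e.name for e in m.entities])  # ['Person', 'Dog']
```

The benefit textX adds over such hand-rolling is exactly what the abstract states: the grammar is the single source of truth, and the developer never touches a parse tree.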

  4. Trying To Reduce Your Technostress?: Helpful Activities for Teachers and Library Media Specialists.

    ERIC Educational Resources Information Center

    McKenzie, Barbara K.; And Others

    1997-01-01

    As pressure increases to integrate technology into instruction, many teachers and library media specialists are having difficulty coping with "technostress." Presents suggestions and activities for teachers and library media specialists designed to reduce "technostress." (PEN)

  5. Functional Requirements for Information Resource Provenance on the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCusker, James P.; Lebo, Timothy; Graves, Alvaro

We provide a means to formally explain the relationship between HTTP URLs and the representations returned when they are requested. According to existing World Wide Web architecture, the URL serves as an identifier for a semiotic referent while the document returned via HTTP serves as a representation of the same referent. This begins with two sides of a semiotic triangle; the third side is the relationship between the URL and the representation received. We complete this description by extending the library science resource model Functional Requirements for Bibliographic Resources (FRBR) with cryptographic message and content digests to create a Functional Requirements for Information Resources (FRIR). We show how applying the FRIR model to HTTP GET and POST transactions disambiguates the many relationships between a given URL and all representations received from its request, provides fine-grained explanations that are complementary to existing explanations of web resources, and integrates easily into the emerging W3C provenance standard.
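
The core mechanism here, distinguishing one URL from the many representations it can return by attaching a cryptographic content digest to each representation, can be sketched in a few lines. The URL and response bodies below are invented; only the digesting technique is what the abstract describes.

```python
# One URL, two distinct representations: content digests tell them apart
# even though the identifier is the same (the FRIR disambiguation idea).
import hashlib

def content_digest(body: bytes) -> str:
    """SHA-256 hex digest of a representation's bytes."""
    return hashlib.sha256(body).hexdigest()

url = "http://example.org/resource"          # hypothetical identifier
representation_a = b'{"value": 1}'           # response received on Monday
representation_b = b'{"value": 2}'           # response received on Tuesday

print(content_digest(representation_a) == content_digest(representation_b))  # False
```

Recording (url, digest) pairs rather than URLs alone is what lets a provenance record state exactly which representation of a resource was actually used.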

  6. Analysis of the Browns Ferry Unit 3 irradiation experiments. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, G.L.

    1984-11-01

The results of the analysis of two experiments performed at the Browns Ferry-3 reactor are presented. These calculations utilize state-of-the-art neutron transport techniques and a new neutron cross-section library that has been developed for LWR applications. The calculations agree well with the experimental data obtained in irradiations inside the reactor vessel. For the measurements performed in the reactor cavity, the calculations agree well at the reactor midplane. Accurate determination of the axial distribution of the neutron fluence in the reactor cavity depends on having a concise representation of the axial-void distribution in the core. Detailed data are presented describing the procedures used in the generation of the new cross-section library, which has been named SAILOR. This library is available from the Radiation-Shielding Information Center.

  7. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.

Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q) and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  9. Reuse, Reduce, Recycle.

    ERIC Educational Resources Information Center

    Briscoe, Georgia

    1991-01-01

    Discussion of recycling paper in law libraries is also applicable to other types of libraries. Results of surveys of law libraries that investigated recycling practices in 1987 and again in 1990 are reported, and suggestions for reducing the amount of paper used and reusing as much as possible are offered. (LRW)

  10. Template-based combinatorial enumeration of virtual compound libraries for lipids

    PubMed Central

    2012-01-01

A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast and contains no external dependencies. It is an open source package and freely available under the terms of the modified BSD license. PMID:23006594

  11. Template-based combinatorial enumeration of virtual compound libraries for lipids.

    PubMed

    Sud, Manish; Fahy, Eoin; Subramaniam, Shankar

    2012-09-25

A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast and contains no external dependencies. It is an open source package and freely available under the terms of the modified BSD license.
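
Template-based enumeration, combining a pre-defined template with lists of chain abbreviations rather than scaffolds with R-groups, reduces to a Cartesian product over the chain lists. The sketch below illustrates the idea only; the template string and its placeholder syntax are invented and are not the actual LipidMapsTools input formats, though the PC(sn1/sn2) abbreviation style follows the LIPID MAPS convention the abstract cites.

```python
# Template-based combinatorial enumeration: one template, lists of chain
# abbreviations, every combination filled in.
from itertools import product

TEMPLATE = "PC({sn1}/{sn2})"           # glycerophosphocholine-style template
CHAINS = ["16:0", "18:0", "18:1"]      # chain abbreviations (carbons:double bonds)

def enumerate_library(template, chains):
    """Fill the template's two chain slots with every pair of abbreviations."""
    return [template.format(sn1=a, sn2=b) for a, b in product(chains, repeat=2)]

lib = enumerate_library(TEMPLATE, CHAINS)
print(len(lib))   # 9
print(lib[0])     # PC(16:0/16:0)
```

Because the templates fix the 2D layout in advance, every enumerated structure inherits a consistent drawing, which is the control that scaffold-plus-R-group packages cannot provide.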

  12. Nonpoint-Source Pollution Issues. January 1990-November 1994. QB 95-01. Quick Bibliography Series.

    ERIC Educational Resources Information Center

    Makuch, Joe

    Citations in this bibliography are intended to be a substantial resource for recent investigations (January 1990-November 1994) on nonpoint source pollution and were obtained from a search of the National Agriculture Library's AGRICOLA database. The 196 citations are indexed by author and subject. A representation of the search strategy is…

  13. Efficient Representation and Matching of Texts and Images in Scanned Book Collections

    ERIC Educational Resources Information Center

    Yalniz, Ismet Zeki

    2014-01-01

    Millions of books from public libraries and private collections have been scanned by various organizations in the last decade. The motivation is to preserve the written human heritage in electronic format for durable storage and efficient access. The information buried in these large book collections has always been of major interest for scholars…

  14. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) in the case of solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This is an important step, carried out so that each particle in PSO can represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and a random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan; the experiments were implemented in MATLAB. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.

  15. Revised Extended Grid Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, Roger L.

The Revised Eolus Grid Library (REGL) is a mesh-tracking library that was developed for use with the MCNP6™ computer code so that (radiation) particles can track on an unstructured mesh. The unstructured mesh is a finite element representation of any geometric solid model created with a state-of-the-art CAE/CAD tool. The mesh-tracking library is written using modern Fortran and programming standards; the library is Fortran 2003 compliant. The library was created with a defined application programmer interface (API) so that it could easily integrate with other particle tracking/transport codes. The library does not handle parallel processing via the message passing interface (MPI), but has been used successfully where the host code handles the MPI calls. The library is thread-safe and supports the OpenMP paradigm. As a library, all features are available through the API, and overall a tight coupling between it and the host code is required. Features of the library are summarized in the following list: can accommodate first- and second-order 4, 5, and 6-sided polyhedra; any combination of element types may appear in a single geometry model; parts may not contain tetrahedra mixed with other element types; pentahedra and hexahedra can be together in the same part; robust handling of overlaps and gaps; tracks element-to-element to produce path-length results at the element level; finds element numbers for a given mesh location; finds intersection points on element faces for the particle tracks; produces a data file for post-processing results analysis; reads Abaqus .inp input (ASCII) files to obtain information for the global mesh model; supports parallel input processing via MPI; and supports parallel particle transport by both MPI and OpenMP.

  16. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser based), cross platform and accessible to non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  17. Right service, right place: optimising utilisation of a community nursing service to reduce planned re-presentations to the emergency department.

    PubMed

    Lawton, Jessica Kirsten; Kinsman, Leigh; Dalton, Lisa; Walsh, Fay; Bryan, Helen; Williams, Sharon

    2017-01-01

Congruent with rising international emergency department (ED) demand, a focus on strategies and services to reduce the burden on EDs and improve patient outcomes is necessary. Planned re-presentations of non-urgent patients at a regional Australian hospital exceeded 1200 visits during the 2013-2014 financial year. Planned re-presentations perpetuate demand and signify a lack of alternative services for non-urgent patients. The Community Nursing Enhanced Connections Service (CoNECS) evolved collaboratively between acute care and community services in 2014 to reduce planned ED re-presentations. This study aimed to investigate the evolution and impact of a community nursing service to reduce planned re-presentations to a regional Australian ED and to identify enablers and barriers to the intervention's effectiveness. A mixed-methods approach evaluated the impact of CoNECS. Data from hospital databases, including numbers of planned ED re-presentations by month, time of day, age, gender and reason, were used to calculate referral rates to CoNECS. These results informed two semistructured focus groups with ED and community nurses. The researchers used a theoretical lens, 'diffusion of innovation', to understand how this service could inform future interventions. Analyses showed that annual planned ED re-presentations decreased by 43% (527 presentations) after implementation. Three themes emerged from the focus groups: right service at the right time; nursing uncertainty and system disconnect; and medical disengagement. CoNECS reduced overall planned ED re-presentations and was sustained longer than many complex service-level interventions. Factors supporting the service were endorsement from senior administration and strong leadership to drive responsive quality improvement strategies. This study identified a promising alternative service outside the ED, highlighting possibilities for other hospital emergency services aiming to reduce planned re-presentations.

  18. ImgLib2--generic image processing in Java.

    PubMed

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji-Plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics Online. saalfeld@mpi-cbg.de

  19. Rising Costs and Dwindling Budgets Force Libraries to Make Damaging Cuts in Collections and Services.

    ERIC Educational Resources Information Center

    Nicklin, Julie L.

    1992-01-01

    Financial pressures brought on by economic recession and increasing costs of academic materials are causing academic libraries to cancel journal subscriptions, reduce book orders, neglect book preservation, cut staff positions, and reduce general services while seeking new revenue sources. Examples of libraries cutting back include those at…

20. Amino Acid Alphabet Size in Protein Evolution Experiments: Better to Search a Small Library Thoroughly or a Large Library Sparsely?

    PubMed Central

    Muñoz, Enrique

    2015-01-01

We compare the results obtained from searching a smaller library thoroughly versus searching a more diverse, larger library sparsely. We study protein evolution with reduced amino acid alphabets by simulating directed evolution experiments at three different alphabet sizes: 20, 5 and 2. We employ a physical model for evolution, the generalized NK model, that has proved successful in modeling protein evolution, antibody evolution, and T cell selection. We find that antibodies with higher affinity are found by sparsely searching a library with a larger alphabet than by thoroughly searching a smaller library, even with well-designed reduced libraries. We find ranked amino acid usage frequencies in agreement with observations of the CDR-H3 variable region of human antibodies. PMID:18375453
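
A reduced amino acid alphabet, as simulated above, is simply a many-to-one mapping from the 20 standard residues onto fewer classes. The size-2 case is the classic hydrophobic/polar (HP) reduction; the class assignment below is one common convention chosen for illustration, not necessarily the paper's exact grouping.

```python
# Size-2 reduced alphabet: collapse 20 amino acids to hydrophobic (H) / polar (P).
HYDROPHOBIC = set("AVLIMFWYC")   # one common hydrophobicity convention

def reduce_to_hp(sequence):
    """Map a one-letter amino acid sequence onto the two-letter HP alphabet."""
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in sequence)

print(reduce_to_hp("ACDEFGHIK"))  # 'HHPPHPPHP'
```

Shrinking the alphabet from 20 to 2 shrinks a length-n library from 20^n to 2^n sequences, which is exactly the thorough-small versus sparse-large trade-off the abstract examines.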

  1. A Prototype Digital Library for 3D Collections: Tools To Capture, Model, Analyze, and Query Complex 3D Data.

    ERIC Educational Resources Information Center

    Rowe, Jeremy; Razdan, Anshuman

    The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…

  2. Animal Welfare Legislation, Regulations, and Guidelines, January 1990-January 1995. Quick Bibliography Series: QB 95-18.

    ERIC Educational Resources Information Center

    Allen, Tim

    Citations in this bibliography are intended to be a substantial resource for recent investigations (January 1990-January 1995) on animal welfare policy and were obtained from a search of the National Agriculture Library's AGRICOLA database. A representation of the search strategy is included. The 244 citations range in topic and include animal…

  3. Embedding and Publishing Interactive, 3-Dimensional, Scientific Figures in Portable Document Format (PDF) Files

    PubMed Central

    Barnes, David G.; Vidiassov, Michail; Ruthensteiner, Bernhard; Fluke, Christopher J.; Quayle, Michelle R.; McHenry, Colin R.

    2013-01-01

    With the latest release of the S2PLOT graphics library, embedding interactive, 3-dimensional (3-d) scientific figures in Adobe Portable Document Format (PDF) files is simple, and can be accomplished without commercial software. In this paper, we motivate the need for embedding 3-d figures in scholarly articles. We explain how 3-d figures can be created using the S2PLOT graphics library, exported to Product Representation Compact (PRC) format, and included as fully interactive, 3-d figures in PDF files using the movie15 LaTeX package. We present new examples of 3-d PDF figures, explain how they have been made, validate them, and comment on their advantages over traditional, static 2-dimensional (2-d) figures. With the judicious use of 3-d rather than 2-d figures, scientists can now publish, share and archive more useful, flexible and faithful representations of their study outcomes. The article you are reading does not have embedded 3-d figures. The full paper, with embedded 3-d figures, is recommended and is available as a supplementary download from PLoS ONE (File S2). PMID:24086243

  4. Embedding and publishing interactive, 3-dimensional, scientific figures in Portable Document Format (PDF) files.

    PubMed

    Barnes, David G; Vidiassov, Michail; Ruthensteiner, Bernhard; Fluke, Christopher J; Quayle, Michelle R; McHenry, Colin R

    2013-01-01

    With the latest release of the S2PLOT graphics library, embedding interactive, 3-dimensional (3-d) scientific figures in Adobe Portable Document Format (PDF) files is simple, and can be accomplished without commercial software. In this paper, we motivate the need for embedding 3-d figures in scholarly articles. We explain how 3-d figures can be created using the S2PLOT graphics library, exported to Product Representation Compact (PRC) format, and included as fully interactive, 3-d figures in PDF files using the movie15 LaTeX package. We present new examples of 3-d PDF figures, explain how they have been made, validate them, and comment on their advantages over traditional, static 2-dimensional (2-d) figures. With the judicious use of 3-d rather than 2-d figures, scientists can now publish, share and archive more useful, flexible and faithful representations of their study outcomes. The article you are reading does not have embedded 3-d figures. The full paper, with embedded 3-d figures, is recommended and is available as a supplementary download from PLoS ONE (File S2).

  5. Quantum probability ranking principle for ligand-based virtual screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost-effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three levels: first, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space; second, it estimates the similarity between chemical libraries and reference structures using a quantum-based similarity searching method; finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  6. Quantum probability ranking principle for ligand-based virtual screening

    NASA Astrophysics Data System (ADS)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost-effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three levels: first, at the representation level, the model develops a new framework of molecular representation by connecting molecular compounds with a mathematical quantum space; second, it estimates the similarity between chemical libraries and reference structures using a quantum-based similarity searching method; finally, it ranks the molecules using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  7. libmpdata++ 1.0: a library of parallel MPDATA solvers for systems of generalised transport equations

    NASA Astrophysics Data System (ADS)

    Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.

    2015-04-01

    This paper accompanies the first release of libmpdata++, a C++ library implementing the multi-dimensional positive-definite advection transport algorithm (MPDATA) on a regular structured grid. The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; a shallow-water system compared with an analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of libmpdata++ design. The design, based on the principle of separation of concerns, prioritises the user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.
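
    The basic MPDATA iteration described above can be sketched in a few lines: a donor-cell upwind pass followed by a second upwind pass driven by an antidiffusive pseudo-velocity. The sketch below is a minimal 1-D, constant-Courant-number, periodic-boundary illustration only; libmpdata++ itself is a C++ library and covers far more (third-order variants, infinite-gauge, flux correction, elliptic solvers).

```python
def upwind(psi, C):
    """One donor-cell upwind pass on a periodic 1-D grid.

    flux[i] is the flux across the edge between cells i and i+1
    for a constant Courant number C."""
    n = len(psi)
    flux = [max(C, 0.0) * psi[i] + min(C, 0.0) * psi[(i + 1) % n]
            for i in range(n)]
    return [psi[i] - (flux[i] - flux[i - 1]) for i in range(n)]

def mpdata_step(psi, C, eps=1e-15):
    """Second-order MPDATA: upwind pass, then a corrective upwind pass
    with the antidiffusive pseudo-velocity evaluated at each cell edge."""
    n = len(psi)
    psi1 = upwind(psi, C)
    # antidiffusive pseudo-velocity at the edge between cells i and i+1
    C2 = [(abs(C) - C * C) * (psi1[(i + 1) % n] - psi1[i])
          / (psi1[(i + 1) % n] + psi1[i] + eps) for i in range(n)]
    flux = [max(C2[i], 0.0) * psi1[i] + min(C2[i], 0.0) * psi1[(i + 1) % n]
            for i in range(n)]
    return [psi1[i] - (flux[i] - flux[i - 1]) for i in range(n)]
```

    Because both passes are written in flux form with periodic boundaries, the scheme conserves the sum of the field exactly, while the corrective pass reduces the diffusion of plain upwind.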

  8. libmpdata++ 0.1: a library of parallel MPDATA solvers for systems of generalised transport equations

    NASA Astrophysics Data System (ADS)

    Jaruga, A.; Arabas, S.; Jarecka, D.; Pawlowska, H.; Smolarkiewicz, P. K.; Waruszewski, M.

    2014-11-01

    This paper accompanies the first release of libmpdata++, a C++ library implementing the Multidimensional Positive-Definite Advection Transport Algorithm (MPDATA). The library offers basic numerical solvers for systems of generalised transport equations. The solvers are forward-in-time, conservative and non-linearly stable. The libmpdata++ library covers the basic second-order-accurate formulation of MPDATA, its third-order variant, the infinite-gauge option for variable-sign fields and a flux-corrected transport extension to guarantee non-oscillatory solutions. The library is equipped with a non-symmetric variational elliptic solver for implicit evaluation of pressure gradient terms. All solvers offer parallelisation through domain decomposition using shared-memory parallelisation. The paper describes the library programming interface, and serves as a user guide. Supported options are illustrated with benchmarks discussed in the MPDATA literature. Benchmark descriptions include code snippets as well as quantitative representations of simulation results. Examples of applications include: homogeneous transport in one, two and three dimensions in Cartesian and spherical domains; a shallow-water system compared with an analytical solution (originally derived for a 2-D case); and a buoyant convection problem in an incompressible Boussinesq fluid with interfacial instability. All the examples are implemented out of the library tree. Regardless of the differences in the problem dimensionality, right-hand-side terms, boundary conditions and parallelisation approach, all the examples use the same unmodified library, which is a key goal of libmpdata++ design. The design, based on the principle of separation of concerns, prioritises the user and developer productivity. The libmpdata++ library is implemented in C++, making use of the Blitz++ multi-dimensional array containers, and is released as free/libre and open-source software.

  9. Design and Development of a Technology Platform for DNA-Encoded Library Production and Affinity Selection.

    PubMed

    Castañón, Jesús; Román, José Pablo; Jessop, Theodore C; de Blas, Jesús; Haro, Rubén

    2018-06-01

    DNA-encoded libraries (DELs) have emerged as an efficient and cost-effective drug discovery tool for the exploration and screening of very large chemical space using small-molecule collections of unprecedented size. Herein, we report an integrated automation and informatics system designed to enhance the quality, efficiency, and throughput of the production and affinity selection of these libraries. The platform is governed by software developed according to a database-centric architecture to ensure data consistency, integrity, and availability. Through its versatile protocol management functionalities, this application captures the wide diversity of experimental processes involved with DEL technology, keeps track of working protocols in the database, and uses them to command robotic liquid handlers for the synthesis of libraries. This approach provides full traceability of building-blocks and DNA tags in each split-and-pool cycle. Affinity selection experiments and high-throughput sequencing reads are also captured in the database, and the results are automatically deconvoluted and visualized in customizable representations. Researchers can compare results of different experiments and use machine learning methods to discover patterns in data. As of this writing, the platform has been validated through the generation and affinity selection of various libraries, and it has become the cornerstone of the DEL production effort at Lilly.

  10. Reading, Trauma and Literary Caregiving 1914-1918: Helen Mary Gaskell and the War Library.

    PubMed

    Haslam, Sara

    2018-03-28

    This article is about the relationship between reading, trauma and responsive literary caregiving in Britain during the First World War. Its analysis of two little-known documents describing the history of the War Library, begun by Helen Mary Gaskell in 1914, exposes a gap in the scholarship of war-time reading; generates a new narrative of "how," "when," and "why" books went to war; and foregrounds gender in its analysis of the historiography. The Library of Congress's T. W. Koch discovered Gaskell's ground-breaking work in 1917 and reported its successes to the American Library Association. The British Times also covered Gaskell's library, yet researchers working on reading during the war have routinely neglected her distinct model and method, skewing the research base on war-time reading and its association with trauma and caregiving. In the article's second half, a literary case study of a popular war novel demonstrates the extent of the "bitter cry for books." The success of Gaskell's intervention is examined alongside H. G. Wells's representation of textual healing. Reading is shown to offer sick, traumatized and recovering combatants emotional and psychological caregiving in ways that she could not always have predicted and that are not visible in the literary/historical record.

  11. Right service, right place: optimising utilisation of a community nursing service to reduce planned re-presentations to the emergency department

    PubMed Central

    Lawton, Jessica Kirsten; Kinsman, Leigh; Dalton, Lisa; Walsh, Fay; Bryan, Helen; Williams, Sharon

    2017-01-01

    Background Congruent with rising international emergency department (ED) demand, strategies and services that reduce the burden on EDs and improve patient outcomes are necessary. Planned re-presentations of non-urgent patients at a regional Australian hospital exceeded 1200 visits during the 2013–2014 financial year. Planned re-presentations perpetuate demand and signify a lack of alternative services for non-urgent patients. The Community Nursing Enhanced Connections Service (CoNECS) evolved collaboratively between acute care and community services in 2014 to reduce planned ED re-presentations. Objective This study aimed to investigate the evolution and impact of a community nursing service designed to reduce planned re-presentations to a regional Australian ED, and to identify enablers of and barriers to the intervention's effectiveness. Methods A mixed-methods approach evaluated the impact of CoNECS. Data from hospital databases, including numbers of planned ED re-presentations by month, time of day, age, gender and reason, were used to calculate referral rates to CoNECS. These results informed two semistructured focus groups with ED and community nurses. The researchers used a theoretical lens, ‘diffusion of innovation’, to understand how this service could inform future interventions. Results Analyses showed that annual planned ED re-presentations decreased by 43% (527 presentations) after implementation. Three themes emerged from the focus groups: right service at the right time; nursing uncertainty and system disconnect; and medical disengagement. Conclusions CoNECS reduced overall planned ED re-presentations and was sustained longer than many complex service-level interventions. Factors supporting the service were endorsement from senior administration and strong leadership to drive responsive quality improvement strategies. This study identified a promising alternative service outside the ED, highlighting possibilities for other hospital emergency services aiming to reduce planned re-presentations. PMID:29450293

  12. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  13. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE PAGES

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...

    2017-03-08

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  14. A polygon soup representation for free viewpoint video

    NASA Astrophysics Data System (ADS)

    Colleu, T.; Pateux, S.; Morin, L.; Labit, C.

    2010-02-01

    This paper presents a polygon soup representation for multiview data. Starting from a sequence of multi-view video plus depth (MVD) data, the proposed representation takes into account, in a unified manner, different issues such as compactness, compression, and intermediate view synthesis. The representation is built in two steps. First, a set of 3D quads is extracted using a quadtree decomposition of the depth maps. Second, a selective elimination of the quads is performed in order to reduce inter-view redundancies and thus provide a compact representation. Moreover, the proposed methodology for extracting the representation allows ghosting artifacts to be reduced. Finally, an adapted compression technique is proposed that limits coding artifacts. The results presented on two real sequences show that the proposed representation provides a good trade-off between rendering quality and data compactness.
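
    The first step, a quadtree decomposition of a depth map, can be sketched as follows. This is an illustrative simplification under stated assumptions: square blocks are split until the depth range within a block falls below a tolerance, and each leaf is approximated by its mean depth; the actual method extracts 3D quads and additionally eliminates inter-view redundancies.

```python
def quadtree_quads(depth, x, y, size, tol):
    """Split the size-by-size block of `depth` at (x, y) into quads whose
    internal depth range is <= tol; each leaf is (x, y, size, mean_depth)."""
    block = [depth[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size == 1 or max(block) - min(block) <= tol:
        # block is near-planar in depth: emit one quad
        return [(x, y, size, sum(block) / len(block))]
    h = size // 2
    leaves = []
    # recurse into the four sub-blocks
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree_quads(depth, x + dx, y + dy, h, tol)
    return leaves
```

    On a 4×4 depth map that is flat except for one raised corner block, the decomposition yields four 2×2 quads rather than sixteen pixels, illustrating the compactness the representation targets.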

  15. W-tree indexing for fast visual word generation.

    PubMed

    Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao

    2013-03-01

    The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is the visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree and a K-means tree), in order to redirect the searching path to be close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the Fast Library for Approximate Nearest Neighbors (FLANN) and the random KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
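
    The core idea, biasing the search toward words that frequently co-occur with words already assigned in the same image, can be sketched with toy data. The table construction and the normalized weighting below are illustrative simplifications of the paper's scheme, not its exact formulation.

```python
from collections import Counter, defaultdict

# Toy corpus: each image is the list of visual-word ids assigned to it.
images = [[1, 2, 3], [1, 2], [2, 3, 4]]

# Co-occurrence table: cooc[w][v] counts how often word v appears in the
# same image as word w.
cooc = defaultdict(Counter)
for words in images:
    for w in words:
        for v in words:
            if v != w:
                cooc[w][v] += 1

def weight(candidate, context):
    """Probabilistic weight for assigning `candidate` as the next visual
    word, given words already assigned in the same image (`context`)."""
    score = sum(cooc[c][candidate] for c in context)
    total = sum(sum(cooc[c].values()) for c in context)
    return score / total if total else 0.0
```

    Such weights can then be attached to the nodes of a KD-tree or K-means tree so that the priority queue visits likely branches first, cutting the number of backtrackings needed.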

  16. Discrimination of Dynamic Tactile Contact by Temporally Precise Event Sensing in Spiking Neuromorphic Networks

    PubMed Central

    Lee, Wang Wei; Kukreja, Sunil L.; Thakor, Nitish V.

    2017-01-01

    This paper presents a neuromorphic tactile encoding methodology that utilizes a temporally precise event-based representation of sensory signals. We introduce a novel concept where touch signals are characterized as patterns of millisecond-precise binary events to denote pressure changes. This approach is amenable to a sparse signal representation and enables the extraction of relevant features from thousands of sensing elements with sub-millisecond temporal precision. We also propose measures adopted from computational neuroscience to study the information content within the spiking representations of artificial tactile signals. Implemented on a state-of-the-art 4096-element tactile sensor array with 5.2 kHz sampling frequency, we demonstrate the classification of transient impact events while utilizing 20 times less communication bandwidth compared to frame-based representations. Spiking sensor responses to a large library of contact conditions were also synthesized using finite element simulations, illustrating an 8-fold improvement in information content and a 4-fold reduction in classification latency when millisecond-precise temporal structures are available. Our research represents a significant advance, demonstrating that a neuromorphic spatiotemporal representation of touch is well suited to rapid identification of critical contact events, making it suitable for dynamic tactile sensing in robotic and prosthetic applications. PMID:28197065
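
    The encoding idea, binary events that mark pressure changes rather than sampled frames, resembles send-on-delta (level-crossing) sampling and can be sketched as follows. The function name and threshold scheme are illustrative assumptions, not the authors' exact encoder.

```python
def events_from_pressure(samples, dt_ms, threshold):
    """Send-on-delta sketch: emit (t_ms, +1) or (t_ms, -1) events whenever
    the pressure has moved by `threshold` since the last emitted event."""
    events, ref = [], samples[0]
    for k, p in enumerate(samples[1:], start=1):
        while p - ref >= threshold:   # rising crossings
            ref += threshold
            events.append((k * dt_ms, +1))
        while ref - p >= threshold:   # falling crossings
            ref -= threshold
            events.append((k * dt_ms, -1))
    return events
```

    A slowly varying signal produces almost no events, which is where the bandwidth savings over frame-based readout of thousands of taxels comes from.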

  17. Design Recovery for Software Library Population

    DTIC Science & Technology

    1992-12-01

    increase understandability, efficiency, and maintainability of the software and the design. A good representation choice will also aid in...required for a reengineering project. It details the analysis and planning phase and gives good criteria for determining the need for a reengineering...because it deals with all of these issues. With his complete description of the analysis and planning phase, Byrne has a good foundation for

  18. Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations

    ERIC Educational Resources Information Center

    Piquemal, J.-Y.; Losno, R.; Ancian, B.

    2009-01-01

    In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representation. This method is not restricted to…
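
    The one-step idea, multiplying the characters of a reducible representation by the inverse of the character-table matrix, can be reproduced outside a spreadsheet. Below is a minimal sketch using the C2v point group; the `invert` and `decompose` helpers are illustrative, not the authors' spreadsheet formulas.

```python
from fractions import Fraction

def invert(matrix):
    """Gauss-Jordan matrix inversion with exact rational arithmetic."""
    n = len(matrix)
    a = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

# C2v character table: rows are the irreps A1, A2, B1, B2;
# columns are the operations E, C2, sigma_v, sigma_v'.
M = [[1,  1,  1,  1],
     [1,  1, -1, -1],
     [1, -1,  1, -1],
     [1, -1, -1,  1]]

def decompose(chi, table):
    """Solve chi(g) = sum_i n_i * chi_i(g) for the multiplicities n_i
    in one step, as the row-vector product chi . table^(-1)."""
    inv = invert(table)
    return [sum(Fraction(c) * inv[j][i] for j, c in enumerate(chi))
            for i in range(len(table))]
```

    For example, a reducible representation with characters (3, 1, 3, 1) decomposes as 2A1 + B1, i.e. multiplicities (2, 0, 1, 0).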

  19. Computer Programs for Library Operations; Results of a Survey Conducted Between Fall 1971 and Spring 1972.

    ERIC Educational Resources Information Center

    Liberman, Eva; And Others

    Many library operations involving large data banks lend themselves readily to computer operation. In setting up library computer programs, in changing or expanding programs, cost in programming and time delays could be substantially reduced if the programmers had access to library computer programs being used by other libraries, providing similar…

  20. Creating a Lean, Green, Library Machine: Easy Eco-Friendly Habits for Your Library

    ERIC Educational Resources Information Center

    Blaine, Amy S.

    2010-01-01

    For some library media specialists, implementing the three Rs of recycling, reducing, and reusing comes easily; they've been environmentally conscious well before the concept of going green made its way into the vernacular. Yet for other library media specialists, the thought of greening their library, let alone the entire school, can seem…

  1. South Carolina State Library Annual Report 1991-1992.

    ERIC Educational Resources Information Center

    South Carolina State Library, Columbia.

    In fiscal year 1992, state funding to the South Carolina State Library was reduced on four occasions, but the library staff performed at high levels. Despite a 38 percent reduction in the library materials budget, the State Library had its best year ever in terms of providing information, with the number of items loaned continuing to increase. In…

  2. Effects of the antimicrobial sulfamethoxazole on groundwater bacterial enrichment

    USGS Publications Warehouse

    Underwood, Jennifer C.; Harvey, Ronald W.; Metge, David W.; Repert, Deborah A.; Baumgartner, Laura K.; Smith, Richard L.; Roane, Timberly M.; Barber, Larry B.

    2011-01-01

    The effects of “trace” (environmentally relevant) concentrations of the antimicrobial agent sulfamethoxazole (SMX) on the growth, nitrate reduction activity, and bacterial composition of an enrichment culture prepared with groundwater from a pristine zone of a sandy drinking-water aquifer on Cape Cod, MA, were assessed by laboratory incubations. When the enrichments were grown under heterotrophic denitrifying conditions and exposed to SMX, noticeable differences from the control (no SMX) were observed. Exposure to SMX in concentrations as low as 0.005 μM delayed the initiation of cell growth by up to 1 day and decreased nitrate reduction potential (total amount of nitrate reduced after 19 days) by 47% (p = 0.02). Exposure to 1 μM SMX, a concentration below those prescribed for clinical applications but higher than concentrations typically detected in aqueous environments, resulted in additional inhibitions: reduced growth rates (p = 5 × 10⁻⁶), lower nitrate reduction rate potentials (p = 0.01), and decreased overall representation of 16S rRNA gene sequences belonging to the genus Pseudomonas. The reduced abundance of Pseudomonas sequences in the libraries was replaced by sequences representing the genus Variovorax. Results of these growth and nitrate reduction experiments collectively suggest that subtherapeutic concentrations of SMX altered the composition of the enriched nitrate-reducing microcosms and inhibited nitrate reduction capabilities.

  3. SPV: a JavaScript Signaling Pathway Visualizer.

    PubMed

    Calderone, Alberto; Cesareni, Gianni

    2018-03-24

    The visualization of molecular interactions annotated in web resources is useful for offering users such information in a clear, intuitive layout. These interactions are frequently represented as binary interactions laid out in free space, where different entities, cellular compartments and interaction types are hardly distinguishable. SPV (Signaling Pathway Visualizer) is a free, open-source JavaScript library which offers a series of pre-defined elements, compartments and interaction types meant to facilitate the representation of signaling pathways consisting of causal interactions, without neglecting simple protein-protein interaction networks. SPV is freely available under the Apache version 2 license. Source code: https://github.com/Sinnefa/SPV_Signaling_Pathway_Visualizer_v1.0. Language: JavaScript; web technology: Scalable Vector Graphics; libraries: D3.js. Contact: sinnefa@gmail.com.

  4. Identifying Understudied Nuclear Reactions by Text-mining the EXFOR Experimental Nuclear Reaction Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirdt, J.A.; Brown, D.A., E-mail: dbrown@bnl.gov

    The EXFOR library contains the largest collection of experimental nuclear reaction data available as well as the data's bibliographic information and experimental details. We text-mined the REACTION and MONITOR fields of the ENTRYs in the EXFOR library in order to identify understudied reactions and quantities. Using the results of the text-mining, we created an undirected graph from the EXFOR datasets with each graph node representing a single reaction and quantity and graph links representing the various types of connections between these reactions and quantities. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. We use various graph theoretical tools to identify important yet understudied reactions and quantities in EXFOR. Although we identified a few cross sections relevant for shielding applications and isotope production, mostly we identified charged particle fluence monitor cross sections. As a side effect of this work, we learn that our abstract graph is typical of other real-world graphs.

  5. Identifying Understudied Nuclear Reactions by Text-mining the EXFOR Experimental Nuclear Reaction Library

    NASA Astrophysics Data System (ADS)

    Hirdt, J. A.; Brown, D. A.

    2016-01-01

    The EXFOR library contains the largest collection of experimental nuclear reaction data available as well as the data's bibliographic information and experimental details. We text-mined the REACTION and MONITOR fields of the ENTRYs in the EXFOR library in order to identify understudied reactions and quantities. Using the results of the text-mining, we created an undirected graph from the EXFOR datasets with each graph node representing a single reaction and quantity and graph links representing the various types of connections between these reactions and quantities. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. We use various graph theoretical tools to identify important yet understudied reactions and quantities in EXFOR. Although we identified a few cross sections relevant for shielding applications and isotope production, mostly we identified charged particle fluence monitor cross sections. As a side effect of this work, we learn that our abstract graph is typical of other real-world graphs.
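
    The graph construction described above can be illustrated with toy data: nodes are reactions/quantities, and a link is added whenever one dataset uses another reaction as a MONITOR. The records, field names, and the "understudied" criterion below are illustrative assumptions, not the authors' actual EXFOR parsing or graph metrics.

```python
from collections import defaultdict

# Toy EXFOR-like records: the measured reaction plus any MONITOR
# reactions used for normalization (reaction strings are made up).
entries = [
    {"reaction": "Fe56(n,p)", "monitors": ["Al27(n,a)"]},
    {"reaction": "Ni58(n,p)", "monitors": ["Al27(n,a)"]},
    {"reaction": "Au197(n,g)", "monitors": []},
]

adjacency = defaultdict(set)   # undirected graph over reactions/quantities
measured = defaultdict(int)    # how often each node is itself measured
for entry in entries:
    measured[entry["reaction"]] += 1
    for monitor in entry["monitors"]:
        adjacency[entry["reaction"]].add(monitor)
        adjacency[monitor].add(entry["reaction"])

# Nodes that many datasets depend on (high degree) but that are rarely
# measured directly are candidates for "important yet understudied".
understudied = sorted(n for n in adjacency
                      if len(adjacency[n]) >= 2 and measured[n] <= 1)
```

    In this toy graph the Al27(n,a) monitor cross section is flagged, mirroring the paper's finding that charged-particle fluence monitor cross sections dominate the understudied set.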

  6. RHydro - Hydrological models and tools to represent and analyze hydrological data in R

    NASA Astrophysics Data System (ADS)

    Reusser, Dominik; Buytaert, Wouter

    2010-05-01

    In hydrology, basic equations and procedures keep being implemented from scratch by scientists, with the potential for errors and inefficiency. The use of libraries can overcome these problems. Other scientific disciplines such as mathematics and physics have benefited significantly from such an approach, with freely available implementations of many routines. As an example, hydrological libraries could contain: major representations of hydrological processes such as infiltration, sub-surface runoff, and routing algorithms; scaling functions, for instance to combine remote sensing precipitation fields with rain gauge data; data consistency checks; and performance measures. Here we present a beginning for such a library, implemented in the high-level data programming language R. Currently, TOPMODEL, data import routines for WaSiM-ETH, as well as basic visualization and evaluation tools are implemented. The design is such that a definition of import scripts for additional models is sufficient to gain access to the full set of evaluation and visualization tools.
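
    As an illustration of the kind of performance measure such a library would collect, here is a minimal sketch (in Python rather than R, purely for illustration) of the standard Nash-Sutcliffe efficiency:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [1.0, 2.0, 3.0, 4.0]
perfect = nash_sutcliffe(obs, obs)            # 1.0: simulation matches exactly
baseline = nash_sutcliffe(obs, [2.5] * 4)     # 0.0: simulation is just the mean
```

    Implementing such measures once, in a shared library, is exactly the duplication-avoidance argument the abstract makes.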

  7. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  8. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
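
    The abstract describes sparse maps as a generalization of compressed sparse row (CSR). A minimal pure-Python sketch of CSR itself, viewed as a map from each row index to its set of nonzero column indices (toy data, not the paper's tensor machinery):

```python
# Dense matrix with mostly zeros.
dense = [
    [0, 0, 3, 0],
    [1, 0, 0, 2],
    [0, 0, 0, 0],
]

# Build the three CSR arrays: nonzero values, their column indices,
# and row pointers delimiting each row's slice of the first two arrays.
data, indices, indptr = [], [], [0]
for row in dense:
    for col, value in enumerate(row):
        if value != 0:
            data.append(value)
            indices.append(col)
    indptr.append(len(data))

# Row i occupies the slice indptr[i]:indptr[i+1] -- in sparse-map language,
# a map from the row index to its nonzero column indices and values.
def row_nonzeros(i):
    return dict(zip(indices[indptr[i]:indptr[i + 1]],
                    data[indptr[i]:indptr[i + 1]]))
```

    Chaining such maps across several index sets (row -> columns -> further indices) is the tensor generalization the paper builds on.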

  9. Cross-Modal Retrieval With CNN Visual Features: A New Baseline.

    PubMed

    Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng

    2017-02-01

    Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from the CNN model, which is pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed by using the open source Caffe CNN library for each target data set. In addition, we propose a deep semantic matching method to address the cross-modal retrieval problem for samples annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets demonstrate the superiority of CNN visual features for cross-modal retrieval.
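
    A common baseline for retrieval in a shared feature space, of the kind such work evaluates, is nearest-neighbor search by cosine similarity. A minimal sketch with hypothetical hand-made vectors (not actual CNN features; names are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical image features and a text-query embedding, assumed to be
# already projected into a common semantic space.
image_feats = {"img_cat": [0.9, 0.1, 0.0], "img_car": [0.0, 0.2, 0.9]}
text_query = [1.0, 0.0, 0.1]  # embedding of the query "cat"

# Cross-modal retrieval reduces to ranking images by similarity to the query.
best = max(image_feats, key=lambda k: cosine(image_feats[k], text_query))
```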

  10. Lax representations for matrix short pulse equations

    NASA Astrophysics Data System (ADS)

    Popowicz, Z.

    2017-10-01

    The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of the four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalization of the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to a four-parameter family of four-component SPEs. The bi-Hamiltonian structure of this generalization, for special values of the parameters, is defined. This four-component SPE reduces in special cases to the new two-component SPE.

  11. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710

  12. Data management routines for reproducible research using the G-Node Python Client library

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J.; Garbers, Christian; Rautenberg, Philipp L.; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow. PMID:24634654

  13. GOGrapher: A Python library for GO graph representation and analysis.

    PubMed

    Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua

    2009-07-07

    The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.
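
    The parent-child DAG structure the abstract describes can be sketched without any library. The term names below are hypothetical stand-ins for real GO identifiers, and the traversal is a plain transitive-closure walk, not GOGrapher's actual API:

```python
# Toy GO-like DAG: child term -> set of parent terms (is_a edges).
parents = {
    "GO:apoptosis": {"GO:cell_death"},
    "GO:cell_death": {"GO:biological_process"},
    "GO:biological_process": set(),
}

def ancestors(term):
    """All terms reachable by following parent edges upward
    (the transitive closure of the is_a relationship)."""
    seen, stack = set(), list(parents.get(term, ()))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, ()))
    return seen
```

    Ancestor queries like this underlie most GO analyses, e.g. propagating a protein's annotations up to more general terms.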

  14. Data management routines for reproducible research using the G-Node Python Client library.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J; Garbers, Christian; Rautenberg, Philipp L; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow.

  15. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran, using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.

  16. Reduced graphs and their applications in chemoinformatics.

    PubMed

    Birchall, Kristian; Gillet, Valerie J

    2011-01-01

    Reduced graphs provide summary representations of chemical structures by collapsing groups of connected atoms into single nodes while preserving the topology of the original structures. This chapter reviews the extensive work that has been carried out on reduced graphs at The University of Sheffield and includes discussion of their application to the representation and search of Markush structures in patents, the varied approaches that have been implemented for similarity searching, their use in cluster representation, the different ways in which they have been applied to extract structure-activity relationships and their use in encoding bioisosteres.

  17. Academic Libraries: "Social" or "Communal?" The Nature and Future of Academic Libraries

    ERIC Educational Resources Information Center

    Gayton, Jeffrey T.

    2008-01-01

    The apparent death of academic libraries, as measured by declining circulation of print materials, reduced use of reference services, and falling gate counts, has led to calls for a more "social" approach to academic libraries: installing cafes, expanding group study spaces, and developing "information commons." This study compares these social…

  18. An approach to automated particle picking from electron micrographs based on reduced representation templates.

    PubMed

    Volkmann, Niels

    2004-01-01

    Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
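
    Step two of the procedure is real-space pattern matching. A minimal sketch of the underlying operation, normalized cross-correlation of a template against every position of an image, follows (toy data; real micrographs and reduced-representation templates are far larger, and the paper's algorithm is more elaborate than this brute-force scan):

```python
import math

def normalized_cross_correlation(patch, template):
    """Pearson correlation between a patch and an equal-sized template;
    1.0 indicates a perfect match up to brightness and contrast."""
    p = [x for row in patch for x in row]
    t = [x for row in template for x in row]
    mp, mt = sum(p) / len(p), sum(t) / len(t)
    num = sum((a - mp) * (b - mt) for a, b in zip(p, t))
    den = math.sqrt(sum((a - mp) ** 2 for a in p) *
                    sum((b - mt) ** 2 for b in t))
    return num / den if den else 0.0

def score_map(image, template):
    """Slide the template over the image, producing a correlation score map;
    particle candidates are then peaks in this map."""
    th, tw = len(template), len(template[0])
    h, w = len(image), len(image[0])
    return [[normalized_cross_correlation(
                 [row[x:x + tw] for row in image[y:y + th]], template)
             for x in range(w - tw + 1)] for y in range(h - th + 1)]

image = [[1, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
template = [[1, 0],
            [0, 1]]
scores = score_map(image, template)  # peak at the top-left placement
```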

  19. Defiant: (DMRs: easy, fast, identification and ANnoTation) identifies differentially Methylated regions from iron-deficient rat hippocampus.

    PubMed

    Condon, David E; Tran, Phu V; Lien, Yu-Chin; Schug, Jonathan; Georgieff, Michael K; Simmons, Rebecca A; Won, Kyoung-Jae

    2018-02-05

    Identification of differentially methylated regions (DMRs) is the initial step towards the study of DNA methylation-mediated gene regulation. Previous approaches to call DMRs suffer from false prediction, use extreme resources, and/or require library installation and input conversion. We developed a new approach called Defiant to identify DMRs. Employing Weighted Welch Expansion (WWE), Defiant showed superior performance to other predictors in a series of benchmarking tests on artificial and real data. Defiant was subsequently used to investigate DNA methylation changes in iron-deficient rat hippocampus. Defiant identified DMRs close to genes associated with neuronal development and plasticity, which were not identified by its competitor. Importantly, Defiant runs between 5 and 479 times faster than currently available software packages. Also, Defiant accepts 10 different input formats widely used for DNA methylation data. Defiant effectively identifies DMRs for whole-genome bisulfite sequencing (WGBS), reduced-representation bisulfite sequencing (RRBS), Tet-assisted bisulfite sequencing (TAB-seq), and HpaII tiny fragment enrichment by ligation-mediated PCR-tag (HELP) assays.
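
    Defiant's Weighted Welch Expansion builds on Welch's unequal-variance t-test. The plain test statistic, which is what compares per-site methylation levels between two groups, looks like this (a generic sketch, not the WWE itself):

```python
import math

def welch_t(a, b):
    """Welch's unequal-variance t statistic for two samples, e.g. methylation
    fractions at one CpG site in treated vs. control animals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

    A DMR caller then aggregates such per-site statistics over neighboring CpGs into candidate regions; the weighting scheme is where Defiant's method differs from the plain test.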

  20. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  1. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE PAGES

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; ...

    2017-03-06

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
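
    The task-dependency-graph idea can be illustrated with Python's standard-library topological sorter. The task names below are hypothetical, not Bamboo's actual internal representation:

```python
from graphlib import TopologicalSorter

# Hypothetical task dependency graph for one MPI rank: each task maps to the
# set of tasks it depends on. Compute tasks that touch only local data have
# no dependence on communication and may overlap with it.
deps = {
    "pack_halo": set(),
    "send_recv": {"pack_halo"},
    "inner_compute": set(),                          # overlaps with comms
    "boundary_compute": {"send_recv", "inner_compute"},
}

# Any topological order respects the partial order; a data-driven runtime is
# free to schedule inner_compute while send_recv is still in flight.
order = list(TopologicalSorter(deps).static_order())
```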

  2. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular meshes are now widely used for modeling three-dimensional objects. Since these models have very high resolution and their geometry is often very dense, it is necessary to remesh the object to reduce its complexity while improving mesh quality (connectivity regularity). In this paper, we review the main state-of-the-art methods for semi-regular remeshing, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method, based on a trust-region spherical geometry image, to obtain a good 3D mesh compression scheme used to deform 3D meshes with a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression with a minimal set of features for a good 3D deformation scheme.

  3. MIG-seq: an effective PCR-based method for genome-wide single-nucleotide polymorphism genotyping using the next-generation sequencing platform

    PubMed Central

    Suyama, Yoshihisa; Matsuki, Yu

    2015-01-01

    Restriction-enzyme (RE)-based next-generation sequencing methods have revolutionized marker-assisted genetic studies; however, the use of REs has limited their widespread adoption, especially in field samples with low-quality DNA and/or small quantities of DNA. Here, we developed a PCR-based procedure to construct reduced representation libraries without RE digestion steps, enabling de novo single-nucleotide polymorphism discovery and genotyping using next-generation sequencing. Using multiplexed inter-simple sequence repeat (ISSR) primers, thousands of genome-wide regions were amplified effectively from a wide variety of genomes, without prior genetic information. We demonstrated: 1) Mendelian gametic segregation of the discovered variants; 2) reproducibility of genotyping by checking its applicability for individual identification; and 3) applicability to a wide variety of species by performing standard population genetic analyses. This approach, called multiplexed ISSR genotyping by sequencing, should be applicable to many marker-assisted genetic studies with a wide range of DNA qualities and quantities. PMID:26593239

  4. Taking It to the Stacks: An Inventory Project at the University of Mississippi Libraries

    ERIC Educational Resources Information Center

    Greenwood, Judy T.

    2013-01-01

    This article examines multiple inventory methods and findings from the inventory processes at the University of Mississippi Libraries. In an attempt to reduce user frustration from not being able to locate materials, the University of Mississippi Libraries conducted an inventory process beginning with a pilot inventory of a branch library and a…

  5. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    PubMed

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner in which distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the data integration Sequence Retrieval System (SRS). The library has been written using SOAP definitions and permits programmatic communication with the SRS through web services. Interactions occur by invoking the methods described in the WSDL and exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of the 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. Including the described functions in the source of scripts written in PHP enables them to act as web service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies the library's use to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and SRS.php is proposed to enable the data acquisition for the further warehousing tasks related to its setup and maintenance.

  6. Droplet Digital™ PCR Next-Generation Sequencing Library QC Assay.

    PubMed

    Heredia, Nicholas J

    2018-01-01

    Digital PCR is a valuable tool to quantify next-generation sequencing (NGS) libraries precisely and accurately. Accurate quantification of NGS libraries enables accurate loading of the libraries onto the sequencer and thus improves sequencing performance by reducing under- and overloading errors. Accurate quantification also benefits users by enabling uniform loading of indexed/barcoded libraries, which in turn greatly improves sequencing uniformity of the indexed/barcoded samples. The advantages gained by employing the Droplet Digital PCR (ddPCR™) library QC assay include precise and accurate quantification in addition to size quality assessment, enabling users to QC their sequencing libraries with confidence.
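
    Digital PCR quantification rests on Poisson statistics: a droplet reads positive if it received at least one template copy, so the positive fraction underestimates the mean copy number and must be corrected. A minimal sketch of the standard correction (illustrative numbers, not assay-specific values):

```python
import math

def copies_per_partition(positive, total):
    """Poisson-corrected mean target copies per droplet: a droplet is
    positive when it holds >= 1 copy, so P(positive) = 1 - exp(-lam),
    giving lam = -ln(1 - positive/total)."""
    return -math.log(1.0 - positive / total)

# Hypothetical run: 20,000 droplets, 4,000 of them positive.
lam = copies_per_partition(4000, 20000)
total_copies = lam * 20000  # estimated template copies in the whole reaction
```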

  7. Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2) , su(3) , and g(2)

    NASA Astrophysics Data System (ADS)

    Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.

    2016-08-01

    We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2) , su(3) , and g(2) . This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.

  8. Successfully Automating Library Consortia: Procedures To Facilitate Governance, Management and Cooperation. DataResearch Automation Guide Series, Number Three.

    ERIC Educational Resources Information Center

    Data Research Associates, Inc., St. Louis, MO.

    Sharing a local automated library system will generally reduce the costs of automation for each participating library and will facilitate the sharing of resources. To set up a consortium, libraries must first identify and agree on governance issues and methods for dealing with these issues. Issues range from ownership, management, and location of…

  9. Shared ownership: what's the future?

    PubMed

    Roth, Karen L

    2013-01-01

    The status of library consortia has evolved over time in terms of their composition and alternative negotiating models. New purchasing models may allow improved library involvement in the acquisitions process and improved methods for meeting users' future needs. Ever-increasing costs of library resources and the need to reduce expenses make it necessary to continue the exploration of library consortia for group purchases.

  10. Consumer language, patient language, and thesauri: a review of the literature

    PubMed Central

    Smith, Catherine A

    2011-01-01

    Objective: Online social networking sites are web services in which users create public or semipublic profiles and connect to build online communities, finding like-minded people through self-labeled personal attributes including ethnicity, leisure interests, political beliefs, and, increasingly, health status. Thirty-nine percent of patients in the United States identified themselves as users of social networks in a recent survey. “Tags,” user-generated descriptors functioning as labels for user-generated content, are increasingly important to social networking, and the language used by patients is thus becoming important for knowledge representation in these systems. However, patient language poses considerable challenges for health communication and networking. How have information systems traditionally incorporated these languages in their controlled vocabularies and thesauri? How do system builders know what consumers and patients say? Methods: This comprehensive review of the literature of health care (PubMed MEDLINE, CINAHL), library science, and information science (Library and Information Science and Technology Abstracts, Library and Information Science Abstracts, and Library Literature) examines the research domains in which consumer and patient language has been explored. Results: Consumer contributions to controlled vocabulary appear to be seriously under-researched inside and outside of health care. Conclusion: The author reflects on the implications of these findings for online social networks devoted to patients and the patient experience. PMID:21464851

  11. Consumer language, patient language, and thesauri: a review of the literature.

    PubMed

    Smith, Catherine A

    2011-04-01

    Online social networking sites are web services in which users create public or semipublic profiles and connect to build online communities, finding like-minded people through self-labeled personal attributes including ethnicity, leisure interests, political beliefs, and, increasingly, health status. Thirty-nine percent of patients in the United States identified themselves as users of social networks in a recent survey. "Tags," user-generated descriptors functioning as labels for user-generated content, are increasingly important to social networking, and the language used by patients is thus becoming important for knowledge representation in these systems. However, patient language poses considerable challenges for health communication and networking. How have information systems traditionally incorporated these languages in their controlled vocabularies and thesauri? How do system builders know what consumers and patients say? This comprehensive review of the literature of health care (PubMed MEDLINE, CINAHL), library science, and information science (Library and Information Science and Technology Abstracts, Library and Information Science Abstracts, and Library Literature) examines the research domains in which consumer and patient language has been explored. Consumer contributions to controlled vocabulary appear to be seriously under-researched inside and outside of health care. The author reflects on the implications of these findings for online social networks devoted to patients and the patient experience.

  12. Optimizing exosomal RNA isolation for RNA-Seq analyses of archival sera specimens.

    PubMed

    Prendergast, Emily N; de Souza Fonseca, Marcos Abraão; Dezem, Felipe Segato; Lester, Jenny; Karlan, Beth Y; Noushmehr, Houtan; Lin, Xianzhi; Lawrenson, Kate

    2018-01-01

    Exosomes are endosome-derived membrane vesicles that contain proteins, lipids, and nucleic acids. The exosomal transcriptome mediates intercellular communication, and represents an understudied reservoir of novel biomarkers for human diseases. Next-generation sequencing enables complex quantitative characterization of exosomal RNAs from diverse sources. However, detailed protocols describing exosome purification for preparation of exosomal RNA-sequencing (RNA-Seq) libraries are lacking. Here we compared methods for isolation of exosomes and extraction of exosomal RNA from human cell-free serum, as well as strategies for attaining equal representation of samples within pooled RNA-Seq libraries. We compared commercial precipitation with ultracentrifugation for exosome purification and confirmed the presence of exosomes via both transmission electron microscopy and immunoblotting. Exosomal RNA extraction was compared using four different RNA purification methods. We determined the minimal starting volume of serum required for exosome preparation and showed that high quality exosomal RNA can be isolated from sera stored for over a decade. Finally, RNA-Seq libraries were successfully prepared with exosomal RNAs extracted from human cell-free serum, cataloguing both coding and non-coding exosomal transcripts. This method provides researchers with strategic options to prepare RNA-Seq libraries and compare RNA-Seq data quantitatively from minimal volumes of fresh and archival human cell-free serum for disease biomarker discovery.

  13. Evaluation of reduced point charge models of proteins through Molecular Dynamics simulations: application to the Vps27 UIM-1-Ubiquitin complex.

    PubMed

    Leherte, Laurence; Vercauteren, Daniel P

    2014-02-01

    Reduced point charge models of amino acids are designed, (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. Corresponding charge values are fitted versus all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W), under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with a null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; particular attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models makes it possible to probe local potential hyper-surface minima that are similar to the all-atom ones but are characterized by lower energy barriers. This enables generating various conformations of the protein complex more rapidly than the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Construction of BAC Libraries from Flow-Sorted Chromosomes.

    PubMed

    Šafář, Jan; Šimková, Hana; Doležel, Jaroslav

    2016-01-01

    Cloned DNA libraries in bacterial artificial chromosomes (BACs) are the most widely used form of large-insert DNA libraries. BAC libraries are typically represented by ordered clones derived from genomic DNA of a particular organism. In the case of large eukaryotic genomes, whole-genome libraries consist of a hundred thousand to a million clones, which makes their handling and screening a daunting task. The labor and cost of working with whole-genome libraries can be greatly reduced by constructing a library derived from a smaller part of the genome. Here we describe construction of BAC libraries from mitotic chromosomes purified by flow cytometric sorting. Chromosome-specific BAC libraries facilitate positional gene cloning, physical mapping, and sequencing in complex plant genomes.

  15. HMMER web server: 2018 update.

    PubMed

    Potter, Simon C; Luciani, Aurélien; Eddy, Sean R; Park, Youngmi; Lopez, Rodrigo; Finn, Robert D

    2018-06-14

    The HMMER web server (http://www.ebi.ac.uk/Tools/hmmer) is a free-to-use service that provides fast searches against widely used sequence databases and profile hidden Markov model (HMM) libraries using the HMMER software suite (http://hmmer.org). The results of a sequence search may be summarized in a number of ways, allowing users to view and filter the significant hits by domain architecture or taxonomy. For large-scale usage, we provide an application programming interface (API), which has been expanded in scope such that all result presentations are available via both HTML and the API. Furthermore, we have refactored our JavaScript visualization library to provide standalone components for different result representations. These consume the aforementioned API and can be integrated into third-party websites. The range of databases that can be searched against has been expanded, adding four sequence datasets (12 in total) and one profile HMM library (6 in total). To help users explore the biological context of their results, and to discover new data resources, search results are now supplemented with cross references to other EMBL-EBI databases.

  16. Microcomputers in Library Automation.

    ERIC Educational Resources Information Center

    Simpson, George A.

    As librarians cope with reduced budgets, decreased staff, and increased demands for services, microcomputers will take a significant role in library automation by providing low-cost systems, solving specific library problems, and performing in distributed systems. This report presents an introduction to the technology of this low-cost, miniature…

  17. Feasibility of retrofitting a university library with active workstations to reduce sedentary behavior.

    PubMed

    Maeda, Hotaka; Quartiroli, Alessandro; Vos, Paul W; Carr, Lucas J; Mahar, Matthew T

    2014-05-01

    Libraries are an inherently sedentary environment but an understudied setting for sedentary behavior interventions. The purpose of this study was to investigate the feasibility of incorporating portable pedal machines in a university library to reduce sedentary behavior. The 11-week intervention targeted students at a university library. Thirteen portable pedal machines were placed in the library. Four forms of prompts (e-mail, library website, advertisement monitors, and poster) encouraging pedal machine use were employed during the first 4 weeks. Pedal machine use was measured via automatic timers on each machine and momentary time sampling. Daily library visits were measured using a gate counter. Individualized data were measured by survey. Data were collected in fall 2012 and analyzed in 2013. Mean (SD) cumulative pedal time per day was 95.5 (66.1) minutes. One or more pedal machines were observed being used 15% of the time (N=589). Pedal machines were used at least once by 7% of students (n=527). Controlled for gate count, no linear change of pedal machine use across days was found (b=-0.1 minutes, p=0.75) and the presence of the prompts did not change daily pedal time (p=0.63). Seven of eight items that assessed attitudes toward the intervention supported intervention feasibility (p<0.05). The unique non-individualized approach of retrofitting a library with pedal machines to reduce sedentary behavior seems feasible, but improvement of its effectiveness is needed. This study could inform future studies aimed at reshaping traditionally sedentary settings to improve public health. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  18. Vertex Space Analysis for Model-Based Target Recognition.

    DTIC Science & Technology

    1996-08-01

    performed in our unique invariant representation, Vertex Space, that reduces both the dimensionality and size of the required search space. Vertex Space ... mapping results in a reduced representation that serves as a characteristic target signature which is invariant to four of the six viewing geometry

  19. Reconstructing householder vectors from Tall-Skinny QR

    DOE PAGES

    Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...

    2015-08-05

    The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
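    As background for the representation issue this abstract raises, the basic two-level TSQR factorization can be sketched in a few lines of NumPy. This is a generic illustration of TSQR itself, not the authors' parallel implementation; the Householder-reconstruction step is omitted and the function name `tsqr` is ours.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """Two-level Tall-Skinny QR.

    Factor each row block independently (these QRs can run in parallel),
    then combine the stacked R factors with one more small QR.
    Returns Q (m x n, orthonormal columns) and R (n x n, upper triangular).
    Assumes every row block has at least n rows.
    """
    m, n = A.shape
    row_blocks = np.array_split(A, n_blocks, axis=0)
    qs, rs = [], []
    for block in row_blocks:             # level 1: independent local QRs
        q, r = np.linalg.qr(block)
        qs.append(q)
        rs.append(r)
    Q2, R = np.linalg.qr(np.vstack(rs))  # level 2: combine the R factors
    # Distribute the rows of Q2 back through the local Q factors:
    # since vstack(rs) = Q2 @ R, each block satisfies q_i @ (Q2_i @ R) = A_i.
    Q = np.vstack([q @ q2 for q, q2 in
                   zip(qs, np.array_split(Q2, n_blocks, axis=0))])
    return Q, R
```

    The orthogonal factor here is stored explicitly; the paper's contribution is recovering the compact Householder-vector representation of this Q without giving up TSQR's communication efficiency.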

  20. Three dimensional adaptive mesh refinement on a spherical shell for atmospheric models with lagrangian coordinates

    NASA Astrophysics Data System (ADS)

    Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael

    2007-07-01

    One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.

  1. GOGrapher: A Python library for GO graph representation and analysis

    PubMed Central

    Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua

    2009-01-01

    Background The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. Findings An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. Conclusion The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve. PMID:19583843
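    The abstract does not show GOGrapher's API. As a minimal, hypothetical sketch of the data structure it manipulates (a directed acyclic graph of GO terms linked by is_a edges), the class and method names below are invented for illustration and are not GOGrapher's:

```python
from collections import defaultdict

class GODag:
    """Minimal GO-style directed acyclic graph: nodes are term IDs and
    each edge points from a child term to a parent term (is_a)."""

    def __init__(self):
        self.parents = defaultdict(set)

    def add_edge(self, child, parent):
        self.parents[child].add(parent)

    def ancestors(self, term):
        """Every term reachable by following parent edges upward."""
        seen, stack = set(), [term]
        while stack:
            for p in self.parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

# A tiny fragment of the real ontology:
dag = GODag()
dag.add_edge("GO:0006915", "GO:0012501")  # apoptotic process is_a programmed cell death
dag.add_edge("GO:0012501", "GO:0008219")  # programmed cell death is_a cell death
dag.add_edge("GO:0008219", "GO:0008150")  # cell death is_a biological_process
```

    Real GO graphs add typed edges (part_of, regulates) and protein annotation nodes, which is where a dedicated library such as GOGrapher earns its keep.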

  2. Six Preparedness Strategies for Librarians in Tough Economic Times

    ERIC Educational Resources Information Center

    MacKellar, Pamela

    2010-01-01

    It is no secret that library budgets are in a downward spiral like the rest of the economy. The recent annual budget survey by the "Library Journal" indicates that per capita funding for libraries will decline 1.6%, and total library budgets will be reduced by 2.6% in FY 2010. Librarians are all too familiar with this bad news, and some of them…

  3. MatProps: Material Properties Database and Associated Access Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrenberger, J K; Becker, R C; Goto, D M

    2007-08-13

    Coefficients for analytic constitutive and equation of state models (EOS), which are used by many hydro codes at LLNL, are currently stored in a legacy material database (Steinberg, UCRL-MA-106349). Parameters for numerous materials are available through this database, and include Steinberg-Guinan and Steinberg-Lund constitutive models for metals, JWL equations of state for high explosives, and Mie-Grüneisen equations of state for metals. These constitutive models are used in most of the simulations done by ASC codes today at Livermore. Analytic EOSs are also still used, but have been superseded in many cases by tabular representations in LEOS (http://leos.llnl.gov). Numerous advanced constitutive models have been developed and implemented into ASC codes over the past 20 years. These newer models have more physics and better representations of material strength properties than their predecessors, and therefore more model coefficients. However, a material database of these coefficients is not readily available. Therefore incorporating these coefficients with those of the legacy models into a portable database that could be shared amongst codes would be most welcome. The goal of this paper is to describe the MatProp effort at LLNL to create such a database and associated access library that could be used by codes throughout the DOE complex and beyond. We have written an initial version of the MatProp database and access library, and our DOE/ASC code ALE3D (Nichols et al., UCRL-MA-152204) is able to import information from the database. The database, a link to which exists on the Sourceforge server at LLNL, contains coefficients for many materials and models (see Appendix), and includes material parameters in the following categories: flow stress, shear modulus, strength, damage, and equation of state.
Future versions of the Matprop database and access library will include the ability to read and write material descriptions that can be exchanged between codes. It will also include an ability to do unit changes, i.e., have the library return parameters in user-specified unit systems. In addition to these, additional material categories can be added (e.g., phase change kinetics, etc.). The Matprop database and access library is part of a larger set of tools used at LLNL for assessing material model behavior. One of these is MSlib, a shared constitutive material model library. Another is the Material Strength Database (MSD), which allows users to compare parameter fits for specific constitutive models to available experimental data. Together with Matprop, these tools create a suite of capabilities that provide state-of-the-art models and parameters for those models to integrated simulation codes. This document is broken into several appendices. Appendix A contains a code example to retrieve several material coefficients. Appendix B contains the API for the Matprop data access library. Appendix C contains a list of the material names and model types currently available in the Matprop database. Appendix D contains a list of the parameter names for the currently recognized model types. Appendix E contains a full xml description of the material Tantalum.

  4. Digital transcriptome profiling using selective hexamer priming for cDNA synthesis.

    PubMed

    Armour, Christopher D; Castle, John C; Chen, Ronghua; Babak, Tomas; Loerch, Patrick; Jackson, Stuart; Shah, Jyoti K; Dey, John; Rohl, Carol A; Johnson, Jason M; Raymond, Christopher K

    2009-09-01

    We developed a procedure for the preparation of whole transcriptome cDNA libraries depleted of ribosomal RNA from only 1 µg of total RNA. The method relies on a collection of short, computationally selected oligonucleotides, called 'not-so-random' (NSR) primers, to obtain full-length, strand-specific representation of nonribosomal RNA transcripts. In this study we validated the technique by profiling human whole brain and universal human reference RNA using ultra-high-throughput sequencing.
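    The abstract does not detail the primer-selection procedure itself. The core idea, priming only with hexamers that never occur in ribosomal RNA so that rRNA templates are skipped during cDNA synthesis, can be sketched as a toy filter. The function name is hypothetical, and real NSR design also accounts for strand complementarity and priming thermodynamics:

```python
from itertools import product

def nonribosomal_hexamers(rrna_seqs):
    """Toy 'not-so-random' primer selection: enumerate all 4**6 hexamers
    and drop any that occur as a substring of a ribosomal RNA sequence."""
    ribosomal = set()
    for seq in rrna_seqs:
        for i in range(len(seq) - 5):
            ribosomal.add(seq[i:i + 6])
    every_hexamer = {"".join(p) for p in product("ACGT", repeat=6)}
    return every_hexamer - ribosomal
```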

  5. Using Integer Manipulatives: Representational Determinism

    ERIC Educational Resources Information Center

    Bossé, Michael J.; Lynch-Davis, Kathleen; Adu-Gyamfi, Kwaku; Chandler, Kayla

    2016-01-01

    Teachers and students commonly use various concrete representations during mathematical instruction. These representations can be utilized to help students understand mathematical concepts and processes, increase flexibility of thinking, facilitate problem solving, and reduce anxiety while doing mathematics. Unfortunately, the manner in which some…

  6. Sawja: Static Analysis Workshop for Java

    NASA Astrophysics Data System (ADS)

    Hubert, Laurent; Barré, Nicolas; Besson, Frédéric; Demange, Delphine; Jensen, Thomas; Monfort, Vincent; Pichardie, David; Turpin, Tiphaine

    Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. Efficiency and precision of such a tool rely partly on low level components which only depend on the syntactic structure of the language and therefore should not be redesigned for each implementation of a new static analysis. This paper describes the Sawja library: a static analysis workshop fully compliant with Java 6 which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including i) efficient functional data-structures for representing a program with implicit sharing and lazy parsing, ii) an intermediate stack-less representation, and iii) fast computation and manipulation of complete programs. We provide experimental evaluations of the different features with respect to time, memory and precision.

  7. Programming gene expression with combinatorial promoters

    PubMed Central

    Cox, Robert Sidney; Surette, Michael G; Elowitz, Michael B

    2007-01-01

    Promoters control the expression of genes in response to one or more transcription factors (TFs). The architecture of a promoter is the arrangement and type of binding sites within it. To understand natural genetic circuits and to design promoters for synthetic biology, it is essential to understand the relationship between promoter function and architecture. We constructed a combinatorial library of random promoter architectures. We characterized 288 promoters in Escherichia coli, each containing up to three inputs from four different TFs. The library design allowed for multiple −10 and −35 boxes, and we observed varied promoter strength over five decades. To further analyze the functional repertoire, we defined a representation of promoter function in terms of regulatory range, logic type, and symmetry. Using these results, we identified heuristic rules for programming gene expression with combinatorial promoters. PMID:18004278

  8. Tuning HDF5 for Lustre File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Koziol, Quincey; Knaak, David

    2010-09-24

    HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performance by up to 33 times, in some cases running close to the achievable peak performance of the underlying file system, and demonstrate scalable performance up to 40,960-way concurrency.

  9. pysimm: A Python Package for Simulation of Molecular Systems

    NASA Astrophysics Data System (ADS)

    Fortunato, Michael; Colina, Coray

    pysimm, short for python simulation interface for molecular modeling, is a python package designed to facilitate the structure generation and simulation of molecular systems through convenient and programmatic access to object-oriented representations of molecular system data. This poster presents core features of pysimm and design philosophies that highlight a generalized methodology for incorporation of third-party software packages through API interfaces. The integration with the LAMMPS simulation package is explained to demonstrate this methodology. pysimm began as a back-end Python library that powered a cloud-based application on nanohub.org for amorphous polymer simulation. The extension from a specific application library to a general-purpose simulation interface is explained. Additionally, this poster highlights the rapid development of new applications to construct polymer chains capable of controlling chain morphology, such as molecular weight distribution and monomer composition.

  10. Reducing codon redundancy and screening effort of combinatorial protein libraries created by saturation mutagenesis.

    PubMed

    Kille, Sabrina; Acevedo-Rocha, Carlos G; Parra, Loreto P; Zhang, Zhi-Gang; Opperman, Diederik J; Reetz, Manfred T; Acevedo, Juan Pablo

    2013-02-15

    Saturation mutagenesis probes define sections of the vast protein sequence space. However, even if randomization is limited this way, the combinatorial numbers problem is severe. Because diversity is created at the codon level, codon redundancy is a crucial factor determining the necessary effort for library screening. Additionally, due to the probabilistic nature of the sampling process, oversampling is required to ensure library completeness as well as a high probability of encountering all unique variants. Our trick employs a special mixture of three primers, creating a degeneracy of 22 unique codons coding for the 20 canonical amino acids. Therefore, codon redundancy and subsequent screening effort are significantly reduced, and a balanced distribution of codons per amino acid is achieved, as demonstrated for a library of cyclohexanone monooxygenase. We show that this strategy is suitable for any saturation mutagenesis methodology to generate less-redundant libraries.
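    The three-primer mixture described here is the NDT/VHG/TGG combination, whose combinatorics can be checked directly against the standard genetic code. The short script below is an independent check, not the authors' code:

```python
from itertools import product

# IUPAC degenerate-base codes appearing in the three primers
IUPAC = {"N": "ACGT", "D": "AGT", "V": "ACG", "H": "ACT", "T": "T", "G": "G"}

# Standard genetic code, codons enumerated with bases in TCAG order
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

def expand(degenerate):
    """All concrete codons covered by a degenerate codon such as 'NDT'."""
    return {"".join(c) for c in product(*(IUPAC[b] for b in degenerate))}

codons = expand("NDT") | expand("VHG") | expand("TGG")  # 12 + 9 + 1 codons
amino_acids = {CODON_TABLE[c] for c in codons}
print(len(codons), len(amino_acids))  # 22 unique codons, 20 amino acids
```

    Compare with NNK saturation, which needs 32 codons for the same 20 amino acids and includes a stop codon; the 22-codon mixture is stop-free and nearly non-redundant.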

  11. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.

    PubMed

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-03-31

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.

  12. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  13. Competency-Based Education Programs: A Library Perspective

    ERIC Educational Resources Information Center

    Sanders, Colleen

    2015-01-01

    Competency-based education (CBE) is an emerging model for higher education designed to reduce certain barriers to educational attainment. This essay describes CBE and the challenges and opportunities for academic librarians desiring to serve students and faculty in Library and Information Management Master of Library Science (MLS) programs. Every…

  14. In-Process Items on LCS.

    ERIC Educational Resources Information Center

    Russell, Thyra K.

    Morris Library at Southern Illinois University computerized its technical processes using the Library Computer System (LCS), which was implemented in the library to streamline order processing by: (1) providing up-to-date online files to track in-process items; (2) encouraging quick, efficient accessing of information; (3) reducing manual files;…

  15. 75 FR 22631 - Notice of Continuance for General Clearance for Guidelines, Applications, and Reporting Forms

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-29

    ... Guidelines, Applications, and Reporting Forms AGENCY: Institute of Museum and Library Services. ACTION... Library Service (IMLS) as part of its continuing effort to reduce paperwork and respondent burden... assessed. The Institute of Museum and Library Services is currently soliciting comments on IMLS program...

  16. Outsourcing in American Libraries--An Overview.

    ERIC Educational Resources Information Center

    Bordeianu, Sever; Benaud, Claire-Lise

    1997-01-01

    Discusses the state of outsourcing in American libraries. Highlights include objectives (to reduce cost, increase the quality of service, and achieve a better price/performance objective); operations that can be outsourced; pros and cons; changes in the way library personnel view their work; outsourcing in special, public, academic, and federal…

  17. Time Patterns in Remote OPAC Use.

    ERIC Educational Resources Information Center

    Lucas, Thomas A.

    1993-01-01

    Describes a transaction log analysis of the New York Public Library research libraries' OPAC (online public access catalog). Much of the remote searching occurred when the libraries were closed and was more evenly distributed than internal searching, demonstrating that remote searching could expand access and reduce peak system loads. (Contains…

  18. Creating the User-Friendly Library by Evaluating Patron Perception of Signage.

    ERIC Educational Resources Information Center

    Bosman, Ellen; Rusinek, Carol

    1997-01-01

    Librarians at Indiana University Northwest Library surveyed patrons on how to make the library's collection and services more accessible by improving signage. Examines the effectiveness of signage to instruct users, reduce difficulties and fears, ameliorate negative experiences, and contribute to a user-friendly environment. (AEF)

  19. A Dangerous Occupation? Violence in Public Libraries.

    ERIC Educational Resources Information Center

    Farrugia, Sarah

    2002-01-01

    Outlines the problem of violence in U.S. and British public libraries, including group incidents, drunks, unruly youths, and irate patrons. Library staff face managerial apathy and reluctance to tackle the problem. Discusses the reasons for violence and suggests measures to reduce threats and deal with incidents, including risk assessment, security measures and staff…

  20. Electronic Publishing and Collection Development, a Subscription Agent's View.

    ERIC Educational Resources Information Center

    Wallas, Philip

    Trends in publishing, advances in technology and pressures on library budgets have combined to put libraries and publishers at odds with each other. Research libraries expect broad, easy access to electronic information, greater convenience and faster delivery but at reduced cost. Publishers are exploring new channels for distributing their…

  1. Document Image Parsing and Understanding using Neuromorphic Architecture

    DTIC Science & Technology

    2015-03-01

    ...developed to reduce the processing speed at different layers. In the pattern matching layer, the computing power of multicore processors is explored to reduce the processing... cortex where the complex data is reduced to abstract representations. The abstract representation is compared to stored patterns in massively parallel

  2. Reduced representation bisulphite sequencing of the cattle genome reveals DNA methylation patterns

    USDA-ARS?s Scientific Manuscript database

    Using reduced representation bisulphite sequencing (RRBS), we obtained the first single-base-resolution maps of bovine DNA methylation in ten somatic tissues. In total, we observed 1,868,049 cytosines in the CG-enriched regions. Similar to the methylation patterns in other species, the CG context wa...

  3. Reduced representation bisulphite sequencing of the ten bovine somatic tissues reveals DNA methylation patterns

    USDA-ARS?s Scientific Manuscript database

    As a major component of epigenetics, DNA methylation has been shown to function widely in individual development and various diseases. It has been well studied in model organisms and humans, but limited data exist for economically important animals. Using reduced representation bisulphite sequencing (RRBS),...

  4. The Human EST Ontology Explorer: a tissue-oriented visualization system for ontologies distribution in human EST collections.

    PubMed

    Merelli, Ivan; Caprera, Andrea; Stella, Alessandra; Del Corvo, Marcello; Milanesi, Luciano; Lazzari, Barbara

    2009-10-15

    The NCBI dbEST currently contains more than eight million human Expressed Sequence Tags (ESTs). This wide collection represents an important source of information for gene expression studies, provided it can be inspected according to biologically relevant criteria. EST data can be browsed using different dedicated web resources, which allow investigation of library-specific gene expression levels and comparisons among libraries, highlighting significant differences in gene expression. Nonetheless, no tool is available to examine the distribution of quantitative EST collections in Gene Ontology (GO) categories, nor to retrieve information concerning library-dependent EST involvement in metabolic pathways. In this work we present the Human EST Ontology Explorer (HEOE) http://www.itb.cnr.it/ptp/human_est_explorer, a web facility for comparison of expression levels among libraries from several healthy and diseased tissues. The HEOE provides library-dependent statistics on the distribution of sequences in the GO Directed Acyclic Graph (DAG) that can be browsed at each GO hierarchical level. The tool is based on large-scale BLAST annotation of EST sequences. Due to the huge number of input sequences, this BLAST analysis was performed with the aid of grid computing technology, which is particularly suitable for data-parallel tasks. Relying on the resulting annotation, library-specific distributions of ESTs in the GO graph were inferred. A pathway-based search interface was also implemented for a quick evaluation of the representation of libraries in metabolic pathways. EST processing steps were integrated in a semi-automatic procedure that relies on Perl scripts and stores results in a MySQL database. A PHP-based web interface offers the possibility to simultaneously visualize, retrieve and compare data from the different libraries. Statistically significant differences in GO categories among user-selected libraries can also be computed.
The HEOE provides an alternative and complementary way to inspect EST expression levels with respect to approaches currently offered by other resources. Furthermore, BLAST computation on the whole human EST dataset was a suitable test of grid scalability in the context of large-scale bioinformatics analysis. The HEOE currently comprises sequence analysis from 70 non-normalized libraries, representing a comprehensive overview on healthy and unhealthy tissues. As the analysis procedure can be easily applied to other libraries, the number of represented tissues is intended to increase.
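
    The library-dependent GO statistics described above reduce, at their core, to counting annotated ESTs per category and normalizing within each library. Below is a minimal pure-Python sketch of that computation; the library names, GO terms, and annotation data are illustrative inventions, not taken from the HEOE.

```python
from collections import Counter

# Hypothetical EST -> GO annotations for two libraries (illustrative data only).
annotations = {
    "liver_lib": ["GO:0008152", "GO:0008152", "GO:0006412", "GO:0005975"],
    "brain_lib": ["GO:0007268", "GO:0007268", "GO:0006412"],
}

def go_distribution(est_go_terms):
    """Return the fraction of a library's ESTs annotated to each GO category."""
    counts = Counter(est_go_terms)
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

dists = {lib: go_distribution(terms) for lib, terms in annotations.items()}
print(dists["liver_lib"]["GO:0008152"])  # 0.5
```

    Comparing such per-library distributions (and testing the differences for significance) is what the HEOE's browsing interface exposes at each GO hierarchical level.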

  5. Stability-Diversity Tradeoffs Impose Fundamental Constraints on Selection of Synthetic Human VH/VL Single-Domain Antibodies from In Vitro Display Libraries

    PubMed Central

    Henry, Kevin A.; Kim, Dae Young; Kandalaft, Hiba; Lowden, Michael J.; Yang, Qingling; Schrag, Joseph D.; Hussack, Greg; MacKenzie, C. Roger; Tanha, Jamshid

    2017-01-01

    Human autonomous VH/VL single-domain antibodies (sdAbs) are attractive therapeutic molecules, but often suffer from suboptimal stability, solubility and affinity for cognate antigens. Most commonly, human sdAbs have been isolated from in vitro display libraries constructed via synthetic randomization of rearranged VH/VL domains. Here, we describe the design and characterization of three novel human VH/VL sdAb libraries through a process of: (i) exhaustive biophysical characterization of 20 potential VH/VL sdAb library scaffolds, including assessment of expression yield, aggregation resistance, thermostability and tolerance to complementarity-determining region (CDR) substitutions; (ii) in vitro randomization of the CDRs of three VH/VL sdAb scaffolds, with tailored amino acid representation designed to promote solubility and expressibility; and (iii) systematic benchmarking of the three VH/VL libraries by panning against five model antigens. We isolated ≥1 antigen-specific human sdAb against four of five targets (13 VHs and 7 VLs in total); these were predominantly monomeric, had antigen-binding affinities ranging from 5 nM to 12 µM (average: 2–3 µM), but had highly variable expression yields (range: 0.1–19 mg/L). Despite our efforts to identify the most stable VH/VL scaffolds, selection of antigen-specific binders from these libraries was unpredictable (overall success rate for all library-target screens: ~53%) with a high attrition rate of sdAbs exhibiting false positive binding by ELISA. By analyzing VH/VL sdAb library sequence composition following selection for monomeric antibody expression (binding to protein A/L followed by amplification in bacterial cells), we found that some VH/VL sdAbs had marked growth advantages over others, and that the amino acid composition of the CDRs of this set of sdAbs was dramatically restricted (bias toward Asp and His and away from aromatic and hydrophobic residues). 
Thus, CDR sequence clearly has a dramatic impact on the stability of human autonomous VH/VL immunoglobulin domain folds, and sequence-stability tradeoffs must be taken into account during the design of such libraries. PMID:29375542

  6. Using Lighting Levels to Control Sound Levels in a College Library.

    ERIC Educational Resources Information Center

    Hronek, Beth

    1997-01-01

    Many libraries have noise problems that can't be fixed with ceiling and carpet treatments, physical arrangement, or sound barriers. This study at Henderson Community College (Henderson KY) attempted to confirm results from an earlier study suggesting that reducing light levels led to reduced noise. The data showed mixed results, but overall the…

  7. Automation of Oklahoma School Library Media Centers: Automation at the Local Level.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Education, Oklahoma City. Library and Learning Resources Section.

    This document outlines a workshop for media specialists--"School Library Automation: Solving the Puzzle"--that is designed to reduce automation anxiety and give a broad overview of the concerns confronting school library media centers planning for or involved in automation. Issues are addressed under the following headings: (1) Levels of School…

  8. Public Opinion toward User Fees in Public Libraries.

    ERIC Educational Resources Information Center

    Kinnucan, Mark T.; Estabrook, Leigh; Ferguson, Mark R.

    1998-01-01

    A reanalysis of data from a national telephone poll (n=1,181) conducted in 1991 determined that if local libraries faced a fiscal crisis, 47% favored raising taxes, 44% preferred instituting user fees, and 9% advocated reducing services. Frequent library use, urban residence, higher level of education, and greater income were associated with a…

  9. 76 FR 31367 - Notice of Proposed Information Collection Requests: Sustaining Digitized Special Collections and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... Library Services. ACTION: Notice, request for comments, collection of information. SUMMARY: The Institute of Museum and Library Services (IMLS), as part of its continuing effort to reduce paperwork and..., Institute of Museum and Library Services, 1800 M Street, NW., 9th Floor, Washington, DC 20036. Telephone...

  10. 77 FR 27486 - Notice of Continuance for General Clearance for Guidelines, Applications and Reporting Forms

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-10

    ... Guidelines, Applications and Reporting Forms AGENCY: Institute of Museum and Library Services, National.... SUMMARY: The Institute of Museum and Library Services (IMLS), as part of its continuing effort to reduce... Library Services, 1800 M Street NW., 9th Floor, Washington, DC 20036. Ms. Miller can be reached by...

  11. 76 FR 71080 - Notice of Proposed Information Collection Requests: Let's Move Museums, Let's Move Gardens

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ..., library, and information services. The policy research, analysis, and data collection is used to: Identify... Requests: Let's Move Museums, Let's Move Gardens AGENCY: Institute of Museum and Library Services, National.... SUMMARY: The Institute of Museum and Library Services (IMLS), as part of its continuing effort to reduce...

  12. Modeling of frequency agile devices: development of PKI neuromodeling library based on hierarchical network structure

    NASA Astrophysics Data System (ADS)

    Sanchez, P.; Hinojosa, J.; Ruiz, R.

    2005-06-01

    Recently, neuromodeling methods for microwave devices have been developed. These methods are suitable for generating models of novel devices and allow fast and accurate simulations and optimizations. However, developing libraries with these methods is a formidable task, since they require massive input-output data provided by an electromagnetic simulator or measurements, and repeated artificial neural network (ANN) training. This paper presents a strategy that reduces the cost of library development while retaining the advantages of the neuromodeling methods: high accuracy, a large range of geometrical and material parameters, and reduced CPU time. The library models are developed from a set of base prior knowledge input (PKI) models, which capture the characteristics common to all the models in the library, and high-level ANNs which produce the library model outputs from the base PKI models. This technique is illustrated for a microwave multiconductor tunable phase shifter using anisotropic substrates. Closed-form relationships have been developed and are presented in this paper. The results show good agreement with the expected ones.

  13. FDTD Simulation of Novel Polarimetric and Directional Reflectance and Transmittance Measurements from Optical Nano- and Micro-Structured Materials

    DTIC Science & Technology

    2012-03-22

    ...structures and lead to better designs. Appendix A: Particle Swarm Optimization Algorithm. In order to validate the need for a new BSDF model... Hierarchy representation of a subset of ScatMech BSDF library model classes... polarimetric BRDF at λ=4.3μm of SPP structures with Λ=1.79μm (left), 2μm (middle) and 2.33μm (right). All components are normalized by dividing by s0

  14. Reduced Pseudoneglect for Physical Space, but Not Mental Representations of Space, for Adults with Autistic Traits

    ERIC Educational Resources Information Center

    English, Michael C.; Maybery, Murray T.; Visser, Troy A.

    2017-01-01

    Neurotypical individuals display a leftward attentional bias, called pseudoneglect, for physical space (e.g. landmark task) and mental representations of space (e.g. mental number line bisection). However, leftward bias is reduced in autistic individuals viewing faces, and neurotypical individuals with autistic traits viewing "greyscale"…

  15. Visual memory transformations in dyslexia.

    PubMed

    Barnes, James; Hinkley, Lisa; Masters, Stuart; Boubert, Laura

    2007-06-01

    Representational Momentum refers to observers' distortion of recognition memory for pictures that imply motion because of an automatic mental process which extrapolates along the implied trajectory of the picture. Neuroimaging evidence suggests that activity in the magnocellular visual pathway is necessary for representational momentum to occur. It has been proposed that individuals with dyslexia have a magnocellular deficit, so it was hypothesised that these individuals would show reduced or absent representational momentum. In this study, 30 adults with dyslexia and 30 age-matched controls were compared on two tasks, one linear and one rotation, which had previously elicited the representational momentum effect. Analysis indicated significant differences in the performance of the two groups, with the dyslexia group having a reduced susceptibility to representational momentum in both linear and rotational directions. The findings highlight that deficits in temporal spatial processing may contribute to the perceptual profile of dyslexia.

  16. Analysis of cDNA libraries from developing seeds of guar (Cyamopsis tetragonoloba (L.) Taub)

    PubMed Central

    Naoumkina, Marina; Torres-Jerez, Ivone; Allen, Stacy; He, Ji; Zhao, Patrick X; Dixon, Richard A; May, Gregory D

    2007-01-01

    Background Guar, Cyamopsis tetragonoloba (L.) Taub, is a member of the Leguminosae (Fabaceae) family and is economically the most important of the four species in the genus. The endosperm of guar seed is a rich source of mucilage or gum, which forms a viscous gel in cold water, and is used as an emulsifier, thickener and stabilizer in a wide range of foods and industrial applications. Guar gum is a galactomannan, consisting of a linear (1→4)-β-linked D-mannan backbone with single-unit, (1→6)-linked, α-D-galactopyranosyl side chains. To better understand regulation of guar seed development and galactomannan metabolism we created cDNA libraries and a resulting EST dataset from different developmental stages of guar seeds. Results A database of 16,476 guar seed ESTs was constructed, with 8,163 and 8,313 ESTs derived from cDNA libraries I and II, respectively. Library I was constructed from seeds at an early developmental stage (15–25 days after flowering, DAF), and library II from seeds at 30–40 DAF. Quite different sets of genes were represented in these two libraries. Approximately 27% of the clones were not similar to known sequences, suggesting that these ESTs represent novel genes or may represent non-coding RNA. The high flux of energy into carbohydrate and storage protein synthesis in guar seeds was reflected by a high representation of genes annotated as involved in signal transduction, carbohydrate metabolism, chaperone and proteolytic processes, and translation and ribosome structure. Guar unigenes involved in galactomannan metabolism were identified. Among the seed storage proteins, the most abundant contig represented a conglutin accounting for 3.7% of the total ESTs from both libraries. Conclusion The present EST collection and its annotation provide a resource for understanding guar seed biology and galactomannan metabolism. PMID:18034910

  17. High efficiency family shuffling based on multi-step PCR and in vivo DNA recombination in yeast: statistical and functional analysis of a combinatorial library between human cytochrome P450 1A1 and 1A2.

    PubMed

    Abécassis, V; Pompon, D; Truan, G

    2000-10-15

    The design of a family shuffling strategy (CLERY: Combinatorial Libraries Enhanced by Recombination in Yeast) associating PCR-based and in vivo recombination and expression in yeast is described. This strategy was tested using human cytochrome P450 CYP1A1 and CYP1A2 as templates, which share 74% nucleotide sequence identity. Construction of highly shuffled libraries of mosaic structures and reduction of parental gene contamination were two major goals. Library characterization involved multiprobe hybridization on DNA macro-arrays. The statistical analysis of randomly selected clones revealed a high proportion of chimeric genes (86%) and a homogeneous representation of the parental contribution among the sequences (55.8 +/- 2.5% for parental sequence 1A2). A microtiter plate screening system was designed to achieve colorimetric detection of polycyclic hydrocarbon hydroxylation by transformed yeast cells. Full sequences of five randomly picked and five functionally selected clones were analyzed. Results confirmed the shuffling efficiency and allowed calculation of the average length of sequence exchange and mutation rates. The efficient and statistically representative generation of mosaic structures by this type of family shuffling in a yeast expression system constitutes a novel and promising tool for structure-function studies and tuning enzymatic activities of multicomponent eucaryote complexes involving non-soluble enzymes.

  18. EAGLE: "EAGLE 'Is an' Algorithmic Graph Library for Exploration"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-01-16

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible, schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged, yet there are no tools to conduct "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' Algorithmic Graph Library for Exploration." EAGLE is like "MATLAB" for "Linked Data."
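
    The kind of aggregation EAGLE expresses as SPARQL queries can be illustrated in plain Python. Below is a minimal sketch of a degree-distribution computation over toy RDF-style triples; the data, and the SPARQL shown in the comment, are illustrative assumptions rather than EAGLE's actual queries.

```python
from collections import Counter

# Toy RDF-style triples (subject, predicate, object); names are illustrative.
triples = [
    ("a", "knows", "b"), ("a", "knows", "c"),
    ("b", "knows", "c"), ("c", "knows", "a"),
]

def out_degree_distribution(triples):
    """Map each out-degree value to the number of subjects having it.
    Roughly the shape of a nested SPARQL aggregation such as:
      SELECT ?deg (COUNT(?s) AS ?n) WHERE {
        SELECT ?s (COUNT(?o) AS ?deg) WHERE { ?s ?p ?o } GROUP BY ?s
      } GROUP BY ?deg
    """
    deg = Counter(s for s, _, _ in triples)
    return Counter(deg.values())

print(out_degree_distribution(triples))  # Counter({1: 2, 2: 1})
```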

  19. Simplified Interface to Complex Memory Hierarchies 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Michael; Ionkov, Latchesar; Williams, Sean

    2017-02-21

    Memory systems are expected to get evermore complicated in the coming years, and it isn't clear exactly what form that complexity will take. On the software side, a simple, flexible way of identifying and working with memory pools is needed. Additionally, most developers seek code portability and do not want to learn the intricacies of complex memory. Hence, we believe that a library for interacting with complex memory systems should expose two kinds of abstraction: First, a low-level, mechanism-based interface designed for the runtime or advanced user that wants complete control, with its focus on simplified representation but with all decisions left to the caller. Second, a high-level, policy-based interface designed for ease of use for the application developer, in which we aim for best-practice decisions based on application intent. We have developed such a library, called SICM: Simplified Interface to Complex Memory.
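
    The two abstraction levels described above can be sketched as follows. This is a hypothetical Python illustration of the mechanism-vs-policy split, not SICM's actual C API; the pool names and attribute values are invented for the example.

```python
# Hypothetical memory pools standing in for hardware memory kinds
# (numbers are illustrative, e.g. GB/s and GiB).
POOLS = {"dram": {"bandwidth": 90, "capacity": 512},
         "hbm":  {"bandwidth": 400, "capacity": 16}}

def alloc_from(pool, nbytes):
    """Low-level, mechanism-style call: the caller names the pool explicitly."""
    assert pool in POOLS
    return (pool, bytearray(nbytes))

def alloc_by_intent(intent, nbytes):
    """High-level, policy-style call: the library picks a pool from intent."""
    pool = max(POOLS, key=lambda p: POOLS[p][intent])
    return alloc_from(pool, nbytes)

print(alloc_by_intent("bandwidth", 64)[0])  # hbm
```

    The policy layer is built on top of the mechanism layer, so an advanced runtime can bypass it entirely, which mirrors the layering the abstract argues for.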

  20. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
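
    The derive-if-missing behavior of the generated "get" methods can be sketched as follows; the class, attributes, and transformation are hypothetical, illustrating the pattern rather than the patent's generated code.

```python
class GeneRecord:
    """Sketch of a translation-library class whose "get" method derives a
    missing attribute via a transformation (names are illustrative)."""
    def __init__(self, start=None, end=None, length=None):
        self._start, self._end, self._length = start, end, length

    def get_length(self):
        # Derive the value on demand if it is missing but derivable.
        if self._length is None and None not in (self._start, self._end):
            self._length = self._end - self._start
        return self._length

r = GeneRecord(start=100, end=250)
print(r.get_length())  # 150
```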

  1. GPU-accelerated simulations of isolated black holes

    NASA Astrophysics Data System (ADS)

    Lewis, Adam G. M.; Pfeiffer, Harald P.

    2018-05-01

    We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.
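
    The matrix-caching strategy described above, tracing a linear transformation by applying it to basis vectors and then replaying it as a plain matrix multiply (e.g. via a GPU matrix-multiplication library), can be sketched in pure Python. The transform used here is an illustrative finite-difference operator, not one of SpEC's.

```python
def trace_to_matrix(transform, n):
    """Build the explicit n x n matrix of a linear transform by applying it
    to each basis vector, so the operation can later be performed as a
    matrix multiply by a pre-existing library."""
    cols = [transform([1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose

def matvec(m, v):
    """Apply the cached matrix: plain matrix-vector product."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# Example linear transform: forward difference, zero-padded at the end.
diff = lambda v: [v[i + 1] - v[i] for i in range(len(v) - 1)] + [0.0]
m = trace_to_matrix(diff, 4)
print(matvec(m, [0.0, 1.0, 4.0, 9.0]))  # [1.0, 3.0, 5.0, 0.0]
```

    Tracing costs n applications of the transform, but is done once; every subsequent application is a single matmul, which is exactly the shape of work GPUs handle well.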

  2. Ligation Bias in Illumina Next-Generation DNA Libraries: Implications for Sequencing Ancient Genomes

    PubMed Central

    Seguin-Orlando, Andaine; Schubert, Mikkel; Clary, Joel; Stagegaard, Julia; Alberdi, Maria T.; Prado, José Luis; Prieto, Alfredo; Willerslev, Eske; Orlando, Ludovic

    2013-01-01

    Ancient DNA extracts consist of a mixture of endogenous molecules and contaminant DNA templates, often originating from environmental microbes. These two populations of templates exhibit different chemical characteristics, with the former showing depurination and cytosine deamination by-products resulting from post-mortem DNA damage. Such chemical modifications can interfere with the molecular tools used for building second-generation DNA libraries and limit our ability to fully characterize the true complexity of ancient DNA extracts. In this study, we first use fresh DNA extracts to demonstrate that library preparation based on adapter ligation at AT-overhangs is biased against DNA templates starting with thymine residues, in contrast to blunt-end adapter ligation. We observe the same bias on fresh DNA extracts sheared on Bioruptor, Covaris and nebulizer instruments. This contradicts previous reports suggesting that the bias could originate from the methods used for shearing DNA. It also suggests that AT-overhang adapter ligation efficiency is affected in a sequence-dependent manner and results in an uneven representation of different genomic contexts. We then show how this bias could affect the base composition of ancient DNA libraries prepared following AT-overhang ligation, mainly by limiting the ability to ligate DNA templates starting with thymines and therefore deaminated cytosines. This results in particular nucleotide misincorporation damage patterns, deviating from the signature generally expected for authenticating ancient sequence data. Consequently, we show that models adequate for estimating post-mortem DNA damage levels must be robust to the molecular tools used for building ancient DNA libraries. PMID:24205269
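
    A first check for this kind of ligation bias is simply the first-base composition of the sequenced reads: depletion of reads starting with 'T' relative to the genomic background would flag the AT-overhang effect described above. A minimal sketch with made-up reads:

```python
from collections import Counter

def first_base_fractions(reads):
    """Fraction of reads starting with each nucleotide."""
    counts = Counter(r[0] for r in reads if r)
    total = sum(counts.values())
    return {b: counts.get(b, 0) / total for b in "ACGT"}

# Illustrative reads only, not data from the study.
reads = ["ACCT", "TGCA", "GGAT", "AATC", "CGTA", "ATTA", "GCCA", "CATG"]
print(first_base_fractions(reads)["T"])  # 0.125
```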

  3. Status of the AIAA Modeling and Simulation Format Standard

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2008-01-01

    The current draft AIAA Standard for flight simulation models represents an on-going effort to improve the productivity of practitioners of the art of digital flight simulation (one of the original digital computer applications). This initial release provides the capability for the efficient representation and exchange of an aerodynamic model in full fidelity; the DAVE-ML format can be easily imported (with development of site-specific import tools) in an unambiguous way with automatic verification. An attractive feature of the standard is the ability to coexist with existing legacy software or tools. The draft Standard is currently limited in scope to static elements of dynamic flight simulations; however, these static elements represent the bulk of typical flight simulation mathematical models. It is already seeing application within U.S. and Australian government agencies in an effort to improve productivity and reduce model rehosting overhead. An existing tool allows import of DAVE-ML models into a popular simulation modeling and analysis tool, and other community-contributed tools and libraries can simplify the use of DAVE-ML compliant models at compile- or run-time of high-fidelity flight simulation.

  4. Reduced set averaging of face identity in children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina

    2015-01-01

    Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.

  5. Visual learning with reduced adaptation is eccentricity-specific.

    PubMed

    Harris, Hila; Sagi, Dov

    2018-01-12

    Visual learning is known to be specific to the trained target location, showing little transfer to untrained locations. Recently, learning was shown to transfer across equal-eccentricity retinal locations when sensory adaptation due to repetitive stimulation was minimized. It was suggested that learning transfers to previously untrained locations when the learned representation is location-invariant, with sensory adaptation introducing location-dependent representations and thus preventing transfer. Spatial invariance may also fail when the trained and tested locations are at different distances from the center of gaze (different retinal eccentricities), due to differences in the corresponding low-level cortical representations (e.g. allocated cortical area decreases with eccentricity). Thus, if learning improves performance by better classifying target-dependent early visual representations, generalization is predicted to fail when locations of different retinal eccentricities are trained and tested in the absence of sensory adaptation. Here, using the texture discrimination task, we show specificity of learning across different retinal eccentricities (4-8°) using reduced-adaptation training. The existence of generalization across equal-eccentricity locations but not across different eccentricities demonstrates that learning accesses visual representations preceding location-independent representations, with specificity of learning explained by inhomogeneous sensory representation.

  6. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    ERIC Educational Resources Information Center

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…

  7. Targeted RNA-Sequencing with Competitive Multiplex-PCR Amplicon Libraries

    PubMed Central

    Blomquist, Thomas M.; Crawford, Erin L.; Lovett, Jennie L.; Yeo, Jiyoun; Stanoszek, Lauren M.; Levin, Albert; Li, Jia; Lu, Mei; Shi, Leming; Muldrew, Kenneth; Willey, James C.

    2013-01-01

    Whole transcriptome RNA-sequencing is a powerful tool, but is costly and yields complex data sets that limit its utility in molecular diagnostic testing. A targeted quantitative RNA-sequencing method that is reproducible and reduces the number of sequencing reads required to measure transcripts over the full range of expression would be better suited to diagnostic testing. Toward this goal, we developed a competitive multiplex PCR-based amplicon sequencing library preparation method that a) targets only the sequences of interest and b) controls for inter-target variation in PCR amplification during library preparation by measuring each transcript native template relative to a known number of synthetic competitive template internal standard copies. To determine the utility of this method, we intentionally selected PCR conditions that would cause transcript amplification products (amplicons) to converge toward equimolar concentrations (normalization) during library preparation. We then tested whether this approach would enable accurate and reproducible quantification of each transcript across multiple library preparations, and at the same time reduce (through normalization) total sequencing reads required for quantification of transcript targets across a large range of expression. We demonstrate excellent reproducibility (R2 = 0.997) with 97% accuracy to detect 2-fold change using External RNA Controls Consortium (ERCC) reference materials; high inter-day, inter-site and inter-library concordance (R2 = 0.97–0.99) using FDA Sequencing Quality Control (SEQC) reference materials; and cross-platform concordance with both TaqMan qPCR (R2 = 0.96) and whole transcriptome RNA-sequencing following “traditional” library preparation using Illumina NGS kits (R2 = 0.94). 
Using this method, the sequencing reads required to accurately quantify more than 100 targeted transcripts expressed over a 10^7-fold range were reduced more than 10,000-fold, from 2.3×10^9 to 1.4×10^5 sequencing reads. These studies demonstrate that the competitive multiplex-PCR amplicon library preparation method presented here provides the quality control, reproducibility, and reduced sequencing reads necessary for development and implementation of targeted quantitative RNA-sequencing biomarkers in molecular diagnostic testing. PMID:24236095
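The competitive internal-standard idea above reduces to simple ratio arithmetic: native copies are estimated from the ratio of native to internal-standard (IS) reads, scaled by the known IS copy number. A minimal sketch with hypothetical read counts (the function name and numbers are illustrative, not from the paper):

```python
def native_copies(native_reads, is_reads, is_copies):
    """Estimate native transcript copies from the read ratio against a
    known number of internal-standard (IS) competitor copies."""
    if is_reads == 0:
        raise ValueError("internal standard yielded no reads")
    return native_reads / is_reads * is_copies

# Hypothetical counts: 12,000 native reads vs 3,000 IS reads, 500 IS copies spiked in
print(native_copies(12000, 3000, 500))  # -> 2000.0
```

Because both native and IS templates pass through the same PCR cycles, amplification bias cancels in the ratio, which is what makes the measurement robust to the intentional normalization of amplicon concentrations.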

  8. Automatic generation of efficient array redistribution routines for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Ramaswamy, Shankar; Banerjee, Prithviraj

    1994-01-01

    Appropriate data distribution has been found to be critical for obtaining good performance on Distributed Memory Multicomputers like the CM-5, Intel Paragon and IBM SP-1. It has also been found that some programs need to change their distributions during execution for better performance (redistribution). This work focuses on automatically generating efficient routines for redistribution. We present a new mathematical representation for regular distributions called PITFALLS and then discuss algorithms for redistribution based on this representation. One of the significant contributions of this work is being able to handle arbitrary source and target processor sets while performing redistribution. Another important contribution is the ability to handle an arbitrary number of dimensions for the array involved in the redistribution in a scalable manner. Our implementation of these techniques is based on an MPI-like communication library. The results presented show the low overheads for our redistribution algorithm as compared to naive runtime methods.
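For contrast with the PITFALLS-based algorithm, the naive element-by-element approach (the runtime baseline the authors compare against) can be sketched as follows; the owner formulas are the standard BLOCK/CYCLIC distribution rules, not the paper's representation:

```python
def block_owner(i, n, p):
    """Owner of element i under a BLOCK distribution of n elements over p procs."""
    b = -(-n // p)          # ceil(n / p): block size
    return i // b

def cyclic_owner(i, n, p):
    """Owner of element i under a CYCLIC distribution."""
    return i % p

def redistribution_schedule(n, p):
    """Naive schedule for a BLOCK -> CYCLIC redistribution: maps
    (source proc, target proc) -> list of element indices to transfer."""
    sched = {}
    for i in range(n):
        key = (block_owner(i, n, p), cyclic_owner(i, n, p))
        sched.setdefault(key, []).append(i)
    return sched

# 8 elements over 2 processors: BLOCK gives 0-3 to P0; CYCLIC gives evens to P0
print(redistribution_schedule(8, 2)[(0, 1)])  # elements P0 must send to P1 -> [1, 3]
```

The point of a compact representation like PITFALLS is precisely to compute such transfer sets in closed form rather than by enumerating every element, which is what makes the generated routines scale.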

  9. Integrated platform and API for electrophysiological data

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L.; Kellner, Christian J.; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines. PMID:24795616
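A client interacting with a REST service like GNData's mostly builds resource URLs and parses JSON listings. A minimal sketch under assumed endpoint and field names (the real GNData API may differ):

```python
import json
from urllib.parse import urljoin, urlencode

class GNDataClient:
    """Minimal sketch of a client for a GNData-style REST service.
    Endpoint paths and JSON field names here are illustrative assumptions,
    not the actual GNData API."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip('/') + '/'

    def resource_url(self, resource, **filters):
        """Build the URL for a resource listing, with optional query filters."""
        url = urljoin(self.base_url, resource + '/')
        return url + ('?' + urlencode(sorted(filters.items())) if filters else '')

    @staticmethod
    def parse_listing(payload):
        """Extract object identifiers from a JSON listing response."""
        return [obj['id'] for obj in json.loads(payload)['objects']]

client = GNDataClient('https://example.org/api')
print(client.resource_url('segments', block='b1'))
# -> https://example.org/api/segments/?block=b1
print(GNDataClient.parse_listing('{"objects": [{"id": "seg-1"}, {"id": "seg-2"}]}'))
# -> ['seg-1', 'seg-2']
```

Wrapping the HTTP layer behind such a class is what lets the Matlab and Python client libraries mentioned above present remote data as ordinary local objects in an analysis session.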

  10. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing.

    PubMed

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Medical Subject Headings (MeSH) indexing, the task of assigning a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and the MeSH side. On the citation side, all existing methods, including the Medical Text Indexer (MTI) of the National Library of Medicine and the state-of-the-art method, MeSHLabeler, represent text as a bag-of-words, which cannot capture semantic and context-dependent information well. We propose DeepMeSH, which incorporates deep semantic information for large-scale MeSH indexing and addresses the challenges on both the citation and MeSH sides. The citation-side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH-side challenge is solved by using the 'learning to rank' framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than the 0.6218 of MeSHLabeler and 12% higher than the 0.5637 of MTI, on BioASQ3 challenge data with 6000 citations. The software is available upon request. zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
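The D2V-TFIDF idea, concatenating a dense document embedding with a sparse TF-IDF vector, can be sketched on a toy corpus; the dense part below is stubbed with fixed numbers standing in for a learned doc2vec embedding:

```python
import math
from collections import Counter

def tfidf_vector(doc_tokens, corpus, vocab):
    """Sparse TF-IDF representation of one document over a fixed vocabulary."""
    n = len(corpus)
    tf = Counter(doc_tokens)
    vec = []
    for term in vocab:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + n) / (1 + df)) + 1  # smoothed IDF
        vec.append(tf[term] * idf)
    return vec

def concat_representation(dense_vec, sparse_vec):
    """D2V-TFIDF-style feature: dense embedding followed by sparse TF-IDF."""
    return list(dense_vec) + list(sparse_vec)

corpus = [["gene", "expression"], ["protein", "expression"]]
vocab = ["gene", "protein", "expression"]
doc = ["gene", "gene", "expression"]
dense = [0.1, -0.4]                     # stand-in for a learned doc2vec embedding
combined = concat_representation(dense, tfidf_vector(doc, corpus, vocab))
print(len(combined))  # 2 dense + 3 sparse dimensions -> 5
```

The combined vector then feeds the 'learning to rank' stage, so the ranker can exploit both exact term evidence (sparse part) and distributional similarity (dense part).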

  11. Integrated platform and API for electrophysiological data.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L; Kellner, Christian J; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines.

  12. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover photon transport with continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  13. Texture Classification by Texton: Statistical versus Binary

    PubMed Central

    Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane

    2014-01-01

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so the recognition accuracy depends heavily on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time-consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods, Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. Experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when the image is not large and its quality is not poor. PMID:24520346
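A classic example of encoding a local feature directly into a binary code, with no trained texton library, is the local binary pattern; it is shown here as an illustration of the binary-texton idea, not as the paper's Binary_MR8/Binary_Joint/Binary_Fractal encodings:

```python
def lbp_code(patch):
    """Local binary pattern of a 3x3 patch: threshold the 8 neighbours at the
    centre value and pack the bits clockwise from the top-left corner."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(lbp_code(patch))  # top row >= 5 -> bits 0, 1, 2 set -> 7
```

Because the code is computed by thresholding alone, assigning a feature costs a handful of comparisons instead of a nearest-neighbour search over a large texton library, which is the speed advantage the abstract describes.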

  14. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interaction data are known with adequate accuracy, but secondary gamma-ray production is known much less precisely. The purpose of this work is to study secondary gamma-ray production from neutron-induced reactions in iron and lead using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. The calculations show that these libraries differ in their photon production data for neutron-induced reactions and agree poorly with an evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data and is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows good agreement. The calculation-to-experiment (C/E) discrepancies for the neutron spectra are within the experimental errors; for the photon spectrum they are outside the experimental errors. Calculations using group-wise and point-wise representations of the cross sections agree well for both photon and neutron spectra.
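Collapsing point-wise cross sections into the group constants such a multigroup library contains is conventionally done by flux weighting, sigma_g = sum(sigma_i * phi_i) / sum(phi_i) over the points in each group. A toy sketch with hypothetical values (not data from any of the libraries named above):

```python
def collapse_to_groups(energies, sigma, flux, group_bounds):
    """Flux-weighted collapse of a point-wise cross section to group constants:
    sigma_g = sum(sigma_i * phi_i) / sum(phi_i) over points in each group."""
    groups = []
    for lo, hi in group_bounds:
        num = den = 0.0
        for e, s, f in zip(energies, sigma, flux):
            if lo <= e < hi:
                num += s * f
                den += f
        groups.append(num / den if den else 0.0)
    return groups

# Hypothetical 4-point grid collapsed into 2 energy groups
energies = [0.5, 1.5, 2.5, 3.5]
sigma    = [4.0, 2.0, 1.0, 0.5]
flux     = [1.0, 1.0, 2.0, 2.0]
print(collapse_to_groups(energies, sigma, flux, [(0, 2), (2, 4)]))  # [3.0, 0.75]
```

The agreement reported between group-wise and point-wise results is essentially a statement that this weighting was done with a spectrum close enough to the actual problem's flux.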

  15. A CAD Approach to Developing Mass Distribution and Composition Models for Spaceflight Radiation Risk Analyses

    NASA Astrophysics Data System (ADS)

    Zapp, E.; Shelfer, T.; Semones, E.; Johnson, A.; Weyland, M.; Golightly, M.; Smith, G.; Dardano, C.

    For roughly the past three decades, combinatorial geometries have been the predominant mode for the development of mass distribution models associated with the estimation of radiological risk for manned space flight. Examples of these are the MEVDP (Modified Elemental Volume Dose Program) vehicle representation of Liley and Hamilton, and the quadratic functional representation of the CAM/CAF (Computerized Anatomical Male/Female) human body models as modified by Billings and Yucker. These geometries have the advantageous characteristics of being simple for a familiarized user to maintain, and because of the relative lack of any operating system or run-time library dependence, they are also easy to transfer from one computing platform to another. Unfortunately, they are also limited in the amount of modeling detail possible, owing to the abstract geometric representation. In addition, combinatorial representations are known to be error-prone in practice, since there is no convenient method for error identification (i.e. overlap, etc.), and extensive calculation and/or manual comparison is often necessary to demonstrate that the geometry is adequately represented. We present an alternate approach linking materials-specific, CAD-based mass models directly to geometric analysis tools, requiring no approximation with respect to materials, nor any meshing (i.e. tessellation) of the representative geometry. A new approach to ray tracing is presented which makes use of the fundamentals of the CAD representation to perform geometric analysis directly on the NURBS (Non-Uniform Rational B-Spline) surfaces themselves. In this way we achieve a framework for the rapid, precise development and analysis of materials-specific mass distribution models.

  16. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
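The hash-table voting scheme can be illustrated in miniature: quantised features index a table, matches cast votes, and only the top-voted models go on to the finer similarity check. The string feature keys below are placeholders for the paper's tensor descriptors:

```python
from collections import defaultdict

def build_hash_table(model_features):
    """Index model features: each quantised feature key maps to the models
    (and feature ids) that produced it."""
    table = defaultdict(list)
    for model, feats in model_features.items():
        for fid, key in enumerate(feats):
            table[key].append((model, fid))
    return table

def vote(scene_features, table):
    """Cast one vote per matching table entry; the best-scoring models would
    then be verified by a finer similarity measure (omitted here)."""
    votes = defaultdict(int)
    for key in scene_features:
        for model, _ in table.get(key, ()):
            votes[model] += 1
    return dict(votes)

models = {"mug": ["a", "b", "c"], "car": ["c", "d"]}
table = build_hash_table(models)
print(vote(["a", "c", "d", "x"], table))  # {'mug': 2, 'car': 2}
```

Voting makes the lookup cost depend on the scene features rather than on the library size, which is why the approach stays efficient as the model library grows.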

  17. Sharing Vital Signs between mobile phone applications.

    PubMed

    Karlen, Walter; Dumont, Guy A; Scheffer, Cornie

    2014-01-01

    We propose a communication library, ShareVitalSigns, for the standardized exchange of vital sign information between health applications running on mobile platforms. The library allows an application to request one or multiple vital signs from independent measurement applications on the Android OS. Compatible measurement applications are automatically detected and can be launched from within the requesting application, simplifying the work flow for the user and reducing typing errors. Data is shared between applications using intents, a passive data structure available on Android OS. The library is accompanied by a test application which serves as a demonstrator. The secure exchange of vital sign information using a standardized library like ShareVitalSigns will facilitate the integration of measurement applications into diagnostic and other high level health monitoring applications and reduce errors due to manual entry of information.

  18. MISSION LentiPlex pooled shRNA library screening in mammalian cells.

    PubMed

    Coussens, Matthew J; Corman, Courtney; Fischer, Ashley L; Sago, Jack; Swarthout, John

    2011-12-21

    RNA interference (RNAi) is an intrinsic cellular mechanism for the regulation of gene expression. Harnessing the innate power of this system enables us to knockdown gene expression levels in loss of gene function studies. There are two main methods for performing RNAi. The first is the use of small interfering RNAs (siRNAs) that are chemically synthesized, and the second utilizes short-hairpin RNAs (shRNAs) encoded within plasmids. The latter can be transfected into cells directly or packaged into replication incompetent lentiviral particles. The main advantages of using lentiviral shRNAs are the ease of introduction into a wide variety of cell types, their ability to stably integrate into the genome for long term gene knockdown and selection, and their efficacy in conducting high-throughput loss of function screens. To facilitate this we have created the LentiPlex pooled shRNA library. The MISSION LentiPlex Human shRNA Pooled Library is a genome-wide lentiviral pool produced using a proprietary process. The library consists of over 75,000 shRNA constructs from the TRC collection targeting 15,000+ human genes. Each library is tested for shRNA representation before product release to ensure robust library coverage. The library is provided in a ready-to-use lentiviral format at titers of at least 5 × 10^8 TU/ml by p24 assay and is pre-divided into ten subpools of approximately 8,000 shRNA constructs each. Amplification and sequencing primers are also provided for downstream target identification. Previous studies established a synergistic antitumor activity of TRAIL when combined with Paclitaxel in A549 cells, a human lung carcinoma cell line. In this study we demonstrate the application of a pooled LentiPlex shRNA library to rapidly conduct a positive selection screen for genes involved in the cytotoxicity of A549 cells when exposed to TRAIL and Paclitaxel. 
One barrier often encountered with high-throughput screens is the cost and difficulty in deconvolution; we also detail a cost-effective polyclonal approach utilizing traditional sequencing.

  19. 2D biological representations with reduced speckle obtained from two perpendicular ultrasonic arrays.

    PubMed

    Rodriguez-Hernandez, Miguel A; Gomez-Sacristan, Angel; Sempere-Payá, Víctor M

    2016-04-29

    Ultrasound diagnosis is a widely used medical tool. Among the various ultrasound techniques, ultrasonic imaging is particularly relevant. This paper presents an improvement to a two-dimensional (2D) ultrasonic system based on measurements taken from perpendicular planes, in which digital signal processing techniques are used to combine one-dimensional (1D) A-scans acquired by individual transducers in arrays located in perpendicular planes. The algorithm used to combine the measurements is improved with the wavelet transform, by including a denoising step in the 2D representation generation process. This new denoising stage yields higher-quality 2D representations with reduced speckle. The paper presents different 2D representations obtained from noisy A-scans and compares the improvements achieved by including the denoising stage.
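A single-level Haar transform with soft thresholding of the detail coefficients is the simplest instance of the wavelet denoising step described above; the actual system's wavelet family, decomposition depth and threshold rule may well differ:

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising of a 1D A-scan (even length):
    transform, soft-threshold the detail coefficients, invert."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    # soft thresholding shrinks small (noise-dominated) details toward zero
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 1.0, 0.8]
print(haar_denoise(noisy, 0.3))
```

Speckle shows up as small high-frequency detail coefficients, so shrinking them suppresses the grain while the large coefficients that carry tissue-boundary echoes pass through mostly untouched.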

  20. Integration of Sparse Multi-modality Representation and Geometrical Constraint for Isointense Infant Brain Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
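As a rough illustration of patch-based label estimation from an aligned library (using k-nearest-patch weighted voting as a simplified stand-in for the authors' sparse representation):

```python
def ssd(p, q):
    """Sum of squared differences between two flattened patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def patch_label(target, library, k=3):
    """Simplified patch-based labelling: instead of a sparse reconstruction,
    take the k library patches closest to the target and vote with weights
    inversely related to their distance."""
    ranked = sorted(library, key=lambda entry: ssd(target, entry[0]))[:k]
    scores = {}
    for patch, label in ranked:
        w = 1.0 / (1.0 + ssd(target, patch))
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Toy library of flattened intensity patches with WM/GM ground-truth labels
library = [([0.1, 0.1, 0.1], "WM"), ([0.0, 0.2, 0.1], "WM"),
           ([0.9, 0.8, 0.9], "GM"), ([0.8, 0.9, 1.0], "GM")]
print(patch_label([0.05, 0.15, 0.1], library, k=3))  # 'WM'
```

In the isointense stage the single-channel distances above would be ambiguous, which is why the paper pools T1, T2 and diffusion channels into one multi-modality patch before matching.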

  1. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    PubMed

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  2. Ontological representation, integration, and analysis of LINCS cell line cells and their cellular responses.

    PubMed

    Ong, Edison; Xie, Jiangan; Ni, Zhaohui; Liu, Qingping; Sarntivijai, Sirarat; Lin, Yu; Cooper, Daniel; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Schürer, Stephan; He, Yongqun

    2017-12-21

    Aiming to understand cellular responses to different perturbations, the NIH Common Fund Library of Integrated Network-based Cellular Signatures (LINCS) program involves many institutes and laboratories working on over a thousand cell lines. The community-based Cell Line Ontology (CLO) is selected as the default ontology for LINCS cell line representation and integration. CLO has consistently represented all 1097 LINCS cell lines and included information extracted from the LINCS Data Portal and ChEMBL. Using MCF 10A cell line cells as an example, we demonstrated how to ontologically model LINCS cellular signatures such as their non-tumorigenic epithelial cell type, three-dimensional growth, latrunculin-A-induced actin depolymerization and apoptosis, and cell line transfection. A CLO subset view of LINCS cell lines, named LINCS-CLOview, was generated to support systematic LINCS cell line analysis and queries. In summary, LINCS cell lines are currently associated with 43 cell types, 131 tissues and organs, and 121 cancer types. The LINCS-CLO view information can be queried using SPARQL scripts. CLO was used to support ontological representation, integration, and analysis of over a thousand LINCS cell line cells and their cellular responses.

  3. Database technology and the management of multimedia data in the Mirror project

    NASA Astrophysics Data System (ADS)

    de Vries, Arjen P.; Blanken, H. M.

    1998-10-01

    Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.

  4. Ctrl "C"-Ctrl "V"; Using Gaming Peripherals to Improve Library Workflows and Enhance Staff Efficiency

    ERIC Educational Resources Information Center

    Litsey, Ryan; Harris, Rea; London, Jessie

    2018-01-01

    Library workflows are an area where repetitive stress can potentially reduce staff efficiency. Day to day activities that require a repetitive motion can bring about physical and psychological fatigue. For library managers, it is important to seek ways in which this type of repetitive stress can be alleviated while having the added benefit of…

  5. Convective Cloud and Rainfall Processes Over the Maritime Continent: Simulation and Analysis of the Diurnal Cycle

    NASA Astrophysics Data System (ADS)

    Gianotti, Rebecca L.

    The Maritime Continent experiences strong moist convection, which produces significant rainfall and drives large fluxes of heat and moisture to the upper troposphere. Despite the importance of these processes to global circulations, current predictions of climate change over this region are still highly uncertain, largely due to inadequate representation of the diurnally-varying processes related to convection. In this work, a coupled numerical model of the land-atmosphere system (RegCM3-IBIS) is used to investigate how more physically-realistic representations of these processes can be incorporated into large-scale climate models. In particular, this work improves simulations of convective-radiative feedbacks and the role of cumulus clouds in mediating the diurnal cycle of rainfall. Three key contributions are made to the development of RegCM3-IBIS. Two pieces of work relate directly to the formation and dissipation of convective clouds: a new representation of convective cloud cover, and a new parameterization of convective rainfall production. These formulations only contain parameters that can be directly quantified from observational data, are independent of model user choices such as domain size or resolution, and explicitly account for subgrid variability in cloud water content and nonlinearities in rainfall production. The third key piece of work introduces a new method for representation of cloud formation within the boundary layer. A comprehensive evaluation of the improved model was undertaken using a range of satellite-derived and ground-based datasets, including a new dataset from Singapore's Changi airport that documents diurnal variation of the local boundary layer height. The performance of RegCM3-IBIS with the new formulations is greatly improved across all evaluation metrics, including cloud cover, cloud liquid water, radiative fluxes and rainfall, indicating consistent improvement in physical realism throughout the simulation. 
This work demonstrates that: (1) moist convection strongly influences the near surface environment by mediating the incoming solar radiation and net radiation at the surface; (2) dissipation of convective cloud via rainfall plays an equally important role in the convective-radiative feedback as the formation of that cloud; and (3) over parts of the Maritime Continent, rainfall is a product of diurnally-varying convective processes that operate at small spatial scales, on the order of 1 km. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  6. The Boy Factor: Can Single-Gender Classes Reduce the Over-Representation of Boys in Special Education?

    ERIC Educational Resources Information Center

    Piechura-Couture, Kathy; Heins, Elizabeth; Tichenor, Mercedes

    2013-01-01

    Since the early 1990s, numerous studies have concluded that there is an over-representation of males and minorities in special education. This paper examines whether a different educational format, such as single-gender education, can improve boys' behavior and thus reduce the number of special education referrals. The rationale for…

  7. The Boy Factor: Can Single-Gender Classes Reduce the Over-Representation of Boys in Special Education?

    ERIC Educational Resources Information Center

    Piechura-Couture, Kathy; Heins, Elizabeth; Tichenor, Mercedes

    2011-01-01

    Since the early 1990s, numerous studies have concluded that there is an over-representation of males and minorities in special education. This paper examines whether a different educational format, such as single-gender education, can improve boys' behavior and thus reduce the number of special education referrals. The rationale for…

  8. Low-frequency chimeric yeast artificial chromosome libraries from flow-sorted human chromosomes 16 and 21.

    PubMed Central

    McCormick, M K; Campbell, E; Deaven, L; Moyzis, R

    1993-01-01

    Construction of chromosome-specific yeast artificial chromosome (YAC) libraries from sorted chromosomes was undertaken (i) to eliminate drawbacks associated with first-generation total genomic YAC libraries, such as the high frequency of chimeric YACs, and (ii) to provide an alternative method for generating chromosome-specific YAC libraries in addition to isolating such collections from a total genomic library. Chromosome-specific YAC libraries highly enriched for human chromosomes 16 and 21 were constructed. By maximizing the percentage of fragments with two ligatable ends and performing yeast transformations with less than saturating amounts of DNA in the presence of carrier DNA, YAC libraries with a low percentage of chimeric clones were obtained. The smaller number of YAC clones in these chromosome-specific libraries reduces the effort involved in PCR-based screening and allows hybridization methods to be a manageable screening approach. PMID:8430075

  9. Efficient preparation of shuffled DNA libraries through recombination (Gateway) cloning.

    PubMed

    Lehtonen, Soili I; Taskinen, Barbara; Ojala, Elina; Kukkurainen, Sampo; Rahikainen, Rolle; Riihimäki, Tiina A; Laitinen, Olli H; Kulomaa, Markku S; Hytönen, Vesa P

    2015-01-01

    Efficient and robust subcloning is essential for the construction of high-diversity DNA libraries in the field of directed evolution. We have developed a more efficient method for the subcloning of DNA-shuffled libraries by employing recombination cloning (Gateway). The Gateway cloning procedure was performed directly after the gene reassembly reaction, without additional purification and amplification steps, thus simplifying the conventional DNA shuffling protocols. Recombination-based cloning, directly from the heterologous reassembly reaction, conserved the high quality of the library and reduced the time required for the library construction. The described method is generally compatible for the construction of DNA-shuffled gene libraries.

  10. Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.

    PubMed

    Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide

    2011-01-01

    Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population of those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.

  11. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    PubMed

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

    DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The main current prediction methods are based on machine learning, and their accuracy depends largely on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a mixed feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, Information theory, and Sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. A support vector machine is used as the classifier. The mixed feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from the combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed feature method performs better than the non-reduced mixed feature technique. Feature vectors combining SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.
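    The K-Skip-N-Grams features mentioned above count short subsequences whose elements need not be adjacent. A minimal sketch, using one common definition in which up to k positions may be skipped in total (the paper's exact parameterization may differ):

```python
from itertools import combinations

def k_skip_n_grams(seq, k, n):
    """Collect all n-grams of `seq` whose positions fit in a window of
    length n + k, i.e. grams with at most k skipped positions in total."""
    grams = set()
    for start in range(len(seq)):
        window = seq[start:start + n + k]
        for idx in combinations(range(len(window)), n):
            if idx[0] == 0:  # anchor at window start to avoid duplicates
                grams.add(tuple(window[i] for i in idx))
    return grams
```

For a protein fragment "ABCD" with n = 2 and k = 1, this yields the adjacent bigrams plus the one-skip pairs, which is the usual enrichment skip-grams provide over plain n-grams.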

  12. The Effect of Economic Inflation on Local Public Library Support in Kentucky, 1967-1976.

    ERIC Educational Resources Information Center

    Smith, Robert C.

    This study was developed on the hypothesis that economic inflation reduced the purchasing power of local support for public libraries from 1967 through fiscal 1976. The total local support for public libraries in each county in each year of the study was adjusted from the reported nominal amount to the relative Consumer Price Index value of 1967…

  13. Stability-Diversity Tradeoffs Impose Fundamental Constraints on Selection of Synthetic Human VH/VL Single-Domain Antibodies from In Vitro Display Libraries.

    PubMed

    Henry, Kevin A; Kim, Dae Young; Kandalaft, Hiba; Lowden, Michael J; Yang, Qingling; Schrag, Joseph D; Hussack, Greg; MacKenzie, C Roger; Tanha, Jamshid

    2017-01-01

    Human autonomous VH/VL single-domain antibodies (sdAbs) are attractive therapeutic molecules but often suffer from suboptimal stability, solubility, and affinity for cognate antigens. Most commonly, human sdAbs have been isolated from in vitro display libraries constructed via synthetic randomization of rearranged VH/VL domains. Here, we describe the design and characterization of three novel human VH/VL sdAb libraries through a process of: (i) exhaustive biophysical characterization of 20 potential VH/VL sdAb library scaffolds, including assessment of expression yield, aggregation resistance, thermostability, and tolerance to complementarity-determining region (CDR) substitutions; (ii) in vitro randomization of the CDRs of three VH/VL sdAb scaffolds, with tailored amino acid representation designed to promote solubility and expressibility; and (iii) systematic benchmarking of the three VH/VL libraries by panning against five model antigens. We isolated ≥1 antigen-specific human sdAb against four of five targets (13 VHs and 7 VLs in total); these were predominantly monomeric and had antigen-binding affinities ranging from 5 nM to 12 µM (average: 2-3 µM), but had highly variable expression yields (range: 0.1-19 mg/L). Despite our efforts to identify the most stable VH/VL scaffolds, selection of antigen-specific binders from these libraries was unpredictable (overall success rate for all library-target screens: ~53%), with a high attrition rate of sdAbs exhibiting false positive binding by ELISA. By analyzing VH/VL sdAb library sequence composition following selection for monomeric antibody expression (binding to protein A/L followed by amplification in bacterial cells), we found that some VH/VL sdAbs had marked growth advantages over others, and that the amino acid composition of the CDRs of this set of sdAbs was dramatically restricted (biased toward Asp and His and away from aromatic and hydrophobic residues). Thus, CDR sequence dramatically impacts the stability of human autonomous VH/VL immunoglobulin domain folds, and sequence-stability tradeoffs must be taken into account during the design of such libraries.
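    The "tailored amino acid representation" in step (ii) can be pictured as weighted random sampling of CDR positions. The sketch below is purely illustrative: the residue weights are hypothetical stand-ins biased toward hydrophilic residues, not the paper's actual tailored distributions.

```python
import random

# Hypothetical per-residue weights favoring hydrophilic amino acids
# (illustrative only; not the published library design).
WEIGHTS = {"S": 5, "Y": 4, "D": 3, "G": 3, "N": 2, "T": 2, "A": 1, "R": 1}

def sample_cdr(length, rng=random):
    """Draw one randomized CDR loop of the given length."""
    residues, weights = zip(*WEIGHTS.items())
    return "".join(rng.choices(residues, weights=weights, k=length))

# Build a small in silico library of unique CDR-H3-like loops.
rng = random.Random(0)
library = {sample_cdr(8, rng) for _ in range(1000)}
```

Biasing the sampling distribution, rather than using flat NNK-style randomization, is one way such designs trade raw diversity for expressibility.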

  14. On the genre-fication of music: a percolation approach

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Ausloos, M.

    2006-03-01

    We analyze web-downloaded data on people sharing their music libraries. By attributing usual music genres (Rock, Pop, ...) to each music group, and analyzing correlations between music groups of different genres with percolation-based methods, we probe the reality of these subdivisions and construct a music genre cartography with a tree representation. We also discuss an alternative, objective way to classify music that is based on the complex structure of each group's audience. Finally, a link is drawn with the theory of hidden variables in complex networks.
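    Percolation-based clustering of this kind can be sketched as thresholding pairwise audience correlations and taking connected components with union-find; as the threshold is lowered, clusters merge, which is what yields the tree (dendrogram) representation. The similarity values below are hypothetical toy data, not the paper's:

```python
def percolation_clusters(items, similarity, threshold):
    """Union-find over items; link pairs whose similarity >= threshold,
    then return the connected components (percolation clusters)."""
    parent = {x: x for x in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), s in similarity.items():
        if s >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for x in items:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

# Toy audience-overlap correlations (hypothetical values):
sim = {("rock", "metal"): 0.8, ("pop", "dance"): 0.7, ("rock", "pop"): 0.2}
groups = percolation_clusters(["rock", "metal", "pop", "dance"], sim, 0.5)
```

At threshold 0.5 this gives two clusters; dropping the threshold below 0.2 would merge everything into one, one level higher in the tree.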

  15. Proceedings of the Conference on Behavior Representation in Modeling and Simulation (19th), held in Charleston, South Carolina, 21 - 24 March 2010

    DTIC Science & Technology

    2010-03-01

    …user performance." "Wonderful. I can clearly see how, as a practitioner in industry, I can apply this to the numerous projects I work on."…effects on performance, libraries of basic human operator procedures (how-to knowledge), and geometries for building scenarios graphically (that…

  16. PedVizApi: a Java API for the interactive, visual analysis of extended pedigrees.

    PubMed

    Fuchsberger, Christian; Falchi, Mario; Forer, Lukas; Pramstaller, Peter P

    2008-01-15

    PedVizApi is a Java API (application program interface) for the visual analysis of large and complex pedigrees. It provides all the necessary functionality for the interactive exploration of extended genealogies. While available packages are mostly focused on a static representation or cannot be added to an existing application, PedVizApi is a highly flexible open source library for the efficient construction of visual-based applications for the analysis of family data. An extensive demo application and an R interface are provided. http://www.pedvizapi.org

  17. Consequences of Normalizing Transcriptomic and Genomic Libraries of Plant Genomes Using a Duplex-Specific Nuclease and Tetramethylammonium Chloride

    PubMed Central

    Froenicke, Lutz; Lavelle, Dean; Martineau, Belinda; Perroud, Bertrand; Michelmore, Richard

    2013-01-01

    Several applications of high throughput genome and transcriptome sequencing would benefit from a reduction of the high-copy-number sequences in the libraries being sequenced and analyzed, particularly when applied to species with large genomes. We adapted and analyzed the consequences of a method that utilizes a thermostable duplex-specific nuclease for reducing the high-copy components in transcriptomic and genomic libraries prior to sequencing. This reduces the time, cost, and computational effort of obtaining informative transcriptomic and genomic sequence data for both fully sequenced and non-sequenced genomes. It also reduces contamination from organellar DNA in preparations of nuclear DNA. Hybridization in the presence of 3 M tetramethylammonium chloride (TMAC), which equalizes the rates of hybridization of GC and AT nucleotide pairs, reduced the bias against sequences with high GC content. Consequences of this method on the reduction of high-copy and enrichment of low-copy sequences are reported for Arabidopsis and lettuce. PMID:23409088

  18. Consequences of normalizing transcriptomic and genomic libraries of plant genomes using a duplex-specific nuclease and tetramethylammonium chloride.

    PubMed

    Matvienko, Marta; Kozik, Alexander; Froenicke, Lutz; Lavelle, Dean; Martineau, Belinda; Perroud, Bertrand; Michelmore, Richard

    2013-01-01

    Several applications of high throughput genome and transcriptome sequencing would benefit from a reduction of the high-copy-number sequences in the libraries being sequenced and analyzed, particularly when applied to species with large genomes. We adapted and analyzed the consequences of a method that utilizes a thermostable duplex-specific nuclease for reducing the high-copy components in transcriptomic and genomic libraries prior to sequencing. This reduces the time, cost, and computational effort of obtaining informative transcriptomic and genomic sequence data for both fully sequenced and non-sequenced genomes. It also reduces contamination from organellar DNA in preparations of nuclear DNA. Hybridization in the presence of 3 M tetramethylammonium chloride (TMAC), which equalizes the rates of hybridization of GC and AT nucleotide pairs, reduced the bias against sequences with high GC content. Consequences of this method on the reduction of high-copy and enrichment of low-copy sequences are reported for Arabidopsis and lettuce.

  19. A FORTRAN source library for quaternion algebra. Application to multicomponent seismic data

    NASA Astrophysics Data System (ADS)

    Benaïssa, A.; Benaïssa, Z.; Ouadfeul, S.

    2012-04-01

    Quaternions, also called hypercomplex numbers, consist of a real part and three imaginary parts and allow a representation of multi-component physical signals in geophysics. Programming new applications and extending existing programs to quaternions requires enhancing the capabilities of FORTRAN. In this study, we develop, in FORTRAN 95, a source library that provides functions and subroutines making the development and maintenance of programs devoted to quaternions equivalent to those developed for the complex plane. The systematic use of generic functions and generic operators (1) allows FORTRAN statements and operators to be extended to quaternions without renaming them and (2) makes the use of these statements transparent to the specificity of quaternions. The portability of this library is ensured by strict adherence to the FORTRAN 95 standard, which is independent of the operating system (OS). The execution time of quaternion applications, sometimes crucial for huge data sets, depends generally on compiler optimizations such as inlining and parallelization. To demonstrate the use of the library, the Fourier transform of a real one-dimensional quaternionic seismic signal is presented. Furthermore, a FORTRAN code that computes the quaternionic singular value decomposition (QSVD) is developed using the proposed library and applied to wave separation in multicomponent vertical seismic profile (VSP) synthetic and real data. The extracted wavefields are highly enhanced compared to those obtained with a median filter, because the QSVD takes into account the correlation between the different components of the seismic signal. Taken together, these results demonstrate that the use of quaternions can bring significant improvement to some processing of three- or four-component seismic data. Keywords: Quaternion - FORTRAN - Vectorial processing - Multicomponent signal - VSP - Fourier transform.
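    The library itself is written in FORTRAN 95, but the algebra it encapsulates is easy to illustrate in any language. A minimal Python sketch of the Hamilton product (i² = j² = k² = ijk = −1), which any such quaternion library must implement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    # q = w + x*i + y*j + z*k
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, q):
        """Hamilton product; non-commutative (i*j = k but j*i = -k)."""
        return Quaternion(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w,
        )

    def conjugate(self):
        return Quaternion(self.w, -self.x, -self.y, -self.z)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
```

The generic-operator approach described in the abstract does the same thing in FORTRAN 95: overloading `*` so that statements written for COMPLEX work unchanged for quaternions.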

  20. Burnout and the Library Administrator: Carrier or Cure.

    ERIC Educational Resources Information Center

    Smith, Nathan M.; And Others

    1988-01-01

    Discussion of burnout among library personnel includes a susceptibility profile, indicators of burnout, and administrative contributors. Techniques by which administrators can reduce stress are suggested, including participative management; improved communications; staff development; informal staff gatherings; staff meetings; flexible work…

  1. Representations and evolutionary operators for the scheduling of pump operations in water distribution networks.

    PubMed

    López-Ibáñez, Manuel; Prasad, T Devi; Paechter, Ben

    2011-01-01

    Reducing the energy consumption of water distribution networks has never had more significance. The greatest energy savings can be obtained by carefully scheduling the operations of pumps. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels; or explicitly, by specifying the time during which each pump is on/off. The traditional representation of explicit schedules is a string of binary values with each bit representing pump on/off status during a particular time interval. In this paper, we formally define and analyze two new explicit representations based on time-controlled triggers, where the maximum number of pump switches is established beforehand and the schedule may contain fewer than the maximum number of switches. In these representations, a pump schedule is divided into a series of integers with each integer representing the number of hours for which a pump is active/inactive. This reduces the number of potential schedules compared to the binary representation, and allows the algorithm to operate on the feasible region of the search space. We propose evolutionary operators for these two new representations. The new representations and their corresponding operations are compared with the two most-used representations in pump scheduling, namely, binary representation and level-controlled triggers. A detailed statistical analysis of the results indicates which parameters have the greatest effect on the performance of evolutionary algorithms. The empirical results show that an evolutionary algorithm using the proposed representations is an improvement over the results obtained by a recent state of the art hybrid genetic algorithm for pump scheduling using level-controlled triggers.
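    The duration-based representation described above can be expanded back into the traditional binary string for simulation. A minimal sketch, assuming alternating inactive/active run-lengths over a 24-hour horizon (the function name and conventions are illustrative, not the paper's):

```python
def durations_to_binary(durations, horizon=24, start_on=False):
    """Expand alternating off/on durations (in hours) for one pump into
    an hourly on/off schedule of length `horizon`."""
    schedule, on = [], start_on
    for hours in durations:
        schedule.extend([int(on)] * hours)
        on = not on  # each duration boundary is one pump switch
    # pad with the final state, or trim, to fit the scheduling horizon
    schedule += [int(on)] * max(0, horizon - len(schedule))
    return schedule[:horizon]

# Off 3 h, on 5 h, off 4 h, on 12 h: three switches in total.
hourly = durations_to_binary([3, 5, 4, 12])
```

Because the number of integers bounds the number of switches, the duration encoding enumerates a far smaller space than the 2²⁴ binary strings, which is the point made in the abstract.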

  2. Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences

    NASA Astrophysics Data System (ADS)

    Smith, P., II

    2015-12-01

    Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates that federally funded projects make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences, such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers of EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to issues of interoperability in data access, discovery, and integration between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and the provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and the linking of data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice, contributing to improved understanding of the Earth's physical and biological systems. The USGS CDI SSF can be used as a reference model to map to EarthCube-funded projects, with academic research libraries facilitating the data and information assets components of the USGS CDI SSF via institutional repositories and/or digital content management. This session will explore the USGS CDI SSF for cross-discipline collaboration considerations from a library perspective.

  3. Method for making 2-electron response reduced density matrices approximately N-representable

    NASA Astrophysics Data System (ADS)

    Lanssens, Caitlin; Ayers, Paul W.; Van Neck, Dimitri; De Baerdemacker, Stijn; Gunst, Klaas; Bultinck, Patrick

    2018-02-01

    In methods like geminal-based approaches or coupled cluster that are solved using the projected Schrödinger equation, direct computation of the 2-electron reduced density matrix (2-RDM) is impractical and one falls back to a 2-RDM based on response theory. However, the 2-RDMs from response theory are not N-representable. That is, the response 2-RDM does not correspond to an actual physical N-electron wave function. We present a new algorithm for making these non-N-representable 2-RDMs approximately N-representable, i.e., it has the right symmetry and normalization and it fulfills the P-, Q-, and G-conditions. Next to an algorithm which can be applied to any 2-RDM, we have also developed a 2-RDM optimization procedure specifically for seniority-zero 2-RDMs. We aim to find the 2-RDM with the right properties which is the closest (in the sense of the Frobenius norm) to the non-N-representable 2-RDM by minimizing the square norm of the difference between this initial response 2-RDM and the targeted 2-RDM under the constraint that the trace is normalized and the 2-RDM, Q-matrix, and G-matrix are positive semidefinite, i.e., their eigenvalues are non-negative. Our method is suitable for fixing non-N-representable 2-RDMs which are close to being N-representable. Through the N-representability optimization algorithm we add a small correction to the initial 2-RDM such that it fulfills the most important N-representability conditions.
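    The core geometric step in such procedures, finding the Frobenius-nearest positive semidefinite matrix, amounts to zeroing negative eigenvalues in the eigendecomposition. A toy sketch for a symmetric 2×2 matrix with a closed-form eigendecomposition (the paper's algorithm additionally enforces trace normalization and the P-, Q-, and G-conditions, which this sketch omits):

```python
import math

def nearest_psd_2x2(a, b, d):
    """Frobenius-nearest PSD matrix to the symmetric matrix [[a, b], [b, d]]:
    diagonalize, clip negative eigenvalues to zero, reconstruct."""
    mean, diff = (a + d) / 2, (a - d) / 2
    r = math.hypot(diff, b)
    lo, hi = mean - r, mean + r          # the two eigenvalues
    # unit eigenvector for `hi` (handle the already-diagonal case b == 0)
    if b == 0:
        v = (1.0, 0.0) if a >= d else (0.0, 1.0)
    else:
        norm = math.hypot(hi - d, b)
        v = ((hi - d) / norm, b / norm)
    lo, hi = max(lo, 0.0), max(hi, 0.0)  # clip the spectrum to >= 0
    u = (-v[1], v[0])                    # eigenvector for `lo` (orthogonal)
    return [[hi*v[0]*v[0] + lo*u[0]*u[0], hi*v[0]*v[1] + lo*u[0]*u[1]],
            [hi*v[1]*v[0] + lo*u[1]*u[0], hi*v[1]*v[1] + lo*u[1]*u[1]]]
```

For n×n matrices the same clipping is applied to a full eigendecomposition; adding the trace and positivity constraints simultaneously is what turns this into the constrained optimization described in the abstract.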

  4. TRADOC Library and Information Network (TRALINET)

    DTIC Science & Technology

    1979-03-01

    …classification by the Library of Congress, Dewey Decimal, or any other scheme adopted…materials that have been photographically reduced in size for…sites at Forts Hood, TX; Gordon, GA; Monroe, VA; Knox, KY; and Leavenworth, KS. DTIC, formerly the Defense Documentation Center (DDC), serves as the primary…locally expanded subject schedules, whether the schedules are for Dewey, Library of Congress, etc., particularly in the area of Military Arts and Sciences.

  5. Semantic e-Science: From Microformats to Models

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Freemantle, J. R.; Aldridge, K. D.

    2009-05-01

    A platform has been developed to transform semi-structured ASCII data into a representation based on the eXtensible Markup Language (XML). A subsequent transformation allows the XML-based representation to be rendered in the Resource Description Framework (RDF). Editorial metadata, expressed as external annotations (via XML Pointer Language), also survives this transformation process (e.g., Lumb et al., http://dx.doi.org/10.1016/j.cageo.2008.03.009). Because the XML-to-RDF transformation uses XSLT (eXtensible Stylesheet Language Transformations), semantic microformats ultimately encode the scientific data (Lumb & Aldridge, http://dx.doi.org/10.1109/HPCS.2006.26). In building the relationship-centric representation in RDF, a Semantic Model of the scientific data is extracted. The systematic enhancement in the expressivity and richness of the scientific data results in representations of knowledge that are readily understood and manipulated by intelligent software agents. Thus scientists are able to draw upon various resources within and beyond their discipline to use in their scientific applications. Since the resulting Semantic Models are independent conceptualizations of the science itself, the representation of scientific knowledge and interaction with the same can stimulate insight from different perspectives. Using the Global Geodynamics Project (GGP) for the purpose of illustration, the introduction of GGP microformats enable a Semantic Model for the GGP that can be semantically queried (e.g., via SPARQL, http://www.w3.org/TR/rdf-sparql-query). Although the present implementation uses the Open Source Redland RDF Libraries (http://librdf.org/), the approach is generalizable to other platforms and to projects other than the GGP (e.g., Baker et al., Informatics and the 2007-2008 Electronic Geophysical Year, Eos Trans. Am. Geophys. Un., 89(48), 485-486, 2008).
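    The platform's first step, semi-structured ASCII to XML, can be sketched with the standard library alone. The real pipeline uses XSLT and XPointer annotations, and the element names below are illustrative, not the GGP schema:

```python
import xml.etree.ElementTree as ET

def ascii_to_xml(text, root_tag="record"):
    """Turn 'key: value' lines of a semi-structured ASCII record into an
    XML element (tag names and structure are illustrative assumptions)."""
    root = ET.Element(root_tag)
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        child = ET.SubElement(root, key.strip().replace(" ", "_"))
        child.text = value.strip()
    return root

xml_str = ET.tostring(ascii_to_xml("station: CA01\nlatitude: 34.1"),
                      encoding="unicode")
```

Once data is regularized into XML like this, a generic XSLT stylesheet can lift it to RDF triples, which is the transformation the abstract describes.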

  6. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing

    PubMed Central

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and MeSH side. For the citation side, all existing methods, including Medical Text Indexer (MTI) by National Library of Medicine and the state-of-the-art method, MeSHLabeler, deal with text by bag-of-words, which cannot capture semantic and context-dependent information well. Methods: We propose DeepMeSH that incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges in both citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the ‘learning to rank’ framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. Results: DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than 0.6218 of MeSHLabeler and 12% higher than 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. Availability and Implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307646

  7. Mapping of the Academic Production at Science and Mathematics Education Postgraduate about the Theory of Social Representations

    NASA Astrophysics Data System (ADS)

    Barbosa, José Isnaldo de Lima; Curi, Edda; Voelzke, Marcos Rincon

    2016-12-01

    The theory of social representations, which appeared in 1961, arrived in Brazil in 1982 and has since advanced significantly, being used in various areas of knowledge and assuming a significant role in education. The aim of this article is to map the theses and dissertations from postgraduate programs whose core area is the Teaching of Science and Mathematics and that used the theory of social representations as their theoretical foundation, highlighting the social groups that are the subjects of this research. This is a documentary study, a "state of knowledge" survey of two theses and 36 dissertations defended in ten of the 37 existing programs in the core area of Science and Mathematics Teaching, restricted to academic master's and doctoral degrees. Data collection was carried out in December 2014 in the virtual libraries of these master's and doctoral programs, and the material was analyzed according to categories established after reading the abstracts of the works. The results showed that the theory of social representations has been used as a theoretical framework by various research groups in postgraduate programs in this area across almost all of Brazil. As for the subjects involved in this research, three groups were detected: middle and high school students; practicing teachers, from the early years through higher education; and undergraduates in Science and Mathematics.

  8. Towards the construction of high-quality mutagenesis libraries.

    PubMed

    Li, Heng; Li, Jing; Jin, Ruinan; Chen, Wei; Liang, Chaoning; Wu, Jieyuan; Jin, Jian-Ming; Tang, Shuang-Yan

    2018-07-01

    To improve the quality of mutagenesis libraries used in directed evolution strategies. In the process of library transformation, transformants that have been shown to take up more than one plasmid may constitute more than 20% of the constructed library, extensively impairing library quality. We propose a practical transformation method to prevent the occurrence of multiple-plasmid transformants while maintaining high transformation efficiency. A visual library model containing plasmids expressing different fluorescent proteins was used. Multiple-plasmid transformants can be reduced by optimizing the amount of plasmid DNA used for transformation, based on the positive correlation between the occurrence frequency of multiple-plasmid transformants and the logarithmic ratio of plasmid molecules to competent cells. This method provides a simple solution for a seemingly common but often neglected problem and should be valuable for improving the quality of mutagenesis libraries to enhance the efficiency of directed evolution strategies.
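    The reported dependence of multiple-plasmid frequency on the plasmid-to-cell ratio is what a simple Poisson uptake model would predict. The sketch below is such a model, offered as an illustrative assumption rather than the paper's own analysis:

```python
import math

def multi_plasmid_fraction(plasmids_per_cell):
    """Assume each competent cell takes up N ~ Poisson(lam) plasmids,
    with lam = the plasmid:cell ratio. Among *transformed* cells (N >= 1),
    return the fraction carrying more than one plasmid:
        P(N >= 2 | N >= 1) = 1 - P(N = 1) / P(N >= 1)."""
    lam = plasmids_per_cell
    p_ge1 = 1 - math.exp(-lam)
    p_eq1 = lam * math.exp(-lam)
    return 1 - p_eq1 / p_ge1

# Lowering the plasmid:cell ratio sharply reduces multi-plasmid transformants:
for ratio in (2.0, 0.5, 0.05):
    print(f"{ratio:>5}: {multi_plasmid_fraction(ratio):.1%}")
```

Under this model the multi-plasmid fraction falls from tens of percent at ratios near 1 to a few percent at ratios well below 1, at the cost of fewer total transformants, which is the trade-off the optimization targets.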

  9. Adiabatic quantum-flux-parametron cell library adopting minimalist design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeuchi, Naoki, E-mail: takeuchi-naoki-kx@ynu.jp; Yamanashi, Yuki; Yoshikawa, Nobuyuki

    We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.

  10. Adiabatic quantum-flux-parametron cell library adopting minimalist design

    NASA Astrophysics Data System (ADS)

    Takeuchi, Naoki; Yamanashi, Yuki; Yoshikawa, Nobuyuki

    2015-05-01

    We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.

  11. Reducing Noise in a College Library.

    ERIC Educational Resources Information Center

    Luyben, Paul D.; And Others

    1981-01-01

    Discusses an experiment on controlling library noise by rearrangement of furniture groupings and the separation of existing clusters of furniture. While electromechanical tests showed no significant differences, user measures indicated more acceptable noise levels. There are numerous illustrations and 30 references. (RAA)

  12. ABJM Wilson loops in arbitrary representations

    NASA Astrophysics Data System (ADS)

    Hatsuda, Yasuyuki; Honda, Masazumi; Moriyama, Sanefumi; Okuyama, Kazumi

    2013-10-01

    We study vacuum expectation values (VEVs) of circular half BPS Wilson loops in arbitrary representations in ABJM theory. We find that those in hook representations are reduced to elementary integrations thanks to the Fermi gas formalism, which are accessible from the numerical studies similar to the partition function in the previous studies. For non-hook representations, we show that the VEVs in the grand canonical formalism can be exactly expressed as determinants of those in the hook representations. Using these facts, we can study the instanton effects of the VEVs in various representations. Our results are consistent with the worldsheet instanton effects studied from the topological string and a prescription to include the membrane instanton effects by shifting the chemical potential, which has been successful for the partition function.
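    Schematically, the determinant statement for a non-hook representation λ can be written in Frobenius notation λ = (a₁,…,a_r | b₁,…,b_r) as follows (this paraphrases the abstract's claim; normalization conventions are omitted):

```latex
\left\langle W_{\lambda} \right\rangle^{\mathrm{GC}}
  = \det\!\left[ \left\langle W_{(a_i \mid b_j)} \right\rangle^{\mathrm{GC}} \right]_{1 \le i,j \le r},
\qquad \lambda = (a_1, \dots, a_r \mid b_1, \dots, b_r),
```

where the superscript GC denotes VEVs in the grand canonical formalism and (a|b) is the hook representation with arm length a and leg length b; this reduces every non-hook VEV to the elementary integrations available for hooks.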

  13. owlcpp: a C++ library for working with OWL ontologies.

    PubMed

    Levin, Mikhail K; Cowell, Lindsay G

    2015-01-01

    The increasing use of ontologies highlights the need for a library for working with ontologies that is efficient, accessible from various programming languages, and compatible with common computational platforms. We developed owlcpp, a library for storing and searching RDF triples, parsing RDF/XML documents, converting triples into OWL axioms, and reasoning. The library is written in ISO-compliant C++ to facilitate efficiency, portability, and accessibility from other programming languages. Internally, owlcpp uses the Raptor RDF Syntax library for parsing RDF/XML and the FaCT++ library for reasoning. The current version of owlcpp is supported under Linux, OSX, and Windows platforms and provides an API for Python. The results of our evaluation show that, compared to other commonly used libraries, owlcpp is significantly more efficient in terms of memory usage and searching RDF triple stores. owlcpp performs strict parsing and detects errors ignored by other libraries, thus reducing the possibility of incorrect semantic interpretation of ontologies. owlcpp is available at http://owl-cpp.sf.net/ under the Boost Software License, Version 1.0.

  14. Changes in deep-sea carbonate-hosted microbial communities associated with high and low methane flux

    NASA Astrophysics Data System (ADS)

    Case, D. H.; Steele, J. A.; Chadwick, G.; Mendoza, G. F.; Levin, L. A.; Orphan, V. J.

    2012-12-01

    Methane seeps on continental shelves are rich in authigenic carbonates built of methane-derived carbon. These authigenic carbonates are home to micro- and macroscopic communities whose compositions are thus far poorly constrained but are known to broadly depend on local methane flux. The formation of authigenic carbonates is itself a result of microbial metabolic activity, as associations of anaerobic methane oxidizing archaea (ANME) and sulfate reducing bacteria (SRB) in the sediment subsurface increase both dissolved inorganic carbon (DIC) and alkalinity in pore waters. This 1:1 increase in DIC and alkalinity promotes the precipitation of authigenic carbonates. In this study, we performed in situ manipulations to test the response of micro- and macrofaunal communities to a change in methane flux. Methane-derived authigenic carbonates from two locations at Hydrate Ridge, OR, USA (depth range 595-604 mbsl), were transplanted from "active" cold seep sites (high methane flux) to "inactive" background sites (low methane flux), and vice versa, for one year. Community diversity surveys using T-RFLP and 16S rRNA clone libraries revealed how both bacterial and archaeal assemblages respond to this change in local environment, specifically demonstrating reproducible shifts in different ANME groups (ANME-1 vs. ANME-2). Animal assemblage composition also shifted during transplantation; gastropod representation increased (relative to control rocks) when substrates were moved from inactive to active sites and polychaete, crustacean and echinoderm representation increased when substrates were moved from active to inactive sites. Combined with organic and inorganic carbon δ13C measurements and mineralogy, this unique in situ experiment demonstrates that authigenic carbonates are viable habitats, hosting microbial and macrofaunal communities capable of responding to changes in external environment over relatively short time periods.

  15. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag: an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag-of-fragments": a vector that counts the number of occurrences of each fragment. We measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine over the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
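
    The bag-of-fragments comparison is simple enough to sketch directly. In this illustrative Python fragment (the fragment ids and the tiny 5-fragment library are made up; real FragBag libraries contain far more fragments), each protein becomes a count vector and similarity is measured by the cosine between vectors:

```python
import math
from collections import Counter

def bag_of_fragments(fragment_ids, library_size):
    """Count how often each library fragment best matches a backbone segment."""
    counts = Counter(fragment_ids)
    return [counts.get(i, 0) for i in range(library_size)]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy proteins given as the library fragment ids assigned to their segments.
prot_a = bag_of_fragments([0, 1, 1, 2, 4], 5)
prot_b = bag_of_fragments([0, 1, 1, 2, 2], 5)
prot_c = bag_of_fragments([3, 3, 3, 4, 4], 5)

sim_ab = cosine_similarity(prot_a, prot_b)
sim_ac = cosine_similarity(prot_a, prot_c)
```

    The sparse count vectors are also what makes the inverted index mentioned in the abstract possible: each fragment id maps to the list of proteins containing it.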

  16. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) also makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities in combination with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed, and made available to the research community, an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
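
    In its simplest form, the motion-along-a-georeferenced-path effect reduces to interpolating a position along a polyline of waypoints as a function of animation time. A minimal Python sketch of that step (the waypoint coordinates are illustrative; a real implementation would clamp altitude to the terrain and interpolate geodesically):

```python
def position_on_path(waypoints, t):
    """Position at fraction t in [0, 1] along a polyline of (lon, lat, alt)
    waypoints, parameterized uniformly per segment."""
    if t <= 0.0:
        return waypoints[0]
    if t >= 1.0:
        return waypoints[-1]
    s = t * (len(waypoints) - 1)   # position measured in segments
    i = int(s)
    f = s - i                      # fraction within segment i
    a, b = waypoints[i], waypoints[i + 1]
    return tuple(av + f * (bv - av) for av, bv in zip(a, b))

# Illustrative path: three waypoints climbing over a 50 m rise.
path = [(23.72, 37.97, 0.0), (23.73, 37.98, 50.0), (23.74, 37.97, 0.0)]
midpoint = position_on_path(path, 0.5)
```

    Calling this once per rendered frame with an increasing t is essentially what a WebGL framework does when it animates a model along a path.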

  17. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
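
    Of the two sampling strategies, proper orthogonal decomposition (POD) is the easier to illustrate: the reduced basis is taken from the dominant left singular vectors of a snapshot matrix. The following numpy sketch uses synthetic rank-3 snapshots in place of actual mixed GMsFE solutions, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 40 "solutions" (columns), each a combination of
# three underlying spatial modes plus tiny noise. In the paper's setting the
# columns would be solutions computed at sampled random parameters.
modes = rng.standard_normal((100, 3))
coeffs = rng.standard_normal((3, 40))
snapshots = modes @ coeffs + 1e-8 * rng.standard_normal((100, 40))

# POD: the dominant left singular vectors of the snapshot matrix form a
# reduced basis that no longer depends on the individual parameter samples.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # smallest rank with 99.99% energy
basis = U[:, :r]

# Relative error of projecting the snapshots onto the reduced space.
err = np.linalg.norm(snapshots - basis @ (basis.T @ snapshots)) / np.linalg.norm(snapshots)
```

    Truncating at a fixed energy fraction is one common rank-selection rule; the greedy sampling and cross-validation strategies in the paper refine how the snapshots themselves are chosen.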

  18. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  19. Brain Friendly School Libraries

    ERIC Educational Resources Information Center

    Sykes, Judith Anne

    2006-01-01

    This title gives concrete practical examples of how to align school library programs and instructional practice with the six key concepts of brain-compatible learning: increasing input to the brain; increasing experiential data; multiple source feedback; reducing threat; involving students in learning decision making; and interdisciplinary unit…

  20. Libpsht - algorithms for efficient spherical harmonic transforms

    NASA Astrophysics Data System (ADS)

    Reinecke, M.

    2011-02-01

    Libpsht (or "library for performant spherical harmonic transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports both transforms of scalars and spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP, and ECP). It will take advantage of hardware features such as multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time, as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2 and can be downloaded from .

  1. Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

    NASA Astrophysics Data System (ADS)

    Reinecke, Martin

    2010-10-01

    Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2. Development on this project has ended; its successor is libsharp (ascl:1402.033).

  2. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.

  3. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE PAGES

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia; ...

    2017-07-27

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.
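
    Finding (1) envisions OB models exchanged as standardized XML documents. The Python sketch below shows what consuming such a document could look like; the element and attribute names are invented for illustration and do not correspond to any published schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML instance of a common occupant-behavior data model:
# "switch the lights on when illuminance drops below 300 lux".
OB_XML = """\
<OccupantBehaviorModel name="LightSwitching">
  <Drivers>
    <Driver type="Environment" variable="Illuminance" unit="lux"/>
  </Drivers>
  <Action system="Lighting" state="On">
    <Condition operator="lessThan" threshold="300"/>
  </Action>
</OccupantBehaviorModel>
"""

def load_ob_model(text):
    """Parse one OB model document into a plain dict that a BPS program
    (or a functional mock-up wrapper) could consume."""
    root = ET.fromstring(text)
    return {
        "name": root.get("name"),
        "drivers": [d.attrib for d in root.iter("Driver")],
        "action": root.find("Action").attrib,
        "condition": root.find("Action/Condition").attrib,
    }

model = load_ob_model(OB_XML)
```

    Because every BPS program would parse the same document structure, the same OB model definition could be shared across simulation tools, which is exactly the interoperability argument the review makes.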

  4. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    PubMed

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
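
    The constraints listed above are easy to state in code. This Python sketch uses naïve rejection sampling; its quadratic Hamming-distance check against all accepted barcodes is precisely the bottleneck the authors' framework avoids at the scale of millions of barcodes. Parameter values here are illustrative, not the paper's:

```python
import itertools
import random

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    return max(len(list(run)) for _, run in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def generate_barcodes(n, length, min_dist=3, seed=0):
    """Rejection sampling: keep a candidate only if it meets the composition
    constraints and is at least min_dist away from every accepted barcode.
    The pairwise distance check makes this quadratic in library size."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        cand = "".join(rng.choice("ACGT") for _ in range(length))
        if not 0.4 <= gc_content(cand) <= 0.6:   # balanced GC content
            continue
        if max_homopolymer(cand) > 2:            # no runs like AAA or CCC
            continue
        if any(hamming(cand, b) < min_dist for b in accepted):
            continue
        accepted.append(cand)
    return accepted

codes = generate_barcodes(20, 10)
```

    A blacklisted-subsequence filter (e.g. rejecting restriction sites) would be one more `continue` clause in the same loop.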

  5. Next-generation sequencing library construction on a surface.

    PubMed

    Feng, Kuan; Costa, Justin; Edwards, Jeremy S

    2018-05-30

    Next-generation sequencing (NGS) has revolutionized almost all fields of biology, agriculture and medicine, and is widely utilized to analyse genetic variation. Over the past decade, the NGS pipeline has been steadily improved, and the entire process is currently relatively straightforward. However, NGS instrumentation still requires upfront library preparation, which can be a laborious process requiring significant hands-on time. Herein, we present a simple but robust approach to streamline library preparation by utilizing surface-bound transposases to construct DNA libraries directly on a flowcell surface. The surface-bound transposases directly fragment genomic DNA while simultaneously attaching the library molecules to the flowcell. We sequenced and analysed a Drosophila genome library generated by this surface tagmentation approach, and we showed that our surface-bound library quality was comparable to that of a library from a commercial kit. In addition to the time and cost savings, our approach does not require PCR amplification of the library, which eliminates potential problems associated with PCR duplicates. To our knowledge, this is the first study to construct libraries directly on a flowcell. We believe our technique could be incorporated into the existing Illumina sequencing pipeline to simplify the workflow, reduce costs, and improve data quality.

  6. Refactoring DIRT

    NASA Astrophysics Data System (ADS)

    Amarnath, N. S.; Pound, M. W.; Wolfire, M. G.

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 4 years. DIRT uses results from a number of numerical models of astrophysical processes, and has an AWT-based user interface. DIRT has been refactored to decouple data representation from plotting and curve fitting. This makes it easier to add new kinds of astrophysical models, use the plotter in other applications, migrate the user interface to Swing components, and modify the user interface to add functionality (for example, SIRTF tools). DIRT is now an extension of two generic libraries, one of which manages data representation and caching, while the second manages plotting and curve fitting. This project is an example of refactoring with no impact on the user interface, so the existing user community was not affected.

  7. Developing Historic Building Information Modelling Guidelines and Procedures for Architectural Heritage in Ireland

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.

    2017-08-01

    Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model comprises intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture/digital surveying/processing, (b) developing a library of architectural components and (c) mapping these architectural components onto the laser scan or digital survey to create the intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software and skills. The testing of open BIM approaches, in particular IFCs, and the use of game engine platforms is a fundamental component for developing much wider dissemination. The semantically enriched model can be transferred into a WEB-based game engine platform.

  8. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    NASA Astrophysics Data System (ADS)

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

    The authors present a method to reverse engineer 3D-printer-specific machine instructions (GCode) into a point-cloud representation and then an STL (stereolithography) file. GCode is a machine code used for 3D printing, among other applications such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator; in the case of Fused Deposition Modeling (FDM), this is the printhead that extrudes semi-molten plastic. The reverse engineering method presented here is based on digital simulation of the extrusion process of FDM-type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open-source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model’s extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
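
    The first step of such a reconstruction, recovering deposition points from GCode moves, can be sketched compactly. This Python fragment is not the authors' implementation; it assumes absolute coordinates and treats every extruding G1 move as depositing material at its endpoint, ignoring the relative/modal GCode variants a full parser would handle:

```python
def gcode_to_points(gcode):
    """Walk G0/G1 moves in absolute coordinates and record the printhead
    position after every extruding G1 move (E value increased), producing a
    crude point cloud of where material was deposited."""
    x = y = z = 0.0
    last_e = 0.0
    points = []
    for raw in gcode.splitlines():
        line = raw.split(";", 1)[0].strip()       # strip comments
        if not (line.startswith("G0 ") or line.startswith("G1 ")):
            continue
        words = {w[0]: float(w[1:]) for w in line.split()[1:]}
        x = words.get("X", x)
        y = words.get("Y", y)
        z = words.get("Z", z)
        e = words.get("E", last_e)
        if line.startswith("G1") and e > last_e:  # material was extruded
            points.append((x, y, z))
        last_e = e
    return points

SAMPLE = """G0 X0 Y0 Z0.2
G1 X10 Y0 E1.0
G1 X10 Y10 E2.0
G0 X0 Y0
G1 X0 Y10 E3.0"""
pts = gcode_to_points(SAMPLE)
```

    Distinguishing extruding from travel moves by the E value is the key trick: only the former contribute to the reconstructed surface.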

  9. Generalized Pauli constraints in reduced density matrix functional theory.

    PubMed

    Theophilou, Iris; Lathiotakis, Nektarios N; Marques, Miguel A L; Helbig, Nicole

    2015-04-21

    Functionals of the one-body reduced density matrix (1-RDM) are routinely minimized under Coleman's ensemble N-representability conditions. Recently, the topic of pure-state N-representability conditions, also known as generalized Pauli constraints, received increased attention following the discovery of a systematic way to derive them for any number of electrons and any finite dimensionality of the Hilbert space. The target of this work is to assess the potential impact of the enforcement of the pure-state conditions on the results of reduced density-matrix functional theory calculations. In particular, we examine whether the standard minimization of typical 1-RDM functionals under the ensemble N-representability conditions violates the pure-state conditions for prototype 3-electron systems. We also enforce the pure-state conditions, in addition to the ensemble ones, for the same systems and functionals and compare the correlation energies and optimal occupation numbers with those obtained by the enforcement of the ensemble conditions alone.
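
    For the prototype 3-electron systems mentioned, with three electrons in six orbitals, the pure-state conditions are the classic Borland-Dennis constraints. A minimal Python check of whether a set of natural occupation numbers satisfies them (this is an illustration of the conditions, not the functional-minimization code used in the paper):

```python
def borland_dennis_violation(occupations):
    """Largest violation of the pure-state (generalized Pauli) conditions for
    three electrons in six orbitals, i.e. the Borland-Dennis constraints
        n1 + n6 = n2 + n5 = n3 + n4 = 1   and   n4 <= n5 + n6,
    with the natural occupation numbers sorted in decreasing order.
    Returns 0.0 when the occupations are pure-state N-representable."""
    n = sorted(occupations, reverse=True)
    equalities = (abs(n[0] + n[5] - 1.0),
                  abs(n[1] + n[4] - 1.0),
                  abs(n[2] + n[3] - 1.0))
    inequality = max(n[3] - (n[4] + n[5]), 0.0)
    return max(max(equalities), inequality)

# Satisfies both the ensemble conditions (0 <= ni <= 1, sum = 3) and the
# pure-state conditions ...
ok = borland_dennis_violation([0.9, 0.9, 0.9, 0.1, 0.1, 0.1])
# ... versus ensemble-valid occupations that violate n4 <= n5 + n6.
bad = borland_dennis_violation([1.0, 1.0, 0.5, 0.5, 0.0, 0.0])
```

    A check like this, applied to the occupation numbers obtained from an ensemble-constrained minimization, is conceptually what the paper's first question amounts to.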

  10. A simplified formalism of the algebra of partially transposed permutation operators with applications

    NASA Astrophysics Data System (ADS)

    Mozrzymas, Marek; Studziński, Michał; Horodecki, Michał

    2018-03-01

    Herein we continue the study of the representation theory of the algebra of permutation operators acting on the n-fold tensor product space, partially transposed on the last subsystem. We develop the concept of partially reduced irreducible representations, which allows us to significantly simplify previously proved theorems and, most importantly, derive new results for irreducible representations of the mentioned algebra. In our analysis we are able to reduce the complexity of the central expressions by getting rid of sums over all permutations from the symmetric group, obtaining equations which are much more handy in practical applications. We also find relatively simple matrix representations for the generators of the underlying algebra. The obtained simplifications and developments are applied to derive the characteristics of a deterministic port-based teleportation scheme written purely in terms of irreducible representations of the studied algebra. We solve an eigenproblem for the generators of the algebra, which is the first step towards a hybrid port-based teleportation scheme and gives us new proofs of the asymptotic behaviour of teleportation fidelity. We also show a connection between the density operator characterising port-based teleportation and a particular matrix composed of an irreducible representation of the symmetric group, which encodes properties of the investigated algebra.

  11. Study of college library appealing information system: A case of Longyan University

    NASA Astrophysics Data System (ADS)

    Liao, Jin-Hui

    2014-10-01

    Complaints from readers at university libraries mainly concern service attitude, quality of service, the reading environment, the management system, etc. Librarians should realize that reader complaints can actually promote the library's service, and should communicate in a friendly manner with readers who complain. In addition, the Longyan University library should establish an internal management system, improve library hardware facilities, improve the quality of librarians and optimize their knowledge structure, so as to improve the quality of service for readers and reduce complaints. On this basis, we have designed an appeal (complaint) information system built on cryptographic mechanisms to provide readers with online, remote and anonymous complaint functions.

  12. Evolving the Living With a Star Data System Definition

    NASA Astrophysics Data System (ADS)

    Otranto, J. F.; Dijoseph, M.

    2003-12-01

    NASA's Living With a Star (LWS) Program is a space weather-focused and applications-driven research program. The LWS Program is soliciting input from the solar, space physics, space weather, and climate science communities to develop a system that enables access to science data associated with these disciplines, and advances the development of discipline and interdisciplinary findings. The LWS Program will implement a data system that builds upon the existing and planned data capture, processing, and storage components put in place by individual spacecraft missions and also inter-project data management systems, including active and deep archives, and multi-mission data repositories. It is technically feasible for the LWS Program to integrate data from a broad set of resources, assuming they are either publicly accessible or allow access by permission. The LWS Program data system will work in coordination with spacecraft mission data systems and science data repositories, integrating their holdings using a common metadata representation. This common representation relies on a robust metadata definition that provides journalistic and technical data descriptions, plus linkages to supporting data products and tools. The LWS Program intends to become an enabling resource to PIs, interdisciplinary scientists, researchers, and students facilitating both access to a broad collection of science data, as well as the necessary supporting components to understand and make productive use of these data. For the LWS Program to represent science data that are physically distributed across various ground system elements, information will be collected about these distributed data products through a series of LWS Program-created agents. These agents will be customized to interface or interact with each one of these data systems, collect information, and forward any new metadata records to a LWS Program-developed metadata library. 
A populated LWS metadata library will function as a single point-of-contact that serves the entire science community as a first stop for data availability, whether or not science data are physically stored in an LWS-operated repository. Further, this metadata library will provide the user access to information for understanding these data including descriptions of the associated spacecraft and instrument, data format, calibration and operations issues, links to ancillary and correlative data products, links to processing tools and models associated with these data, and any corresponding findings produced using these data. The LWS may also support an active archive for solar, space physics, space weather, and climate data when these data would otherwise be discarded or archived off-line. This archive could potentially serve also as a data storage backup facility for LWS missions. The plan for the LWS Program metadata library is developed based upon input received from the solar and geospace science communities; the library's architecture is based on existing systems developed for serving science metadata. The LWS Program continues to seek constructive input from the science community, examples of both successes and failures in dealing with science data systems, and insights regarding the obstacles between the current state-of-the-practice and this vision for the LWS Program metadata library.

  13. Visual representations in science education: The influence of prior knowledge and cognitive load theory on instructional design principles

    NASA Astrophysics Data System (ADS)

    Cook, Michelle Patrick

    2006-11-01

    Visual representations are essential for communicating ideas in the science classroom; however, the design of such representations is not always beneficial for learners. This paper presents instructional design considerations providing empirical evidence and integrating theoretical concepts related to cognitive load. Learners have a limited working memory, and instructional representations should be designed with the goal of reducing unnecessary cognitive load. However, cognitive architecture alone is not the only factor to be considered; individual differences, especially prior knowledge, are critical in determining what impact a visual representation will have on learners' cognitive structures and processes. Prior knowledge can determine the ease with which learners can perceive and interpret visual representations in working memory. Although a long tradition of research has compared experts and novices, more research is necessary to fully explore the expert-novice continuum and maximize the potential of visual representations.

  14. Alternative transitions between existing representations in multi-scale maps

    NASA Astrophysics Data System (ADS)

    Dumont, Marion; Touya, Guillaume; Duchêne, Cécile

    2018-05-01

    Map users may have difficulty achieving multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations and to ease transitions across scales. We consider an existing multi-scale map covering the scale range from 1:25k to 1:100k. Based on hypotheses about the design of intermediate representations, we build custom multi-scale maps with alternative transitions. In the near future, we will conduct a user evaluation to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and the production process of these alternative maps.

  15. Increasing the use of 'smart' pump drug libraries by nurses: a continuous quality improvement project.

    PubMed

    Harding, Andrew D

    2012-01-01

    The use of infusion pumps that incorporate "smart" technology (smart pumps) can reduce the risks associated with receiving IV therapies. Smart pump technology incorporates safeguards such as a list of high-alert medications, soft and hard dosage limits, and a drug library that can be tailored to specific patient care areas. Its use can help to improve patient safety and to avoid the potentially catastrophic harm associated with medication errors. But when one independent community hospital in Massachusetts switched from older mechanical pumps to smart pumps, it neglected to assign an "owner" to oversee the implementation process. One result was that nurses were using the smart pump library for only 37% of all infusions. To increase the percentage of infusions using the pump library, and thereby reduce the risks associated with infusion and improve patient safety, the hospital undertook a continuous quality improvement project over a four-month period in 2009. With the involvement of direct care nurses, and using quantitative data available from the smart pump software, the nursing quality and pharmacy quality teams identified ways to improve pump and pump library use. A secondary goal was to calculate the hospital's return on investment for the purchase of the smart pumps. Several interventions were developed and, on the first of each month, implemented. By the end of the project, pump library usage had nearly doubled, and the hospital had completely recouped its initial investment.

  16. Toward An Unstructured Mesh Database

    NASA Astrophysics Data System (ADS)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge no database vendor supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GrAL: dedicated libraries that implement mesh algorithms over particular mesh representations. These libraries do not scale with dataset size, lack a declarative query language, and require deep C++ knowledge to implement queries. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes.
We proposed the ImG-Complexes data model, a generic topological mesh data model that extends the incidence graph model to multi-incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for a specific class of object such as manifold, pseudo-manifold, and simplicial manifold. We conducted experiments to measure the performance of a graph database solution in processing mesh queries and compared it with the GrAL mesh library and the PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries; e.g., graph databases perform well on global path-intensive queries. In future work, we will investigate database operations for the ImG model and design a mesh query language.
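
    The incidence-graph idea underlying such data models can be sketched in a few lines of Python. The structure below is purely illustrative (it is not the authors' ImG-Complexes implementation): cells are identified by string ids, incidence edges connect boundary cells to the cells they bound, and an "incident cells" query is answered by walking down to the shared boundary and back up.

```python
from collections import defaultdict

class IncidenceMesh:
    """Minimal incidence-graph mesh: incidence edges link each boundary
    cell (e.g. an edge) to the higher-dimensional cells it bounds."""

    def __init__(self):
        self.bounds = defaultdict(set)      # boundary cell -> cells it bounds
        self.bounded_by = defaultdict(set)  # cell -> its boundary cells

    def add_incidence(self, lower, upper):
        self.bounds[lower].add(upper)
        self.bounded_by[upper].add(lower)

    def incident_cells(self, cell):
        """Cells sharing at least one boundary cell with `cell`."""
        out = set()
        for b in self.bounded_by[cell]:
            out |= self.bounds[b]
        out.discard(cell)
        return out

# Two triangles t0 and t1 sharing edge e1
mesh = IncidenceMesh()
for edge, tri in [("e0", "t0"), ("e1", "t0"), ("e2", "t0"),
                  ("e1", "t1"), ("e3", "t1"), ("e4", "t1")]:
    mesh.add_incidence(edge, tri)

print(mesh.incident_cells("t0"))  # {'t1'}
```

    A declarative mesh query language, as proposed in the abstract, would compile queries like "all cells incident to X" down to exactly this kind of traversal.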

  17. Analysis of the Library Situation in Latin America 1969.

    ERIC Educational Resources Information Center

    Organization of American States, Washington, DC. Library Development Program.

    The modern library is an institution that supplies many information services by efficiently organizing universal knowledge that has been reduced to the printed word. Unfortunately, the Latin American countries have not developed centralized services and programs with respect to bibliography, cataloging, exchange, reprography, production of library…

  18. Developing Crash-Resistant Electronic Services.

    ERIC Educational Resources Information Center

    Almquist, Arne J.

    1997-01-01

    Libraries' dependence on computers can lead to frustrations for patrons and staff during downtime caused by computer system failures. Advice for reducing the number of crashes is provided, focusing on improved training for systems staff, better management of library systems, and the development of computer systems using quality components which…

  19. Investigation of Techniques to Reduce Electrostatic Discharge Susceptibility of Hermetically Sealed EEDS

    DTIC Science & Technology

    1975-07-03

    …explosive output requirement. The substitution of RD 1333 lead azide for dextrinated 1 (Figure 8c) did not improve the output, but the modified charge…

  20. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities the agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, the generalized knowledge repository, based on an ontology library, enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  1. 3Dmol.js: molecular visualization with WebGL.

    PubMed

    Rego, Nicholas; Koes, David

    2015-04-15

    3Dmol.js is a modern, object-oriented JavaScript library that uses the latest web technologies to provide interactive, hardware-accelerated three-dimensional representations of molecular data without the need to install browser plugins or Java. 3Dmol.js provides a full featured API for developers as well as a straightforward declarative interface that lets users easily share and embed molecular data in websites. 3Dmol.js is distributed under the permissive BSD open source license. Source code and documentation can be found at http://3Dmol.csb.pitt.edu. Contact: dkoes@pitt.edu. © The Author 2014. Published by Oxford University Press.

  2. SimITK: visual programming of the ITK image-processing library within Simulink.

    PubMed

    Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin

    2014-04-01

    The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert the respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information, which needs to be refined from an initial state before being reflected in the final block representation. The primary result of the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
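
    The "Virtual Block" pattern described above (a wrapper that converts between a filter's internal data types and the host environment's native types) can be sketched generically. The Python toy below uses only hypothetical names, not SimITK's actual API: plain lists stand in for native Simulink arrays, and a dict stands in for an internal image type.

```python
class VirtualBlock:
    """Generic wrapper: convert a native input to the wrapped filter's
    internal type, run the filter, then convert the result back."""

    def __init__(self, filter_fn):
        self.filter_fn = filter_fn

    def to_internal(self, native):
        # native list -> internal "image" representation
        return {"data": list(native)}

    def to_native(self, internal):
        # internal "image" -> native list
        return internal["data"]

    def run(self, native_input):
        return self.to_native(self.filter_fn(self.to_internal(native_input)))

def threshold(image, level=128):
    """A stand-in filter operating on the internal image type."""
    image["data"] = [255 if v >= level else 0 for v in image["data"]]
    return image

block = VirtualBlock(threshold)
print(block.run([10, 200, 128, 50]))  # [0, 255, 255, 0]
```

    The host environment only ever sees native lists; all type resolution happens at the block boundary, which is the essence of the wrapper approach the abstract describes.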

  3. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed

    O'Neill, M A; Hilgetag, C C

    2001-08-29

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
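
    The cost-function-driven restructuring of candidate arrangements described above can be illustrated with a generic stochastic-optimization sketch. The following simulated annealing routine is an illustrative stand-in in Python, not the CANTOR implementation; the cost function is user-supplied, exactly as in the system described.

```python
import math
import random

def anneal(items, cost, steps=5000, t0=1.0, seed=0):
    """Rearrange `items` to minimize a user-supplied cost function via
    simulated annealing: random swaps, always accepting non-worse moves
    and occasionally accepting worse ones at high temperature."""
    rng = random.Random(seed)
    order = list(items)
    cur = cost(order)
    best, best_order = cur, order[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new = cost(order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return best_order, best

# Toy cost: number of out-of-order pairs, so the optimum is sorted order
nums = [5, 2, 9, 1, 7]
inversions = lambda o: sum(o[i] > o[j]
                           for i in range(len(o)) for j in range(i + 1, len(o)))
arrangement, c = anneal(nums, inversions)
print(arrangement, c)
```

    Swapping the toy cost for one measuring, say, wiring length between connected brain areas turns the same loop into a structural-analysis tool, which is the flexibility the abstract attributes to CANTOR.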

  4. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed Central

    O'Neill, M A; Hilgetag, C C

    2001-01-01

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement. PMID:11545702

  5. BrAD-seq: Breath Adapter Directional sequencing: a streamlined, ultra-simple and fast library preparation protocol for strand specific mRNA library construction.

    PubMed

    Townsley, Brad T; Covington, Michael F; Ichihashi, Yasunori; Zumstein, Kristina; Sinha, Neelima R

    2015-01-01

    Next Generation Sequencing (NGS) is driving rapid advancement in biological understanding, and RNA-sequencing (RNA-seq) has become an indispensable tool for biology and medicine. There is a growing need for access to these technologies, although preparation of NGS libraries remains a bottleneck to wider adoption. Here we report a novel method for the production of strand-specific RNA-seq libraries utilizing the terminal breathing of double-stranded cDNA to capture and incorporate a sequencing adapter. Breath Adapter Directional sequencing (BrAD-seq) reduces sample handling and requires far fewer enzymatic steps than most available methods to produce high-quality strand-specific RNA-seq libraries. The method we present is optimized for 3-prime Digital Gene Expression (DGE) libraries, can easily be extended to full-transcript-coverage shotgun (SHO)-type strand-specific libraries, and is modularized to accommodate a diversity of RNA and DNA input materials. BrAD-seq offers a highly streamlined and inexpensive option for RNA-seq libraries.

  6. Research of Uncertainty Reasoning in Pineapple Disease Identification System

    NASA Astrophysics Data System (ADS)

    Liu, Liqun; Fan, Haifeng

    In order to deal with the uncertainty of the evidence that pervades a pineapple disease identification system, a reasoning model based on evidence credibility factors was established. The uncertainty reasoning method is discussed, including: the uncertain representation of knowledge, the uncertain representation of rules, the uncertain representation of multiple pieces of evidence, and the updating of reasoning rules. The reasoning can fully reflect the uncertainty in disease identification and reduce the influence of subjective factors on the accuracy of the system.
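
    One standard scheme for combining uncertain evidence of this kind is the classic MYCIN-style certainty-factor combination, shown below for illustration; the paper's exact credibility-factor model may differ.

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis
    (MYCIN-style rule): agreement reinforces, disagreement attenuates."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# two independent pieces of evidence, each supporting a disease at 0.6
print(combine_cf(0.6, 0.6))  # 0.84
```

    Note how two moderately credible pieces of evidence yield a higher combined credibility than either alone, while conflicting evidence pulls the result toward zero, which is the behavior an uncertain diagnosis system needs.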

  7. Indoor air pollution and preventions in college libraries

    NASA Astrophysics Data System (ADS)

    Yang, Zengzhang

    2017-05-01

    The college library is a place with a comparatively high density of students, many of whom stay there for long periods. The indoor air quality therefore directly affects the reading effectiveness and physical health of teachers and students in colleges and universities. This paper analyzes the factors influencing indoor air pollution in the library from six aspects: selecting green, environmentally friendly decorating materials and furniture; maintaining good ventilation; reducing electromagnetic radiation; disinfecting regularly; adding indoor greenery; and strengthening awareness of health and environmental protection. It puts forward ideas for preventing indoor air pollution and building a green, low-carbon library.

  8. Representation control increases task efficiency in complex graphical representations.

    PubMed

    Moritz, Julia; Meyerhoff, Hauke S; Meyer-Dernbecher, Claudia; Schwan, Stephan

    2018-01-01

    In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such representation control (i.e. the users' option to decide how information should be displayed) in order to accomplish an information extraction task, in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients.

  9. Representation control increases task efficiency in complex graphical representations

    PubMed Central

    Meyerhoff, Hauke S.; Meyer-Dernbecher, Claudia; Schwan, Stephan

    2018-01-01

    In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such representation control (i.e. the users' option to decide how information should be displayed) in order to accomplish an information extraction task, in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients. PMID:29698443

  10. Technostress: What Is It? How Can You Learn to Live with It?

    ERIC Educational Resources Information Center

    Smallwood, Carol

    1996-01-01

    Provides suggestions that can reduce stress for library personnel. Discusses strategic planning tasks, coping with patrons, releasing hostile feelings, recognizing no-win situations, keeping in good health, observing/changing personal habits, alternating mental and physical jobs, creating a library policy, selecting and arranging computer…

  11. The Frugal Librarian: Thriving in Tough Economic Times

    ERIC Educational Resources Information Center

    Smallwood, Carol, Ed.

    2011-01-01

    Fewer employees, shorter hours, diminished collection budgets, reduced programs and services--all at a time of record library usage. In this book, library expert Carol Smallwood demonstrates that despite the obvious downsides, the necessity of doing business differently can be positive, leading to partnering, sharing, and innovating. This…

  12. Phage selection of peptide "microantibodies".

    PubMed

    Fujiwara, Daisuke; Fujii, Ikuo

    2013-01-01

    A bioactive peptide capable of inhibiting protein-protein interactions has the potential to be a molecular tool for biological studies and a therapeutic that disrupts aberrant interactions involved in disease. We have developed combinatorial libraries of peptides with a helix-loop-helix structure; peptides isolated from these libraries have a constrained structure that reduces the entropic cost of binding, resulting in high binding affinities for target molecules. Previously, we designed a de novo peptide with a helix-loop-helix structure that we termed a "microantibody." Using the microantibody as a library scaffold, we constructed a phage-display library and successfully isolated molecular-targeting peptides against a cytokine receptor (granulocyte colony-stimulating factor receptor), a protein kinase (Aurora-A), and a ganglioside (GM1). The protocols in this article describe a general procedure for library construction and screening.

  13. Composing compound libraries for hit discovery--rationality-driven preselection or random choice by structural diversity?

    PubMed

    Weidel, Elisabeth; Negri, Matthias; Empting, Martin; Hinsberger, Stefan; Hartmann, Rolf W

    2014-01-01

    In order to identify new scaffolds for drug discovery, surface plasmon resonance is frequently used to screen structurally diverse libraries. Usually, hit rates are low and identification processes are time consuming. Hence, approaches which improve hit rates and, thus, reduce the library size are required. In this work, we studied three often used strategies for their applicability to identify inhibitors of PqsD. In two of them, target-specific aspects like inhibition of a homologous protein or predicted binding determined by virtual screening were used for compound preselection. Finally, a fragment library, covering a large chemical space, was screened and served as comparison. Indeed, higher hit rates were observed for methods employing preselected libraries indicating that target-oriented compound selection provides a time-effective alternative.

  14. The impact of social inequalities on children's knowledge and representation of health and cancer.

    PubMed

    Régnier Denois, Véronique; Bourmaud, Aurelie; Nekaa, Mabrouk; Bezzaz, Céline; Bousser, Véronique; Kalecinski, Julie; Dumesnil, Julia; Tinquaut, Fabien; Berger, Dominique; Chauvin, Franck

    2018-05-28

    Reducing inequalities in the field of cancer involves studying children's knowledge and mental representations of cancer. A qualitative study was conducted with 191 children aged 9 to 12, using the "write and draw" technique to elicit spontaneous mental representations of "healthy things", "unhealthy things", and "cancer". We grouped the voluntary schools according to two deprivation levels. In response to the request to "write or draw anything you think keeps you healthy", the main response categories were physical activity, healthy food, and basic needs. Smoking, drinking alcohol, and sedentary lifestyles/lack of sport were identified as "unhealthy". The first theme associated with "cancer" was the "cancer site", implying that children have a segmented perception of cancer. Deprived children have radically different views about the key items representing cancer: they are more likely to believe the illness is systematically deadly. They are less likely to believe it is a treatable illness. They are less likely to associate cancer with risky behaviors, particularly alcohol consumption. Social inequalities affect representations of cancer and health literacy from early childhood. Prevention programs taking these representations into account need to be introduced at school. What is Known: • Social inequalities in cancer mortality are observed in all European countries and are particularly pronounced in France. • Reducing these inequalities through prevention programs implies studying the knowledge and mental representations of cancer among children. What is New: • This study identified representations of cancer in young children according to social level. • At age 9, children living in deprived areas are less able to produce content in discussions about cancer and have narrower mental representations and a more fatalistic view.

  15. Consistent maximum entropy representations of pipe flow networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.
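
    The maximum entropy principle the paper applies can be illustrated on a toy discrete problem; this is a generic sketch of relative-entropy maximization, not the authors' reduced-parameter network formulation. Maximizing the relative entropy subject to normalization and one mean constraint yields a Gibbs-form solution, and the single Lagrange multiplier can be found by bisection.

```python
import math

def maxent(q, x, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Choose p maximizing -sum(p_i * ln(p_i / q_i)) subject to
    sum(p) = 1 and sum(p_i * x_i) = target_mean.  The optimum has the
    Gibbs form p_i ∝ q_i * exp(lam * x_i); the constrained mean is
    monotone increasing in lam, so solve for lam by bisection."""
    def dist(lam):
        w = [qi * math.exp(lam * xi) for qi, xi in zip(q, x)]
        z = sum(w)
        return [wi / z for wi in w]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        mean = sum(pi * xi for pi, xi in zip(dist(mid), x))
        if mean < target_mean:
            lo = mid
        else:
            hi = mid
    return dist((lo + hi) / 2)

# uniform prior over four flow states, constrained to a mean "flow" of 2.0
p = maxent(q=[0.25, 0.25, 0.25, 0.25], x=[0.0, 1.0, 2.0, 3.0], target_mean=2.0)
print(p)
```

    The paper's contribution is choosing *which* parameters the entropy is defined over: defining it on a reduced set makes the inferred distribution independent of how the same network happens to be represented.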

  16. Constructing high complexity synthetic libraries of long ORFs using in vitro selection

    NASA Technical Reports Server (NTRS)

    Cho, G.; Keefe, A. D.; Liu, R.; Wilson, D. S.; Szostak, J. W.

    2000-01-01

    We present a method that can significantly increase the complexity of protein libraries used for in vitro or in vivo protein selection experiments. Protein libraries are often encoded by chemically synthesized DNA, in which part of the open reading frame is randomized. There are, however, major obstacles associated with the chemical synthesis of long open reading frames, especially those containing random segments. Insertions and deletions that occur during chemical synthesis cause frameshifts, and stop codons in the random region will cause premature termination. These problems can together greatly reduce the number of full-length synthetic genes in the library. We describe a strategy in which smaller segments of the synthetic open reading frame are selected in vitro using mRNA display for the absence of frameshifts and stop codons. These smaller segments are then ligated together to form combinatorial libraries of long uninterrupted open reading frames. This process can increase the number of full-length open reading frames in libraries by up to two orders of magnitude, resulting in protein libraries with complexities of greater than 10^13. We have used this methodology to generate three types of displayed protein library: a completely random sequence library, a library of concatemerized oligopeptide cassettes with a propensity for forming amphipathic alpha-helical or beta-strand structures, and a library based on one of the most common enzymatic scaffolds, the alpha/beta (TIM) barrel. Copyright 2000 Academic Press.
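
    The in-frame selection idea can be sketched computationally: simulate error-prone synthesis, then keep only segments that retained their full length (no frameshifting deletion) and contain no in-frame stop codon. This is a toy stand-in for the mRNA-display selection, with illustrative error rates and lengths.

```python
import random

STOP_CODONS = {"TAA", "TAG", "TGA"}

def passes_selection(seq, expected_len):
    """A segment survives only if it kept its full length (no frameshift)
    and has no in-frame stop codon."""
    if len(seq) != expected_len or expected_len % 3 != 0:
        return False
    return all(seq[i:i + 3] not in STOP_CODONS
               for i in range(0, len(seq), 3))

def synthesize(expected_len, rng, deletion_rate=0.1):
    """Random DNA segment; occasionally drop a base to mimic synthesis errors."""
    seq = "".join(rng.choice("ACGT") for _ in range(expected_len))
    return seq[:-1] if rng.random() < deletion_rate else seq

rng = random.Random(0)
library = [synthesize(30, rng) for _ in range(1000)]
selected = [s for s in library if passes_selection(s, 30)]
print(f"{len(selected)} of {len(library)} segments pass selection")
```

    Ligating short pre-selected segments pays off because survival probability decays multiplicatively with length: selecting ten 30-base segments independently and joining them retains far more full-length 300-base frames than synthesizing them in one piece.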

  17. What Is Your Library Worth? Extension Uses Public Value Workshops in Communities

    ERIC Educational Resources Information Center

    Haskell, Jane E.; Morse, George W.

    2015-01-01

    Public libraries are seeing flat or reduced funding even as demands for new services are increasing. Facing an identical problem, Extension developed a program to identify the indirect benefits to non-participants of Extension programs in order to encourage their public funding support. This educational approach was customized to public libraries…

  18. The Advance of Computing from the Ground to the Cloud

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2009-01-01

    A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

  19. Off-Line Catalog Production.

    ERIC Educational Resources Information Center

    OCLC Online Computer Library Center, Inc., Dublin, OH.

    The Ohio College Library Center's off-line catalog system is a limited technique for the production of card catalogs. Unlike the on-line system, it cannot make the resources of a region available to users in an individual institution, and it does not have the potential for significantly reducing the rate of rise of library per-unit costs. In short, it is…

  20. The Utility of Electronic Mail Follow-Ups for Library Research.

    ERIC Educational Resources Information Center

    Roselle, Ann; Neufeld, Steven

    1998-01-01

    A survey of academic librarians determined that the use of e-mail in the follow-up stage of a library research project using mailed questionnaires was as effective as postal mail in speed and size of response. Discusses additional benefits (interpersonal communication, reduced time and costs) and drawbacks (time spent identifying messages…

  1. Wrestling with a Trojan Horse: Outsourcing Cataloging in Academic and Special Libraries.

    ERIC Educational Resources Information Center

    Abel-Kops, Chad P.

    2000-01-01

    Focuses on the issue of outsourcing cataloging in academic and special libraries. Examines the goal of outsourcing to increase production at reduced costs; a case of outsourcing at Wright State University of Ohio; confidentiality issues of outsourcing by law firms and other corporations; contracts; debates on "whether" and "what"…

  2. Books Matter: The Place of Traditional Books in Tomorrow's Library

    ERIC Educational Resources Information Center

    Megarrity, Lyndon

    2010-01-01

    People who love books can find entering an Australian library in the so-called "cyber-age" to be an unsettling experience. The first thing you notice is the reduced emphasis on book shelves in favour of empty but architecturally pleasing "public spaces", comfortable cushions, computer terminals, sometimes even new cafes and…

  3. Placing the Library at the Heart of Plagiarism Prevention: The University of Bradford Experience

    ERIC Educational Resources Information Center

    George, Sarah; Costigan, Anne; O'Hara, Maria

    2013-01-01

    Plagiarism is a vexing issue for Higher Education, affecting student transition, retention, and attainment. This article reports on two initiatives from the University of Bradford library aimed at reducing student plagiarism. The first initiative is an intensive course for students who have contravened plagiarism regulations. The second course…

  4. OCLC Research: 2012 Activity Report

    ERIC Educational Resources Information Center

    OCLC Online Computer Library Center, Inc., 2013

    2013-01-01

    The mission of the Online Computer Library Center (OCLC) Research is to expand knowledge that advances OCLC's public purposes of furthering access to the world's information and reducing library costs. OCLC Research is dedicated to three roles: (1) To act as a community resource for shared research and development (R&D); (2) To provide advanced…

  5. Essential issues in the design of shared document/image libraries

    NASA Astrophysics Data System (ADS)

    Gladney, Henry M.; Mantey, Patrick E.

    1990-08-01

    We consider what is needed to create electronic document libraries which mimic physical collections of books, papers, and other media. The quantitative measures of merit for personal workstations-cost, speed, size of volatile and persistent storage-will improve by at least an order of magnitude in the next decade. Every professional worker will be able to afford a very powerful machine, but databases and libraries are not really economical and useful unless they are shared. We therefore see a two-tier world emerging, in which custodians of information make it available to network-attached workstations. A client-server model is the natural description of this world. In collaboration with several state governments, we have considered what would be needed to replace paper-based record management for a dozen different applications. We find that a professional worker can anticipate most data needs and that (s)he is interested in each clump of data for a period of days to months. We further find that only a small fraction of any collection will be used in any period. Given expected bandwidths, data sizes, search times and costs, and other such parameters, an effective strategy to support user interaction is to bring large clumps from their sources, to transform them into convenient representations, and only then start whatever investigation is intended. A system-managed hierarchy of caches and archives is indicated. Each library is a combination of a catalog and a collection, and each stored item has a primary instance which is the standard by which the correctness of any copy is judged. Catalog records mostly refer to 1 to 3 stored items. Weighted by the number of bytes to be stored, immutable data dominate collections. These characteristics affect how consistency, currency, and access control of replicas distributed in the network should be managed. We present the large features of a design for network document/image library services. A prototype is being built for State of California pilot applications. The design allows library servers in any environment with an ANSI SQL database; clients execute in any environment; communications are with either TCP/IP or SNA LU 6.2.

  6. Small RNA analysis in Petunia hybrida identifies unusual tissue-specific expression patterns of conserved miRNAs and of a 24mer RNA

    PubMed Central

    Tedder, Philip; Zubko, Elena; Westhead, David R.; Meyer, Peter

    2009-01-01

    Two pools of small RNAs were cloned from inflorescences of Petunia hybrida using a 5′-ligation dependent and a 5′-ligation independent approach. The two libraries were integrated into a public website that allows the screening of individual sequences against 359,769 unique clones. The library contains 15 clones with 100% identity and 53 clones with one mismatch to miRNAs described for other plant species. For two conserved miRNAs, miR159 and miR390, we find clear differences in tissue-specific distribution, compared with other species. This shows that evolutionary conservation of miRNA sequences does not necessarily include a conservation of the miRNA expression profile. Almost 60% of all clones in the database are 24-nucleotide clones. In accordance with the role of 24mers in marking repetitive regions, we find them distributed across retroviral and transposable element sequences but other 24mers map to promoter regions and to different transcript regions. For one target region we observe tissue-specific variation of matching 24mers, which demonstrates that, as for 21mers, 24mer concentrations are not necessarily identical in different tissues. Asymmetric distribution of a putative novel miRNA in the two libraries suggests that the cloning method can be selective for the representation of certain small RNAs in a collection. PMID:19369427

  7. A cluster-based strategy for assessing the overlap between large chemical libraries and its application to a recent acquisition.

    PubMed

    Engels, Michael F M; Gibbs, Alan C; Jaeger, Edward P; Verbinnen, Danny; Lobanov, Victor S; Agrafiotis, Dimitris K

    2006-01-01

    We report on the structural comparison of the corporate collections of Johnson & Johnson Pharmaceutical Research & Development (JNJPRD) and 3-Dimensional Pharmaceuticals (3DP), performed in the context of the recent acquisition of 3DP by JNJPRD. The main objective of the study was to assess the druglikeness of the 3DP library and the extent to which it enriched the chemical diversity of the JNJPRD corporate collection. The two databases, at the time of acquisition, collectively contained more than 1.1 million compounds with a clearly defined structural description. The analysis was based on a clustering approach and aimed at providing an intuitive quantitative estimate and visual representation of this enrichment. A novel hierarchical clustering algorithm called divisive k-means was employed in combination with Kelley's cluster-level selection method to partition the combined data set into clusters, and the diversity contribution of each library was evaluated as a function of the relative occupancy of these clusters. Typical 3DP chemotypes enriching the diversity of the JNJPRD collection were catalogued and visualized using a modified maximum common substructure algorithm. The joint collection of JNJPRD and 3DP compounds was also compared to other databases of known medicinally active or druglike compounds. The potential of the methodology for the analysis of very large chemical databases is discussed.
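
    The cluster-occupancy idea in this record can be illustrated independently of the clustering algorithm itself. In this hedged sketch, cluster assignments for the merged collection are assumed to be given (the paper computes them with divisive k-means); the fraction of clusters occupied exclusively by the acquired library serves as a simple enrichment measure. All names and data are toy values.

```python
from collections import defaultdict

def occupancy_enrichment(cluster_ids, library_labels):
    """Fraction of clusters occupied only by the acquired library "B";
    such clusters mark chemotypes that enrich the parent collection "A"."""
    occupants = defaultdict(set)
    for cid, lib in zip(cluster_ids, library_labels):
        occupants[cid].add(lib)
    b_only = sum(1 for libs in occupants.values() if libs == {"B"})
    return b_only / len(occupants)

# toy example: 5 clusters over 8 compounds from two libraries
clusters = [0, 0, 1, 1, 2, 3, 3, 4]
labels = ["A", "B", "A", "A", "B", "B", "B", "A"]
print(occupancy_enrichment(clusters, labels))  # clusters 2 and 3 are B-only
```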

  8. Dynamic circuitry for updating spatial representations. II. Physiological evidence for interhemispheric transfer in area LIP of the split-brain macaque.

    PubMed

    Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L

    2005-11-01

    With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.

  9. Economical analysis of saturation mutagenesis experiments

    PubMed Central

    Acevedo-Rocha, Carlos G.; Reetz, Manfred T.; Nov, Yuval

    2015-01-01

    Saturation mutagenesis is a powerful technique for engineering proteins, metabolic pathways and genomes. In spite of its numerous applications, creating high-quality saturation mutagenesis libraries remains a challenge, as various experimental parameters influence the resulting diversity in a complex manner. We explore various aspects of saturation mutagenesis library preparation from an economic perspective: We introduce a cheaper and faster control for assessing library quality based on liquid media; analyze the role of primer purity and supplier in libraries with and without redundancy; compare library quality, yield, randomization efficiency, and annealing bias using traditional and emergent randomization schemes based on mixtures of mutagenic primers; and establish a methodology for choosing the most cost-effective randomization scheme given the screening costs and other experimental parameters. We show that by carefully considering these parameters, laboratory expenses can be significantly reduced. PMID:26190439
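
    The oversampling arithmetic behind cost-effective screening can be illustrated with a standard idealization (an assumption here, not the paper's exact model): if all V codon variants are equiprobable, the expected library coverage after screening T clones is 1 - exp(-T/V), so a target coverage requires T = -V·ln(1 - coverage) clones.

```python
import math

# Oversampling estimate for saturation mutagenesis screening, assuming
# equiprobable variants (an idealization; real libraries show bias).
def clones_needed(n_positions, codons_per_position=32, coverage=0.95):
    V = codons_per_position ** n_positions  # e.g. NNK randomization: 32 codons
    return math.ceil(-V * math.log(1 - coverage))

for k in (1, 2, 3):
    print(k, clones_needed(k))
```

    The roughly 3x oversampling at 95% coverage grows to about 4.6x at 99%, which is one reason reducing redundancy in the randomization scheme translates directly into reduced screening cost.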

  10. Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.

    PubMed

    Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A

    2012-01-03

    Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) have rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage particularly across AT and GC rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low quantity starting material and tolerant to extremely high AT content sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of sequence data generated, we show that our optimized conditions that involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extreme of base composition.
This development will greatly benefit sequencing clinical samples that often require amplification due to low mass of DNA starting material.

  11. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed

    Halub, L P

    1999-07-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.

  12. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed Central

    Halub, L P

    1999-01-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services. PMID:10427423

  13. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against ~2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
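
    The map/reduce decomposition described above can be sketched with the standard library only. This is a toy stand-in for Spark-VS, with a hypothetical scoring function in place of real docking software; the real system distributes the map step across Spark executors and calls external docking programs.

```python
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool stands in for Spark executors

def dock(smiles):
    """Hypothetical scoring stand-in; a real pipeline shells out to docking
    software and parses the pose score (lower = better)."""
    return (smiles, -float(len(smiles)))

def keep_best(acc, scored, k=3):
    """Reduce step: keep the k lowest-scoring (best) hits seen so far."""
    return sorted(acc + [scored], key=lambda t: t[1])[:k]

library = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "O", "CCN(CC)CC"]
with Pool(2) as pool:
    scored = pool.map(dock, library)    # map: dock compounds in parallel
best = reduce(keep_best, scored, [])    # reduce: merge into a top-hit list
print(best)
```

    Because the reduce step is associative, partial top-k lists from different workers can be merged in any order, which is what makes the task tolerant of stragglers and failures on commodity hardware.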

  14. Systematic Testing of Belief-Propagation Estimates for Absolute Free Energies in Atomistic Peptides and Proteins.

    PubMed

    Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M

    2018-01-09

    Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and it has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate-indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.
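
    On a chain (loop-free) graph, BP messages reduce exactly to the transfer-matrix recursion the abstract mentions. The sketch below, with made-up pairwise energies and a three-state "rotamer library", checks the message-passing free energy against brute-force enumeration; on loopy graphs, as in the paper, the BP result is only approximate.

```python
import itertools
import math

# Chain of 5 "side chains", each with a 3-state discrete conformer library;
# pairwise energies E[i][a][b] are made up for illustration.
beta = 1.0
n_sites, n_states = 5, 3
E = [[[0.3 * ((i + a - b) % 3) for b in range(n_states)]
      for a in range(n_states)] for i in range(n_sites - 1)]

def z_message_passing():
    msg = [1.0] * n_states                     # message entering site 0
    for i in range(n_sites - 1):               # sweep messages along the chain
        msg = [sum(msg[a] * math.exp(-beta * E[i][a][b]) for a in range(n_states))
               for b in range(n_states)]
    return sum(msg)

def z_brute_force():
    return sum(math.exp(-beta * sum(E[i][c[i]][c[i + 1]] for i in range(n_sites - 1)))
               for c in itertools.product(range(n_states), repeat=n_sites))

F_bp = -math.log(z_message_passing()) / beta   # absolute free energy, kT = 1
F_exact = -math.log(z_brute_force()) / beta
print(F_bp, F_exact)                           # identical on a chain
```

    The message-passing pass costs O(n·s²) versus O(sⁿ) for enumeration, which is the source of the ~1 s run times quoted above once pairwise interactions are precalculated.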

  15. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

    We have developed a technique for estimating the 3D motion of a knee prosthesis from its 2D perspective projections. Since Fourier descriptors were used for compact representation of the library templates and of the contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required. This caused a problem: our algorithm did not function when the silhouettes of the tibial and femoral components overlapped. Here we propose a novel method to overcome this, which proceeds in two steps. First, the part of the silhouette contour missing due to overlap was interpolated using a free-form curve such as a Bézier curve, and the first-step position/orientation estimation was performed. In the next step, a clipping window was set in the projective coordinate system to separate the overlapped silhouettes drawn using the first-step estimates. A localized library whose templates were clipped to the same window was then prepared, and the second-step estimation was performed. Computer model simulation demonstrated sufficient accuracy of position/orientation estimation even for overlapped silhouettes, equivalent to that obtained without overlap.
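
    The compact contour representation the abstract relies on can be illustrated with generic Fourier descriptors (a textbook version, not the authors' exact normalization): dropping the DC coefficient removes translation, dividing by |c1| removes scale, and taking magnitudes removes rotation and starting point.

```python
import cmath
import math

# Generic Fourier descriptors of a closed contour, with points encoded as
# complex numbers. The square contour below is a toy example.

def fourier_descriptors(contour, n_coeffs=8):
    N = len(contour)
    coeffs = [sum(contour[n] * cmath.exp(-2j * math.pi * k * n / N)
                  for n in range(N)) / N
              for k in range(n_coeffs)]
    scale = abs(coeffs[1])                       # dominant coefficient sets scale
    return [abs(c) / scale for c in coeffs[1:]]  # drop c0: translation invariance

square = [complex(x, y) for x, y in
          [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]]
moved = [p * cmath.exp(0.7j) + (3 + 4j) for p in square]  # rotate and translate
d1, d2 = fourier_descriptors(square), fourier_descriptors(moved)
print(max(abs(a - b) for a, b in zip(d1, d2)))   # ~0: same shape descriptor
```

    Truncating to a few coefficients gives the compact templates used for matching, and it is this dependence on the whole closed contour that breaks down when silhouettes overlap.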

  16. Uncovering collective listening habits and music genres in bipartite networks.

    PubMed

    Lambiotte, R; Ausloos, M

    2005-12-01

    In this paper, we analyze web-downloaded data on people sharing their music library, that we use as their individual musical signatures. The system is represented by a bipartite network, nodes being the music groups and the listeners. Music groups' audience size behaves like a power law, but the individual music library size is an exponential with deviations at small values. In order to extract structures from the network, we focus on correlation matrices, that we filter by removing the least correlated links. This percolation idea-based method reveals the emergence of social communities and music genres, that are visualized by a branching representation. Evidence of collective listening habits that do not fit the neat usual genres defined by the music industry indicates an alternative way of classifying listeners and music groups. The structure of the network is also studied by a more refined method, based upon a random walk exploration of its properties. Finally, a personal identification-community imitation model for growing bipartite networks is outlined, following Potts ingredients. Simulation results do reproduce quite well the empirical data.
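
    The correlation-filtering step can be sketched in a few lines: listener-listener correlations are taken here as cosine similarities between music libraries, and the least-correlated links are removed, after which communities survive as connected clusters. Data and threshold are toy choices, not the paper's.

```python
from itertools import combinations

# Toy bipartite data: each listener's music library is a set of groups.
libraries = {
    "u1": {"A", "B", "C"}, "u2": {"A", "B"}, "u3": {"B", "C"},
    "u4": {"X", "Y"}, "u5": {"X", "Y", "Z"},
}

def cosine(s, t):
    """Cosine similarity between two set-valued libraries."""
    return len(s & t) / (len(s) * len(t)) ** 0.5

edges = {(u, v): cosine(libraries[u], libraries[v])
         for u, v in combinations(sorted(libraries), 2)}
kept = {pair for pair, c in edges.items() if c >= 0.5}  # percolation-style filter
print(sorted(kept))  # two communities: {u1,u2,u3} and {u4,u5}
```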

  17. Access control and privilege management in electronic health record: a systematic literature review.

    PubMed

    Jayabalan, Manoj; O'Daniel, Thomas

    2016-12-01

    This study presents a systematic literature review of access control for electronic health record systems to protect patients' privacy. Articles from 2006 to 2016 were extracted from the ACM Digital Library, IEEE Xplore Digital Library, Science Direct, MEDLINE, and MetaPress using broad eligibility criteria, and chosen for inclusion based on analysis of ISO22600. Cryptographic standards and methods were left outside the scope of this review. Three broad classes of models are being actively investigated and developed: access control for electronic health records, access control for interoperability, and access control for risk analysis. Traditional role-based access control models are extended with spatial, temporal, probabilistic, dynamic, and semantic aspects to capture contextual information and provide granular access control. Maintenance of audit trails and facilities for overriding normal roles to allow full access in emergency cases are common features. Access privilege frameworks utilizing ontology-based knowledge representation for defining the rules have attracted considerable interest, due to the higher level of abstraction that makes it possible to model domain knowledge and validate access requests efficiently.

  18. Uncovering collective listening habits and music genres in bipartite networks

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Ausloos, M.

    2005-12-01

    In this paper, we analyze web-downloaded data on people sharing their music library, that we use as their individual musical signatures. The system is represented by a bipartite network, nodes being the music groups and the listeners. Music groups’ audience size behaves like a power law, but the individual music library size is an exponential with deviations at small values. In order to extract structures from the network, we focus on correlation matrices, that we filter by removing the least correlated links. This percolation idea-based method reveals the emergence of social communities and music genres, that are visualized by a branching representation. Evidence of collective listening habits that do not fit the neat usual genres defined by the music industry indicates an alternative way of classifying listeners and music groups. The structure of the network is also studied by a more refined method, based upon a random walk exploration of its properties. Finally, a personal identification-community imitation model for growing bipartite networks is outlined, following Potts ingredients. Simulation results do reproduce quite well the empirical data.

  19. JSBML: a flexible Java library for working with SBML.

    PubMed

    Dräger, Andreas; Rodriguez, Nicolas; Dumousseau, Marine; Dörr, Alexander; Wrzodek, Clemens; Le Novère, Nicolas; Zell, Andreas; Hucka, Michael

    2011-08-01

    The specifications of the Systems Biology Markup Language (SBML) define standards for storing and exchanging computer models of biological processes in text files. In order to perform model simulations, graphical visualizations and other software manipulations, an in-memory representation of SBML is required. We developed JSBML for this purpose. In contrast to prior implementations of SBML APIs, JSBML has been designed from the ground up for the Java programming language, and can therefore be used on all platforms supported by a Java Runtime Environment. This offers important benefits for Java users, including the ability to distribute software as Java Web Start applications. JSBML supports all SBML Levels and Versions through Level 3 Version 1, and we have strived to maintain the highest possible degree of compatibility with the popular library libSBML. JSBML also supports modules that can facilitate the development of plugins for end user applications, as well as ease migration from a libSBML-based backend. Source code, binaries and documentation for JSBML can be freely obtained under the terms of the LGPL 2.1 from the website http://sbml.org/Software/JSBML.

  20. Interdisciplinary multiinstitutional alliances in support of educational programs for health sciences librarians.

    PubMed Central

    Smith, L C

    1996-01-01

    This project responds to the need to identify the knowledge, skills, and expertise required by health sciences librarians in the future and to devise mechanisms for providing this requisite training. The approach involves interdisciplinary multiinstitutional alliances with collaborators drawn from two graduate schools of library and information science (University of Illinois at Urbana-Champaign and Indiana University) and two medical schools (University of Illinois at Chicago and Washington University). The project encompasses six specific aims: (1) investigate the evolving role of the health sciences librarian; (2) analyze existing programs of study in library and information science at all levels at Illinois and Indiana; (3) develop opportunities for practicums, internships, and residencies; (4) explore the possibilities of computing and communication technologies to enhance instruction; (5) identify mechanisms to encourage faculty and graduate students to participate in medical informatics research projects; and (6) create recruitment strategies to achieve better representation of currently underrepresented groups. The project can serve as a model for other institutions interested in regional collaboration to enhance graduate education for health sciences librarianship. PMID:8913560

  1. A multidimensional representation model of geographic features

    USGS Publications Warehouse

    Usery, E. Lynn; Timson, George; Coletti, Mark

    2016-01-28

    A multidimensional model of geographic features has been developed and implemented with data from The National Map of the U.S. Geological Survey. The model, programmed in C++ and implemented as a feature library, was tested with data from the National Hydrography Dataset demonstrating the capability to handle changes in feature attributes, such as increases in chlorine concentration in a stream, and feature geometry, such as the changing shoreline of barrier islands over time. Data can be entered directly, from a comma separated file, or features with attributes and relationships can be automatically populated in the model from data in the Spatial Data Transfer Standard format.
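
    A minimal sketch of time-varying feature state, in the spirit of the model described; the class and attribute names are hypothetical illustrations, not the USGS feature-library API. Each feature keeps a time-ordered list of states, and queries return the state in effect at a given date.

```python
import bisect
from datetime import date

class Feature:
    """Hypothetical geographic feature with time-stamped attribute states."""

    def __init__(self, name):
        self.name = name
        self.times, self.states = [], []        # parallel lists, sorted by time

    def add_state(self, when, state):
        i = bisect.bisect(self.times, when)     # keep chronological order
        self.times.insert(i, when)
        self.states.insert(i, state)

    def state_at(self, when):
        """State in effect at `when`, or None before any recorded state."""
        i = bisect.bisect_right(self.times, when) - 1
        return self.states[i] if i >= 0 else None

stream = Feature("example stream")              # illustrative data, not NHD values
stream.add_state(date(2010, 1, 1), {"chlorine_mg_per_l": 0.2})
stream.add_state(date(2015, 1, 1), {"chlorine_mg_per_l": 0.5})
print(stream.state_at(date(2012, 6, 1)))
```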

  2. France, Germany, Greece and the United Kingdom: An Analysis and Comparison of Budget Deficits and Defense Spending

    DTIC Science & Technology

    2011-09-01

    reduced to avoid inflation (Mitchell, 2005).” In the mid-1950s, Milton Friedman was arguing against Keynes’ theories on how to foster economic growth...economics edited by David H. Henderson. “Friedman.” Retrieved from: http://www.econlib.org/library/Enc/bios/Friedman.html Library of Economics and

  3. Creating Open Textbooks: A Unique Partnership between Oregon State University Libraries and Press and Open Oregon State

    ERIC Educational Resources Information Center

    Chadwell, Faye A.; Fisher, Dianna M.

    2016-01-01

    This article presents Oregon State University's experience launching an innovative Open Textbook initiative in spring 2014. The partners, Open Oregon State and the Oregon State University Libraries and Press, aimed to reduce the cost of course materials for students while ensuring the content created was peer-reviewed and employed multimedia…

  4. Scholarly Communication and the Dilemma of Collective Action: Why Academic Journals Cost Too Much

    ERIC Educational Resources Information Center

    Wenzler, John

    2017-01-01

    Why has the rise of the Internet--which drastically reduces the cost of distributing information--coincided with drastic increases in the prices that academic libraries pay for access to scholarly journals? This study argues that libraries are trapped in a collective action dilemma as defined by economist Mancur Olson in "The Logic of…

  5. Multiresolution representation and numerical algorithms: A brief review

    NASA Technical Reports Server (NTRS)

    Harten, Amiram

    1994-01-01

    In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
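
    The thresholding idea can be made concrete with a Haar multiresolution transform, the simplest instance of the local-scale decompositions reviewed: small detail coefficients are zeroed, and the inverse transform yields a compressed approximation. The signal and threshold below are illustrative.

```python
def haar_forward(data):
    """In-place multilevel Haar transform: [average | coarse..fine details]."""
    out, n = list(data), len(data)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        det = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avg + det
        n = half
    return out

def haar_inverse(coeffs):
    """Invert haar_forward exactly (x = a + d, y = a - d at each level)."""
    out, n = list(coeffs), 1
    while n < len(out):
        avg, det = out[:n], out[n:2 * n]
        out[:2 * n] = [v for a, d in zip(avg, det) for v in (a + d, a - d)]
        n *= 2
    return out

signal = [4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 8.2, 8.0]
coeffs = haar_forward(signal)
compressed = [c if abs(c) > 0.2 else 0.0 for c in coeffs]  # drop small scales
approx = haar_inverse(compressed)
print(sum(c != 0 for c in compressed), "nonzero coefficients kept")
```

    Here 8 samples compress to 2 coefficients while the reconstruction error stays below the threshold, illustrating how eliminating small scale-coefficients yields a sparse approximate representation of a solution or solution operator.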

  6. Automated graphic image generation system for effective representation of infectious disease surveillance data.

    PubMed

    Inoue, Masashi; Hasegawa, Shinsaku; Suyama, Akihiko; Meshitsuka, Shunsuke

    2003-11-01

    Infectious disease surveillance schemes have been established to detect infectious disease outbreak in the early stages, to identify the causative viral strains, and to rapidly assess related morbidity and mortality. To make a scheme function well, two things are required. Firstly, it must have sufficient sensitivity and be timely to guarantee as short a delay as possible from collection to redistribution of information. Secondly, it must provide a good representation of the results of the surveillance. To do this, we have developed a database system that can redistribute the information via the Internet. The feature of this system is to automatically generate the graphic images based on the numerical data stored in the database by using Hypertext Preprocessor (PHP) script and Graphics Drawing (GD) library. It dynamically displays the information as a map or bar chart as well as a numerical impression according to the real time demand of the users. This system will be a useful tool for medical personnel and researchers working on infectious disease problems and will save significant time in the redistribution of information.

  7. Changing Mental Models of the IT Professions: A Theoretical Framework

    ERIC Educational Resources Information Center

    Agosto, Denise E.; Gasson, Susan; Atwood, Michael

    2008-01-01

    It is widely recognized that the current and projected shortage of adequately-educated IT professionals could be greatly reduced if more female and minority students would major in IT disciplines, yet the dramatic under-representation of these populations appears to be worsening. This under-representation is reflected in Drexel University's…

  8. Systematic cloning of human minisatellites from ordered array charomid libraries.

    PubMed

    Armour, J A; Povey, S; Jeremiah, S; Jeffreys, A J

    1990-11-01

    We present a rapid and efficient method for the isolation of minisatellite loci from human DNA. The method combines cloning a size-selected fraction of human MboI DNA fragments in a charomid vector with hybridization screening of the library in ordered array. Size-selection of large MboI fragments enriches for the longer, more variable minisatellites and reduces the size of the library required. The library was screened with a series of multi-locus probes known to detect a large number of hypervariable loci in human DNA. The gridded library allowed both the rapid processing of positive clones and the comparative evaluation of the different multi-locus probes used, in terms of both the relative success in detecting hypervariable loci and the degree of overlap between the sets of loci detected. We report 23 new human minisatellite loci isolated by this method, which map to 14 autosomes and the sex chromosomes.

  9. Delivering a MOOC for literature searching in health libraries: evaluation of a pilot project.

    PubMed

    Young, Gil; McLaren, Lisa; Maden, Michelle

    2017-12-01

In an era when library budgets are being reduced, Massive Open Online Courses (MOOCs) can offer practical and viable alternatives to the delivery of costly face-to-face training courses. In this study, guest writers Gil Young from Health Care Libraries Unit - North, Lisa McLaren from Brighton and Sussex Medical School and Liverpool University PhD student Michelle Maden describe the outcomes of a funded project they led to develop a MOOC to deliver literature search training for health librarians. Funded by Health Education England, the MOOC was developed by the Library and Information Health Network North West as a pilot project that ran for six weeks. In particular, the MOOC target audience is discussed, how content was developed for the MOOC, promotion and participation, cost-effectiveness, evaluation, the impact of the MOOC and recommendations for future development. H. S. © 2017 Health Libraries Group.

  10. Library management in the tight budget seventies. Problems, challenges, and opportunities.

    PubMed

    White, H S

    1977-01-01

    This paper examines changes in the management of university, special, and medical libraries brought about by the budget curtailments that followed the more affluent funding period of the mid-1960s. Based on a study conducted for the National Science Foundation by the Indiana University Graduate Library School, this paper deals with misconceptions that have arisen in the relationship between publishers and librarians, and differentiates between the priority perceptions of academic and of special librarians in the allocation of progressively scarcer resources. It concludes that libraries must make strong efforts to reduce the growing erosion of materials acquisitions budgets because of growing labor costs as a percentage of all library expenditures; that they must make a working reality of the resource-sharing mechanisms established through consortia and networks; and that they must use advanced evaluative techniques in the determination of which services and programs to implement, expand, and retain, and which to curtail and abandon.

  11. One small community hospital library's successful outsourcing of document delivery: an ongoing study.

    PubMed

    Haas, V

    2000-01-01

When DOCLINE was implemented in 1985, community hospital librarians were beginning to feel the economic pressures of the changing health care arena; however, staff and resources were often sufficient or even plentiful. Now, fifteen years after the creation of DOCLINE, many small hospitals either no longer have a librarian, have an assistant managing the library, have one librarian managing several libraries of an integrated system, or have reduced the number of librarians. A system that is heavily staff dependent is no longer feasible. In addition, as the role of the community hospital librarian evolves into one of instructor and patient education liaison, a system that does not permit the librarian to expand such services will be detrimental to the entire library program. Following is a discussion of one small community hospital's decision to outsource document delivery services as a result of staffing changes and the expansion of additional library programs.

  12. DNA-Encoded Chemical Libraries: A Selection System Based on Endowing Organic Compounds with Amplifiable Information.

    PubMed

    Neri, Dario; Lerner, Richard A

    2018-06-20

The discovery of organic ligands that bind specifically to proteins is a central problem in chemistry, biology, and the biomedical sciences. The encoding of individual organic molecules with distinctive DNA tags, serving as amplifiable identification bar codes, allows the construction and screening of combinatorial libraries of unprecedented size, thus facilitating the discovery of ligands to many different protein targets. Fundamentally, the approach links the power of genetics with that of chemical synthesis. After the initial description of DNA-encoded chemical libraries in 1992, several experimental embodiments of the technology have been reduced to practice. This review provides a historical account of important milestones in the development of DNA-encoded chemical libraries, a survey of relevant ongoing research activities, and a glimpse into the future.

  13. Development of a standardized, citywide process for managing smart-pump drug libraries.

    PubMed

    Walroth, Todd A; Smallwood, Shannon; Arthur, Karen; Vance, Betsy; Washington, Alana; Staublin, Therese; Haslar, Tammy; Reddan, Jennifer G; Fuller, James

    2018-06-15

    Development and implementation of an interprofessional consensus-driven process for review and optimization of smart-pump drug libraries and dosing limits are described. The Indianapolis Coalition for Patient Safety (ICPS), which represents 6 Indianapolis-area health systems, identified an opportunity to reduce clinically insignificant alerts that smart infusion pumps present to end users. Through a consensus-driven process, ICPS aimed to identify best practices to implement at individual hospitals in order to establish specific action items for smart-pump drug library optimization. A work group of pharmacists, nurses, and industrial engineers met to evaluate variability within and lack of scrutiny of smart-pump drug libraries. The work group used Lean Six Sigma methodologies to generate a list of key needs and barriers to be addressed in process standardization. The group reviewed targets for smart-pump drug library optimization, including dosing limits, types of alerts reviewed, policies, and safety best practices. The work group also analyzed existing processes at each site to develop a final consensus statement outlining a model process for reviewing alerts and managing smart-pump data. Analysis of the total number of alerts per device across ICPS-affiliated health systems over a 4-year period indicated a 50% decrease (from 7.2 to 3.6 alerts per device per month) after implementation of the model by ICPS member organizations. Through implementation of a standardized, consensus-driven process for smart-pump drug library optimization, ICPS member health systems reduced clinically insignificant smart-pump alerts. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  14. Mapping of Drug-like Chemical Universe with Reduced Complexity Molecular Frameworks.

    PubMed

    Kontijevskis, Aleksejs

    2017-04-24

The emergence of the DNA-encoded chemical libraries (DEL) field in the past decade has attracted the attention of the pharmaceutical industry as a powerful mechanism for the discovery of novel drug-like hits for various biological targets. Nuevolution Chemetics technology enables DNA-encoded synthesis of billions of chemically diverse drug-like small-molecule compounds, and the efficient screening and optimization of these, facilitating effective identification of drug candidates at an unprecedented speed and scale. Although many approaches have been developed by the cheminformatics community for the analysis and visualization of drug-like chemical space, most of them are restricted to the analysis of at most a few million compounds and cannot handle collections of 10^8-10^12 compounds typical for DELs. To address this big chemical data challenge, we developed the Reduced Complexity Molecular Frameworks (RCMF) methodology as an abstract and very general way of representing chemical structures. By further introducing RCMF descriptors, we constructed a global framework map of drug-like chemical space and demonstrated how the chemical space occupied by multi-million-member drug-like Chemetics DNA-encoded libraries and virtual combinatorial libraries with >10^12 members could be analyzed and mapped without a need for library enumeration. We further validate the approach by performing RCMF-based searches in a drug-like chemical universe and mapping Chemetics library selection outputs for the LSD1 target on a global framework chemical-space map.

  15. Procurement of Shared Data Instruments for Research Electronic Data Capture (REDCap)

    PubMed Central

    Obeid, Jihad S; McGraw, Catherine A; Minor, Brenda L; Conde, José G; Pawluk, Robert; Lin, Michael; Wang, Janey; Banks, Sean R; Hemphill, Sheree A; Taylor, Rob; Harris, Paul A

    2012-01-01

REDCap (Research Electronic Data Capture) is a web-based software solution and tool set that allows biomedical researchers to create secure online forms for data capture, management and analysis with minimal effort and training. The Shared Data Instrument Library (SDIL) is a relatively new component of REDCap that allows sharing of commonly used data collection instruments for immediate study use by research teams. Objectives of the SDIL project include: 1) facilitating reuse of data dictionaries and reducing duplication of effort; 2) promoting the use of validated data collection instruments, data standards and best practices; and 3) promoting research collaboration and data sharing. Instruments submitted to the library are reviewed by a library oversight committee, with rotating membership from multiple institutions, which ensures quality, relevance and legality of shared instruments. The design allows researchers to download the instruments in a consumable electronic format in the REDCap environment. At the time of this writing, the SDIL contains over 128 data collection instruments. Over 2500 instances of instruments have been downloaded by researchers at multiple institutions. In this paper we describe the library platform, provide detail about experience gained during the first 25 months of sharing public domain instruments and provide evidence of impact for the SDIL across the REDCap consortium research community. We postulate that the shared library of instruments reduces the burden of adhering to sound data collection principles while promoting best practices. PMID:23149159

  16. A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology

    NASA Technical Reports Server (NTRS)

    Hoy, Scott D.; Figueiredo, Marco A.

    2006-01-01

Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools as proposed by IEEE VHDL 2004, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL Fixed Point Reconfigurable Radio Component library does not require the use of FPGA vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in Mathworks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.
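The bit-accurate fixed-point behavior such a component library models can be illustrated in software (a hedged sketch, not the SARC VHDL library itself; the Q1.15 format and saturation policy are assumptions for the example):

```python
# Illustrative model of signed fixed-point arithmetic of the kind a
# fixed-point VHDL component library implements: values are stored as
# integers scaled by 2**frac_bits, with saturation on overflow.

def to_fixed(x, frac_bits, total_bits):
    """Quantize a real number to a signed fixed-point integer."""
    scaled = int(round(x * (1 << frac_bits)))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, scaled))        # saturate on overflow

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point values, rescaling the product."""
    return (a * b) >> frac_bits

# Q1.15 (16-bit) example: 0.5 * 0.25 = 0.125, bit-exact
a = to_fixed(0.5, 15, 16)
b = to_fixed(0.25, 15, 16)
p = fixed_mul(a, b, 15)
result = p / (1 << 15)
```

Because every intermediate value is an integer with a fixed scale, a software model like this can be compared bit-for-bit against the synthesized hardware, which is the verification property the abstract emphasizes.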

  17. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.

  18. Vertical distribution of major sulfate-reducing bacteria in a shallow eutrophic meromictic lake.

    PubMed

    Kubo, Kyoko; Kojima, Hisaya; Fukui, Manabu

    2014-10-01

    The vertical distribution of sulfate-reducing bacteria was investigated in a shallow, eutrophic, meromictic lake, Lake Harutori, located in a residential area of Kushiro, Japan. A steep chemocline, characterized by gradients of oxygen, sulfide and salinity, was found at a depth of 3.5-4.0 m. The sulfide concentration at the bottom of the lake was high (up to a concentration of 10.7 mM). Clone libraries were constructed using the aprA gene, which encodes adenosine-5'-phosphosulfate reductase subunit A, in order to monitor sulfate-reducing bacteria. In the aprA clone libraries, the most abundant sequences were those from the Desulfosarcina-Desulfococcus (DSS) group. A primer set for a DSS group-specific 16S rRNA gene was used to construct another clone library, analysis of which revealed that the uncultured group of sulfate-reducing bacteria, SEEP SRB-1, accounted for nearly half of the obtained sequences. Quantification of the major bacterial groups by catalyzed reporter deposition-fluorescence in situ hybridization demonstrated that the DSS group accounted for 3.2-4.8% of the total bacterial community below the chemocline. The results suggested that the DSS group was one of the major groups of sulfate-reducing bacteria and that these presumably metabolically versatile bacteria might play an important role in sulfur cycling in Lake Harutori. Copyright © 2014 Elsevier GmbH. All rights reserved.

  19. How Transparent Oxides Gain Some Color: Discovery of a CeNiO3 Reduced Bandgap Phase As an Absorber for Photovoltaics.

    PubMed

    Barad, Hannah-Noa; Keller, David A; Rietwyk, Kevin J; Ginsburg, Adam; Tirosh, Shay; Meir, Simcha; Anderson, Assaf Y; Zaban, Arie

    2018-06-11

In this work, we describe the formation of a reduced bandgap CeNiO3 phase, which, to our knowledge, has not been previously reported, and we show how it is utilized as an absorber layer in a photovoltaic cell. The CeNiO3 phase is prepared by a combinatorial materials science approach, where a library containing a continuous compositional spread of CexNi1-xOy is formed by pulsed laser deposition (PLD), a method that has not been used in the past to form Ce-Ni-O materials. The library displays a reduced bandgap throughout, calculated to be 1.48-1.77 eV, compared to the starting materials, CeO2 and NiO, which each have a bandgap of ∼3.3 eV. The materials library is further analyzed by X-ray diffraction to determine a new crystalline phase. By searching and comparing to the Materials Project database, the reduced bandgap CeNiO3 phase is realized. The CeNiO3 reduced bandgap phase is implemented as the absorber layer in a solar cell, and photovoltages up to 550 mV are achieved. The solar cells are also measured by surface photovoltage spectroscopy, which shows that the source of the photovoltaic activity is the reduced bandgap CeNiO3 phase, making it a viable material for solar energy.

  20. Impact of economic crisis on the social representation of mental health: Analysis of a decade of newspaper coverage.

    PubMed

    Dias Neto, David; Figueiras, Maria João; Campos, Sónia; Tavares, Patrícia

    2017-12-01

Mass media plays a fundamental role in how communities understand mental health and its treatment. However, the effect of major events such as economic crises on the depiction of mental health is still unclear. This study aimed at analyzing representations of mental health and its treatment and the impact of the 2008 economic crisis. In total, 1,000 articles were randomly selected from two newspapers, covering periods before and after the economic crisis. These articles were analyzed with a closed coding system that classified the news as good or bad news according to the presence of themes associated with positive or stigmatizing representations. The results show a positive representation of mental health and a negative representation of treatment. Furthermore, the economic crisis had a negative impact on the representation of mental health, but not on treatment. These findings suggest that the representation of mental health is multifaceted and may be affected differently in its dimensions. There is a need for stigma-reducing interventions that both account for this complexity and are sensitive to context and period.

  1. Positioning Your Library for Solar (and Financial) Gain. Improving Energy Efficiency, Lighting, and Ventilation with Primarily Passive Techniques

    ERIC Educational Resources Information Center

    Shane, Jackie

    2012-01-01

    This article stresses the importance of building design above technology as a relatively inexpensive way to reduce energy costs for a library. Emphasis is placed on passive solar design for heat and daylighting, but also examines passive ventilation and cooling, green roofs, and building materials. Passive design is weighed against technologies…

  2. The Placer Media Management System or Using the Computer in the Small Film Library.

    ERIC Educational Resources Information Center

    Luckey, Jacqueline

    In describing this media management system, which currently serves 84 public schools (K-12) in four rural counties east of Sacramento, this report suggests that the computer is a practical solution for film libraries trying to keep pace with increased use while not reducing their expenditures for purchasing and repairing film stock. The major…

  3. Gene recovery microdissection (GRM) a process for producing chromosome region-specific libraries of expressed genes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christian, A T; Coleman, M A; Tucker, J D

    2001-02-08

    Gene Recovery Microdissection (GRM) is a unique and cost-effective process for producing chromosome region-specific libraries of expressed genes. It accelerates the pace, reduces the cost, and extends the capabilities of functional genomic research, the means by which scientists will put to life-saving, life-enhancing use their knowledge of any plant or animal genome.

  4. Dryland biological soil crust cyanobacteria show unexpected decreases in abundance under long-term elevated CO2.

    PubMed

    Steven, Blaire; Gallegos-Graves, La Verne; Yeager, Chris M; Belnap, Jayne; Evans, R David; Kuske, Cheryl R

    2012-12-01

Biological soil crusts (biocrusts) cover soil surfaces in many drylands globally. The impacts of 10 years of elevated atmospheric CO2 on the cyanobacteria in biocrusts of an arid shrubland were examined at a large manipulated experiment in Nevada, USA. Cyanobacteria-specific quantitative PCR surveys of cyanobacteria small-subunit (SSU) rRNA genes suggested a reduction in biocrust cyanobacterial biomass in the elevated CO2 treatment relative to the ambient controls. Additionally, SSU rRNA gene libraries and shotgun metagenomes showed reduced representation of cyanobacteria in the total microbial community. Taxonomic composition of the cyanobacteria was similar under ambient and elevated CO2 conditions, indicating the decline was manifest across multiple cyanobacterial lineages. Recruitment of cyanobacteria sequences from replicate shotgun metagenomes to cyanobacterial genomes representing major biocrust orders also suggested decreased abundance of cyanobacteria sequences across the majority of genomes tested. Functional assignment of cyanobacteria-related shotgun metagenome sequences indicated that four subsystem categories, three related to oxidative stress, were differentially abundant in relation to the elevated CO2 treatment. Taken together, these results suggest that elevated CO2 effected a generalized decrease in cyanobacteria in the biocrusts and may have favoured cyanobacteria with altered gene inventories for coping with oxidative stress. © 2012 Society for Applied Microbiology and Blackwell Publishing Ltd.

  5. Dryland biological soil crust cyanobacteria show unexpected decreases in abundance under long-term elevated CO2

    USGS Publications Warehouse

    Steven, Blaire; Gallegos-Graves, La Verne; Yeager, Chris M.; Belnap, Jayne; Evans, R. David; Kuske, Cheryl R.

    2012-01-01

Biological soil crusts (biocrusts) cover soil surfaces in many drylands globally. The impacts of 10 years of elevated atmospheric CO2 on the cyanobacteria in biocrusts of an arid shrubland were examined at a large manipulated experiment in Nevada, USA. Cyanobacteria-specific quantitative PCR surveys of cyanobacteria small-subunit (SSU) rRNA genes suggested a reduction in biocrust cyanobacterial biomass in the elevated CO2 treatment relative to the ambient controls. Additionally, SSU rRNA gene libraries and shotgun metagenomes showed reduced representation of cyanobacteria in the total microbial community. Taxonomic composition of the cyanobacteria was similar under ambient and elevated CO2 conditions, indicating the decline was manifest across multiple cyanobacterial lineages. Recruitment of cyanobacteria sequences from replicate shotgun metagenomes to cyanobacterial genomes representing major biocrust orders also suggested decreased abundance of cyanobacteria sequences across the majority of genomes tested. Functional assignment of cyanobacteria-related shotgun metagenome sequences indicated that four subsystem categories, three related to oxidative stress, were differentially abundant in relation to the elevated CO2 treatment. Taken together, these results suggest that elevated CO2 effected a generalized decrease in cyanobacteria in the biocrusts and may have favoured cyanobacteria with altered gene inventories for coping with oxidative stress.

  6. Searching for microbial protein over-expression in a complex matrix using automated high throughput MS-based proteomics tools.

    PubMed

    Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob

    2013-03-10

    In the discovery of new enzymes genomic and cDNA expression libraries containing thousands of differential clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow for high throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix based on mass spectrometry techniques. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS(2) approach including MS(2) protein identification was developed for microorganisms with a complex protein secretome with a throughput of ∼2000 samples per week. TCA-induced protein precipitation enhanced by addition of bovine serum albumin is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detection of successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Leveraging technology and staffing in developing a new liaison program.

    PubMed

    Williams, Jeff; McCrillis, Aileen; McGowan, Richard; Nicholson, Joey; Surkis, Alisa; Thompson, Holly; Vieira, Dorice

    2014-01-01

    With nearly all library resources and services delivered digitally, librarians working for the New York University Health Sciences Library struggled with maintaining awareness of changing user needs, understanding barriers faced in using library resources and services, and determining knowledge management challenges across the organization. A liaison program was created to provide opportunities for librarians to meaningfully engage with users. The program was directed toward a subset of high-priority user groups to provide focused engagement with these users. Responsibility for providing routine reference service was reduced for liaison librarians to provide maximum time to engage with their assigned user communities.

  8. Structure-activity relationships amongst 4-position quinoline methanol antimalarials that inhibit the growth of drug sensitive and resistant strains of Plasmodium falciparum.

    PubMed

    Milner, Erin; McCalmont, William; Bhonsle, Jayendra; Caridha, Diana; Carroll, Dustin; Gardner, Sean; Gerena, Lucia; Gettayacamin, Montip; Lanteri, Charlotte; Luong, Thulan; Melendez, Victor; Moon, Jay; Roncal, Norma; Sousa, Jason; Tungtaeng, Anchalee; Wipf, Peter; Dow, Geoffrey

    2010-02-15

    Utilizing mefloquine as a scaffold, a next generation quinoline methanol (NGQM) library was constructed to identify early lead compounds that possess biological properties consistent with the target product profile for malaria chemoprophylaxis while reducing permeability across the blood-brain barrier. The library of 200 analogs resulted in compounds that inhibit the growth of drug sensitive and resistant strains of Plasmodium falciparum. Herein we report selected chemotypes and the emerging structure-activity relationship for this library of quinoline methanols. Published by Elsevier Ltd.

  9. Using Time-Driven Activity-Based Costing to Implement Change.

    PubMed

    Sayed, Ellen N; Laws, Sa'ad; Uthman, Basim

    2017-01-01

    Academic medical libraries have responded to changes in technology, evolving professional roles, reduced budgets, and declining traditional services. Libraries that have taken a proactive role to change have seen their librarians emerge as collaborators and partners with faculty and researchers, while para-professional staff is increasingly overseeing traditional services. This article addresses shifting staff and schedules at a single-service-point information desk by using time-driven activity-based costing to determine the utilization of resources available to provide traditional library services. Opening hours and schedules were changed, allowing librarians to focus on patrons' information needs in their own environment.

  10. The Perceptions of General Education Teachers about the Over-Representation of Black Students in Special Education

    ERIC Educational Resources Information Center

    Grice, David Roland

    2012-01-01

    Statement of the Problem: There is an over-representation of Black students in special education. Black students are typically referred for special education consideration by the end of the fourth grade. One effort to reduce the large number of referrals in Connecticut was "Courageous Conversations About Race." Courageous Conversations…

  11. 20 CFR 702.217 - Penalty for false statement, misrepresentation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... representation for the purpose of obtaining a benefit or payment under this Act shall be guilty of a felony, and... or representation for the purpose of reducing, denying or terminating benefits to an injured employee, or his dependents pursuant to section 9, 33 U.S.C. 909, if the injury results in death, shall be...

  12. Lexical Representation of Schwa Words: Two Mackerels, but Only One Salami

    ERIC Educational Resources Information Center

    Burki, Audrey; Gaskell, M. Gareth

    2012-01-01

The present study investigated the lexical representations underlying the production of English schwa words. Two types of schwa words were compared: words with a schwa in poststress position (e.g., mackerel), whose schwa and reduced variants differ in a categorical way, and words with a schwa in prestress position (e.g.,…

  13. Representation and display of vector field topology in fluid flow data sets

    NASA Technical Reports Server (NTRS)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
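The first step described above, locating critical points where the velocity vector vanishes, can be sketched for a sampled 2-D field (illustrative code only; the saddle field and grid are assumptions, and real implementations refine the candidate cells with interpolation and then classify each critical point):

```python
# Find grid cells of a sampled 2-D vector field where both velocity
# components change sign, so the vector can vanish inside the cell.
# These cells are candidates for the critical points that anchor the
# flow topology.

def sign_change(vals):
    return min(vals) < 0.0 < max(vals)

def find_critical_cells(u, v):
    """u, v: 2-D lists sampling the field. Returns (i, j) indices of
    the lower corners of cells that may contain a critical point."""
    cells = []
    for i in range(len(u) - 1):
        for j in range(len(u[0]) - 1):
            cu = [u[i][j], u[i + 1][j], u[i][j + 1], u[i + 1][j + 1]]
            cv = [v[i][j], v[i + 1][j], v[i][j + 1], v[i + 1][j + 1]]
            if sign_change(cu) and sign_change(cv):
                cells.append((i, j))
    return cells

xs = [-1.5, -0.5, 0.5, 1.5]
u = [[x for _ in xs] for x in xs]    # u(x, y) = x
v = [[-y for y in xs] for _ in xs]   # v(x, y) = -y  (a saddle at 0,0)
cells = find_critical_cells(u, v)
```

For this saddle field the only candidate cell is the one containing the origin; connecting such points by the tangent curves that emanate from them then yields the compact topological skeleton the abstract describes.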

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theophilou, Iris; Helbig, Nicole; Lathiotakis, Nektarios N.

    Functionals of the one-body reduced density matrix (1-RDM) are routinely minimized under Coleman’s ensemble N-representability conditions. Recently, the topic of pure-state N-representability conditions, also known as generalized Pauli constraints, received increased attention following the discovery of a systematic way to derive them for any number of electrons and any finite dimensionality of the Hilbert space. The target of this work is to assess the potential impact of the enforcement of the pure-state conditions on the results of reduced density-matrix functional theory calculations. In particular, we examine whether the standard minimization of typical 1-RDM functionals under the ensemble N-representability conditions violates the pure-state conditions for prototype 3-electron systems. We also enforce the pure-state conditions, in addition to the ensemble ones, for the same systems and functionals and compare the correlation energies and optimal occupation numbers with those obtained by the enforcement of the ensemble conditions alone.
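    For the prototype 3-electron systems mentioned, the generalized Pauli constraints have a well-known closed form in the Borland–Dennis setting (3 electrons in a six-dimensional one-particle Hilbert space). Stated here for illustration, with natural occupation numbers ordered $n_1 \ge \dots \ge n_6$ (notation assumed, not taken from the record):

```latex
% Borland--Dennis generalized Pauli constraints (3 electrons, 6 orbitals):
\begin{align}
  n_1 + n_6 &= n_2 + n_5 = n_3 + n_4 = 1, \\
  n_4 &\le n_5 + n_6,
\end{align}
% whereas the ensemble conditions require only
% 0 \le n_i \le 1 \quad \text{and} \quad \sum_{i} n_i = 3.
```

    A minimization satisfying only the weaker ensemble conditions can therefore land on occupation numbers that violate the equality or inequality constraints above, which is precisely what the work assesses.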

  15. The development of retrosynthetic glycan libraries to profile and classify the human serum N-linked glycome.

    PubMed

    Kronewitter, Scott R; An, Hyun Joo; de Leoz, Maria Lorna; Lebrilla, Carlito B; Miyamoto, Suzanne; Leiserowitz, Gary S

    2009-06-01

    Annotation of the human serum N-linked glycome is a formidable challenge but is necessary for disease marker discovery. A new theoretical glycan library was constructed and proposed to provide all possible glycan compositions in serum. It was developed based on established glycobiology and retrosynthetic state-transition networks. We find that at least 331 compositions are possible in the serum N-linked glycome. By pairing the theoretical glycan mass library with a high mass accuracy and high-resolution MS, human serum glycans were effectively profiled. Correct isotopic envelope deconvolution to monoisotopic masses and the high mass accuracy instruments drastically reduced the amount of false composition assignments. The high throughput capacity enabled by this library permitted the rapid glycan profiling of large control populations. With the use of the library, a human serum glycan mass profile was developed from 46 healthy individuals. This paper presents a theoretical N-linked glycan mass library that was used for accurate high-throughput human serum glycan profiling. Rapid methods for evaluating a patient's glycome are instrumental for studying glycan-based markers.

  16. An optimized library for reference-based deconvolution of whole-blood biospecimens assayed using the Illumina HumanMethylationEPIC BeadArray.

    PubMed

    Salas, Lucas A; Koestler, Devin C; Butler, Rondi A; Hansen, Helen M; Wiencke, John K; Kelsey, Karl T; Christensen, Brock C

    2018-05-29

    Genome-wide methylation arrays are powerful tools for assessing cell composition of complex mixtures. We compare three approaches to select reference libraries for deconvoluting neutrophil, monocyte, B-lymphocyte, natural killer, and CD4+ and CD8+ T-cell fractions based on blood-derived DNA methylation signatures assayed using the Illumina HumanMethylationEPIC array. The IDOL algorithm identifies a library of 450 CpGs, resulting in an average R² = 99.2 across cell types when applied to EPIC methylation data collected on artificial mixtures constructed from the above cell types. Of the 450 CpGs, 69% are unique to EPIC. This library has the potential to reduce unintended technical differences across array platforms.
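    IDOL itself iteratively optimizes the CpG library; the sketch below shows only the underlying projection step of reference-based deconvolution, simplified to two cell types so that the constrained least-squares fraction estimate has a closed form. All names are illustrative and the six-cell-type problem in the abstract would instead require a constrained solver.

```python
def estimate_two_cell_fractions(mixture, ref_a, ref_b):
    """Least-squares estimate of the fraction f of cell type A in a
    two-cell-type mixture, given mean reference methylation (beta) profiles
    over a CpG library. Solves min_f sum_i (m_i - f*a_i - (1-f)*b_i)^2 in
    closed form, then clamps f to the physically meaningful range [0, 1]."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixture, ref_a, ref_b))
    den = sum((a - b) ** 2 for a, b in zip(ref_a, ref_b))
    return min(1.0, max(0.0, num / den))
```

    The quality of such an estimate depends directly on how well the library CpGs separate the reference profiles, which is what library-selection methods like IDOL optimize.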

  17. Creation of a Radiation Hard 0.13 Micron CMOS Library at IHP

    NASA Astrophysics Data System (ADS)

    Jagdhold, U.

    2010-08-01

    To support space applications we will develop a 0.13 micron CMOS library which should be radiation hard up to 200 krad. By introducing new radiation-hard design rules we will minimize IC-level leakage and single event latchup (SEL). To reduce single event upset (SEU) we will add two p-MOS transistors to all flip-flops. For reliability reasons we will use double contacts in all library elements. The additional rules and the library elements will then be integrated in our Cadence mixed-signal design kit, Virtuoso IC6.1 [1]. A test chip will be produced with our in-house 0.13 micron BiCMOS technology, see Ref. [2]. Thereafter we will perform radiation tests according to the ESA specifications, see Refs. [3], [4].

  18. Comparison of seven protocols to identify fecal contamination sources using Escherichia coli

    USGS Publications Warehouse

    Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.

    2004-01-01

    Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.
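    The robustness test above hinges on a classifier being willing to answer "unknown origin". A minimal sketch of that idea (function names, the 0/1 marker-profile encoding, and the similarity threshold are all illustrative assumptions, not any of the seven evaluated protocols):

```python
def classify_isolate(profile, library, min_similarity=0.75):
    """Assign an E. coli isolate to the most similar host source in a
    known-source library, or to 'unknown origin' when no library profile is
    similar enough. Profiles are equal-length 0/1 marker vectors; similarity
    is the simple matching coefficient."""
    def similarity(p, q):
        return sum(1 for x, y in zip(p, q) if x == y) / len(p)

    best_host, best_sim = None, -1.0
    for host, ref in library.items():
        s = similarity(profile, ref)
        if s > best_sim:
            best_host, best_sim = host, s
    return best_host if best_sim >= min_similarity else "unknown origin"
```

    Protocols that lack such a rejection threshold force every isolate into a source category, which is the failure mode the study observed in four of the seven protocols.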

  19. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience

    PubMed Central

    Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin

    2009-01-01

    Objective Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. Conclusion With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community. PMID:20037671

  20. An intuitive Python interface for Bioconductor libraries demonstrates the utility of language translators.

    PubMed

    Gautier, Laurent

    2010-12-21

    Computer languages can be domain-related, and in the case of multidisciplinary projects, knowledge of several languages will be needed in order to quickly implement ideas. Moreover, each computer language has its own relative strengths, making some languages better suited than others for a given task. The Bioconductor project, based on the R language, has become a reference for the numerical processing and statistical analysis of data coming from high-throughput biological assays, providing a rich selection of methods and algorithms to the research community. At the same time, Python has matured as a rich and reliable language for the agile development of prototypes or final implementations, as well as for handling large data sets. The data structures and functions from Bioconductor can be exposed to Python as a regular library. This allows a fully transparent and native use of Bioconductor from Python, without one having to know the R language and with only a small community of translators required to know both. To demonstrate this, we have implemented such Python representations for key infrastructure packages in Bioconductor, letting a Python programmer handle annotation data, microarray data, and next-generation sequencing data. Bioconductor is now not solely reserved to R users. Building a Python application using Bioconductor functionality can be done as if Bioconductor were a Python package. Moreover, similar principles can be applied to other languages and libraries. Our Python package is available at: http://pypi.python.org/pypi/rpy2-bioconductor-extensions/.

  1. Retrieval from long-term memory reduces working memory representations for visual features and their bindings.

    PubMed

    van Lamsweerde, Amanda E; Beck, Melissa R; Elliott, Emily M

    2015-02-01

    The ability to remember feature bindings is an important measure of the ability to maintain objects in working memory (WM). In this study, we investigated whether both object- and feature-based representations are maintained in WM. Specifically, we tested the hypotheses that retaining a greater number of feature representations (i.e., both as individual features and bound representations) results in a more robust representation of individual features than of feature bindings, and that retrieving information from long-term memory (LTM) into WM would cause a greater disruption to feature bindings. In four experiments, we examined the effects of retrieving a word from LTM on shape and color-shape binding change detection performance. We found that binding changes were more difficult to detect than individual-feature changes overall, but that the cost of retrieving a word from LTM was the same for both individual-feature and binding changes.

  2. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. 
Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437

  3. Discovery of Pod Shatter-Resistant Associated SNPs by Deep Sequencing of a Representative Library Followed by Bulk Segregant Analysis in Rapeseed

    PubMed Central

    Huang, Shunmou; Yang, Hongli; Zhan, Gaomiao; Wang, Xinfa; Liu, Guihua; Wang, Hanzhong

    2012-01-01

    Background Single nucleotide polymorphisms (SNPs) are an important class of genetic marker for target gene mapping. As yet, there is no rapid and effective method to identify SNPs linked with agronomic traits in rapeseed and other crop species. Methodology/Principal Findings We demonstrate a novel method for identifying SNP markers in rapeseed by deep sequencing a representative library and performing bulk segregant analysis. With this method, SNPs associated with rapeseed pod shatter-resistance were discovered. First, a reduced representation of the rapeseed genome was used. Genomic fragments ranging from 450–550 bp were prepared from the susceptible bulk (ten F2 plants with a silique shattering resistance index, SSRI, <0.10) and the resistant bulk (ten F2 plants with SSRI >0.90), and Solexa sequencing produced 90 bp reads. Approximately 50 million of these sequence reads were assembled into contigs to a depth of 20-fold coverage. Second, 60,396 ‘simple SNPs’ were identified, and their statistical significance was evaluated using Fisher's exact test. The 70 associated SNPs whose –log10 p values exceeded 16 were selected for further analysis. These SNPs formed a tight cluster, with 14 associated SNPs falling within a 396 kb region on chromosome A09. Our evidence indicates that this region contains a major quantitative trait locus (QTL). Finally, two associated SNPs from this region were mapped to the major QTL region. Conclusions/Significance 70 associated SNPs were discovered and a major QTL for rapeseed pod shatter-resistance was found on chromosome A09 using our novel method. The associated SNP markers were used for mapping of the QTL, and may be useful for improving pod shatter-resistance in rapeseed through marker-assisted selection and map-based cloning. This approach will accelerate the discovery of major QTLs and the cloning of functional genes for important agronomic traits in rapeseed and other crop species. 
PMID:22529909

  4. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. 
Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.
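    The code-based side of the van der Pol comparison above can be reproduced in a few lines of plain Python. This stdlib-only sketch (function name and default parameters are illustrative, not from the paper) integrates x'' − mu·(1 − x²)·x' + x = 0 with a classical 4th-order Runge–Kutta scheme, the hand-written analogue of the Simulink data-flow solution.

```python
def van_der_pol_rk4(mu=1.0, x0=2.0, v0=0.0, dt=0.01, steps=2000):
    """Integrate the van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0
    as a first-order system (x, v) with classical RK4; returns the
    trajectory as a list of (x, v) pairs."""
    def deriv(x, v):
        return v, mu * (1.0 - x * x) * v - x

    x, v = x0, v0
    trajectory = [(x, v)]
    for _ in range(steps):
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        trajectory.append((x, v))
    return trajectory
```

    In a block-diagram tool the same system is expressed by wiring integrator, gain, and product blocks, which is exactly the burden reduction the conclusions describe.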

  5. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  6. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    NASA Astrophysics Data System (ADS)

    Tsilifis, Panagiotis; Ghanem, Roger G.

    2017-07-01

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
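    The basis-rotation idea in the two records above can be summarized compactly (notation here is assumed for illustration, not quoted from the papers): an orthogonal change of the Gaussian input variables concentrates the Homogeneous Chaos expansion in a few adapted coordinates.

```latex
% Polynomial chaos expansion in the original Gaussian inputs xi:
u(\xi) = \sum_{\alpha} u_\alpha \, \psi_\alpha(\xi),
\qquad \xi = (\xi_1, \dots, \xi_n)
% Rotate the Gaussian basis with an isometry A (A A^{\mathsf T} = I):
\qquad \eta = A^{\mathsf T} \xi
% Reduced representation in the first d adapted coordinates:
u(\xi) \;\approx\; \sum_{\beta} \tilde{u}_\beta \, \psi_\beta(\eta_1, \dots, \eta_d),
\qquad d \ll n
```

    Because η is again a standard Gaussian vector, the rotated expansion remains a valid Wiener chaos representation while the induced probability measure is concentrated in the lower-dimensional subspace spanned by η₁, …, η_d.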

  7. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. 
SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques. The Finite difference formulation of the explicit method is the Forward-difference explicit approximation. The formulation of the implicit method is the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Flight Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001). The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. 
SINDA'85/FLUINT version 2.3 was released in 1990.
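    The forward-difference explicit scheme named above has a very small core for a lumped-parameter thermal network. This hedged sketch (names and data layout are illustrative, not SINDA's actual internals) advances all node temperatures by one time step given node capacitances and linear conductors.

```python
def explicit_step(temps, capacitances, conductors, dt):
    """One forward-difference (explicit) update of a lumped-parameter thermal
    network: temps[i] and capacitances[i] describe node i, and conductors is
    a list of (i, j, G) tuples linking nodes i and j with conductance G."""
    heat = [0.0] * len(temps)
    for i, j, g in conductors:
        q = g * (temps[j] - temps[i])   # heat flow into node i from node j
        heat[i] += q
        heat[j] -= q
    # T_new = T + dt * Q / C at every node
    return [t + dt * q / c for t, q, c in zip(temps, heat, capacitances)]
```

    As with any explicit scheme, the step size must stay below the stability limit set by the smallest C/G time constant in the network; the Crank–Nicolson option mentioned above relaxes that restriction at the cost of an implicit solve.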

  8. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    PubMed

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes, and the appearance of drug resistance in some patients, have raised significant interest in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%–95.01% of inhibitors and 99.81%–99.90% of non-inhibitors in 5-fold cross-validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem compounds, 1,496 (0.89%) of 168K MDDR compounds, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not previously reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
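    Yield and false-hit rate, the two figures of merit the abstract emphasizes, reduce to simple ratios. The definitions below are the conventional ones and are an assumption of this sketch, not quoted from the paper:

```python
def screening_metrics(n_true_hits_found, n_true_actives, n_virtual_hits):
    """Figures of merit for a virtual screen:
    yield          = fraction of known actives recovered by the screen;
    false-hit rate = fraction of reported virtual hits that are not
                     known actives."""
    yield_rate = n_true_hits_found / n_true_actives
    false_hit_rate = (n_virtual_hits - n_true_hits_found) / n_virtual_hits
    return yield_rate, false_hit_rate
```

    For example, recovering 31 of 44 known post-2011 inhibitors while reporting 100 virtual hits would give a yield of about 70% and a false-hit rate of 69% under these definitions.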

  9. MapReduce implementation of a hybrid spectral library-database search method for large-scale peptide identification.

    PubMed

    Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J

    2011-11-01

    A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code, along with user documentation, is available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. Contact: ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
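    The map/reduce decomposition of spectral matching can be sketched in miniature. This is a simplified stand-in, not MR-MSPolygraph's code: the dot-product scorer replaces MSPolygraph's hybrid matcher, spectra are plain {m/z: intensity} dicts, and in a real Hadoop job the map tasks would run on separate workers over library partitions.

```python
from collections import defaultdict

def map_score(partition):
    """Map task: score every experimental spectrum against one partition of
    the reference library, emitting (spectrum_id, (score, ref_id)) pairs."""
    for spec_id, spec in partition["spectra"]:
        for ref_id, ref in partition["library"]:
            score = sum(spec.get(mz, 0.0) * inten for mz, inten in ref.items())
            yield spec_id, (score, ref_id)

def reduce_best(pairs):
    """Reduce task: keep the best-scoring library match per spectrum."""
    best = defaultdict(lambda: (float("-inf"), None))
    for spec_id, scored in pairs:
        if scored > best[spec_id]:
            best[spec_id] = scored
    return {sid: ref for sid, (_, ref) in best.items()}
```

    Because each library partition is scored independently and the reduce step only keeps maxima, the work parallelizes cleanly across a cluster, which is the source of the weeks-to-hours speedup reported above.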

  10. The Canadian Hydrological Model (CHM): A multi-scale, variable-complexity hydrological model for cold regions

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multiple-objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolution where spatial variability is low and fine resolution where required. Model uncertainty is reduced by lessening the number of computational elements required relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.

  11. Outsourcing Library Technical Services. A How-To-Do-It Manual for Librarians. How-To-Do-It Manuals for Librarians, Number 69.

    ERIC Educational Resources Information Center

    Hirshon, Arnold; Winters, Barbara

    In the effort to reduce costs, improve productivity, enhance quality of services, and improve turnaround time for ordering, receiving, and cataloging new materials, libraries are increasingly turning to outsourcing as a strategic management tool to help them maximize use of their fiscal and human resources. This guide covers all aspects of…

  12. DMCA, CTEA, UCITA ... Oh My! An Overview of Copyright Law and Its Impact on Library Acquisitions and Collection Development of Electronic Resources

    ERIC Educational Resources Information Center

    Lee, Leslie A.; Wu, Michelle M.

    2007-01-01

    The purpose of traditional copyright law was to encourage the creation of works based on and to ensure reasonable access to original thought. Despite this harmonious intent, an intrinsic tension exists between libraries and copyright holders, as the former promotes "free" access to information that ultimately reduces the income of the…

  13. Reducing Undesirable Behaviors. Working with Behavioral Disorders: CEC Mini-Library.

    ERIC Educational Resources Information Center

    Polsgrove, Lewis, Ed.

    This booklet reviews the literature and offers procedures to reduce undesirable behavior in school settings. The following topics are addressed: definition of terms relating to behavior reduction procedures; environmental modification (changing the demands of a task, reducing the complexity of each step, or teaching a new skill); differential…

  14. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matters undergoes dramatic changes. In particular, the image contrast inverses around 6–8 months of age, where the white and gray matter tissues are isointense in T1 and T2 weighted images and hence exhibit the extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporate the anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as other 10 unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615

  15. Meaning in the avian auditory cortex: Neural representation of communication calls

    PubMed Central

    Elie, Julie E; Theunissen, Frédéric E

    2014-01-01

    Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation have yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls into 9 categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based on both spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understanding vocal communication. PMID:25728175
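
    The decoding analysis can be sketched with a toy nearest-mean decoder on simulated Poisson spike counts (the firing rates and the decoder here are illustrative assumptions; the paper's optimal decoder also exploits spike patterns, not just counts):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cat, trials = 9, 20                       # 9 call categories, 20 trials each
means = np.linspace(2, 42, n_cat)           # assumed mean spike count per category
counts = rng.poisson(means, size=(trials, n_cat))

# Leave-one-out nearest-class-mean decoding on spike counts
templates = counts.mean(axis=0)
correct = 0
for c in range(n_cat):
    for t in range(trials):
        tmpl = templates.copy()
        tmpl[c] = (counts[:, c].sum() - counts[t, c]) / (trials - 1)  # exclude test trial
        pred = int(np.argmin(np.abs(counts[t, c] - tmpl)))
        correct += pred == c
acc = correct / (n_cat * trials)
print("decoding accuracy:", round(acc, 3), "vs chance:", round(1 / n_cat, 3))
```

    With well-separated firing rates the accuracy sits well above the 1/9 chance level, and category pairs with similar rates account for most confusions, mirroring the discrimination and selectivity analysis described above.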

  16. Reward Selectively Modulates the Lingering Neural Representation of Recently Attended Objects in Natural Scenes.

    PubMed

    Hickey, Clayton; Peelen, Marius V

    2017-08-02

    Theories of reinforcement learning and approach behavior suggest that reward can increase the perceptual salience of environmental stimuli, ensuring that potential predictors of outcome are noticed in the future. However, outcome commonly follows visual processing of the environment, occurring even when potential reward cues have long disappeared. How can reward feedback retroactively cause now-absent stimuli to become attention-drawing in the future? One possibility is that reward and attention interact to prime lingering visual representations of attended stimuli that sustain through the interval separating stimulus and outcome. Here, we test this idea using multivariate pattern analysis of fMRI data collected from male and female humans. While in the scanner, participants searched for examples of target categories in briefly presented pictures of cityscapes and landscapes. Correct task performance was followed by reward feedback that could randomly have either high or low magnitude. Analysis showed that high-magnitude reward feedback boosted the lingering representation of target categories while reducing the representation of nontarget categories. The magnitude of this effect in each participant predicted the behavioral impact of reward on search performance in subsequent trials. Other analyses show that sensitivity to reward (as expressed in a personality questionnaire and in reactivity to reward feedback in the dopaminergic midbrain) predicted reward-elicited variance in lingering target and nontarget representations. Credit for rewarding outcome thus appears to be assigned to the target representation, causing the visual system to become sensitized for similar objects in the future. SIGNIFICANCE STATEMENT How do reward-predictive visual stimuli become salient and attention-drawing? In the real world, reward cues precede outcome and reward is commonly received long after potential predictors have disappeared. 
How can the representation of environmental stimuli be affected by outcome that occurs later in time? Here, we show that reward acts on lingering representations of environmental stimuli that sustain through the interval between stimulus and outcome. Using naturalistic scene stimuli and multivariate pattern analysis of fMRI data, we show that reward boosts the representation of attended objects and reduces the representation of unattended objects. This interaction of attention and reward processing acts to prime vision for stimuli that may serve to predict outcome. Copyright © 2017 the authors 0270-6474/17/377297-08$15.00/0.

  17. An analysis of packaging formats for complex digital objects: review of principles

    NASA Astrophysics Data System (ADS)

    Bekaert, Jeroen L.; Hochstenbach, Patrick; De Kooning, Emiel; Van de Walle, Rik

    2003-11-01

    During recent years, the number of organizations making digital information available has increased massively. This evolution encouraged the development of standards for packaging and encoding digital representations of complex objects (such as digital music albums or digitized books and photograph albums). The primary goal of this article is to offer a method to compare these packaging standards and best practices tailored to the needs of the digital library community and emerging digital preservation programs. The contribution of this paper is the definition of an integrated reference model, based on both the OAIS framework and some additional significant properties that affect the quality, usability, encoding and behavior of the digital objects.

  18. Library management in the tight budget seventies. Problems, challenges, and opportunities.

    PubMed Central

    White, H S

    1977-01-01

    This paper examines changes in the management of university, special, and medical libraries brought about by the budget curtailments that followed the more affluent funding period of the mid-1960s. Based on a study conducted for the National Science Foundation by the Indiana University Graduate Library School, this paper deals with misconceptions that have arisen in the relationship between publishers and librarians, and differentiates between the priority perceptions of academic and of special librarians in the allocation of progressively scarcer resources. It concludes that libraries must make strong efforts to reduce the growing erosion of materials acquisitions budgets because of growing labor costs as a percentage of all library expenditures; that they must make a working reality of the resource-sharing mechanisms established through consortia and networks; and that they must use advanced evaluative techniques in the determination of which services and programs to implement, expand, and retain, and which to curtail and abandon. PMID:831887

  19. Evaluating a normalized conceptual representation produced from natural language patient discharge summaries.

    PubMed Central

    Zweigenbaum, P.; Bouaud, J.; Bachimont, B.; Charlet, J.; Boisvieux, J. F.

    1997-01-01

    The Menelas project aimed to produce a normalized conceptual representation from natural language patient discharge summaries. Because of the complex and detailed nature of conceptual representations, evaluating the quality of the output of such a system is difficult. We present the method designed to measure the quality of Menelas output, and its application to the state of the French Menelas prototype as of the end of the project. We examine this method in the framework recently proposed by Friedman and Hripcsak. We also propose two conditions that make it possible to reduce the evaluation preparation workload. PMID:9357694

  20. Analysis of thin plates with holes by using exact geometrical representation within XFEM.

    PubMed

    Perumal, Logah; Tso, C P; Leng, Lim Thong

    2016-05-01

    This paper presents an analysis of thin plates with holes within the context of XFEM. New integration techniques are developed for exact geometrical representation of the holes. Numerical and exact integration techniques are presented, with some limitations noted for the exact integration technique. Simulation results show that the proposed techniques help to reduce the solution error, due to the exact geometrical representation of the holes and the utilization of appropriate quadrature rules. A discussion of the minimum integration order needed to achieve good accuracy and convergence with the presented techniques is also included.

  1. Development and Dissemination of the El Centro Health Disparities Measures Library.

    PubMed

    Mitrani, Victoria Behar; O'Day, Joanne E; Norris, Timothy B; Adebayo, Oluwamuyiwa Winifred

    2017-09-01

    This report describes the development and dissemination of a library of English measures, with Spanish translations, on constructs relevant to social determinants of health and behavioral health outcomes. The El Centro Measures Library is a product of the Center of Excellence for Health Disparities Research: El Centro, a program funded by the National Institute on Minority Health and Health Disparities of the U.S. National Institutes of Health. The library is aimed at enhancing capacity for minority health and health disparities research, particularly for Hispanics living in the United States and abroad. The open-access library of measures (available through www.miami.edu/sonhs/measureslibrary) contains brief descriptions of each measure, scoring information (where available), links to related peer-reviewed articles, and measure items in both languages. Links to measure websites where commercially available measures can be purchased are included, as is contact information for measures that require author permission. Links to several other measures libraries are hosted on the library website. Other researchers may contribute to the library. El Centro investigators began the library by electing to use a common set of measures across studies to assess demographic information, culture-related variables, proximal outcomes of interest, and major outcomes. The collection was expanded to include other health disparity research studies. In 2012, a formal process was developed to organize, expand, and centralize the library in preparation for a gradual process of dissemination to the national and international community of researchers. The library currently contains 61 measures encompassing 12 categories of constructs. Thus far, the library has been accessed 8,883 times (unique page views as generated by Google Analytics), and responses from constituencies of users and measure authors have been favorable. 
Given the paucity of available and accessible translated measures, behavioral nursing research focused on reducing health disparities can benefit from repositories of research instruments such as the El Centro Measures Library. © 2017 Sigma Theta Tau International.

  2. Turning negative memories around: Contingency versus devaluation techniques.

    PubMed

    Dibbets, Pauline; Lemmens, Anke; Voncken, Marisol

    2018-09-01

    It is assumed that fear responses can be altered by changing the contingency between a conditioned stimulus (CS) and an unconditioned stimulus (US), or by devaluing the present mental representation of the US. The aim of the present study was to compare the efficacy of contingency- and devaluation-based intervention techniques on the diminishment in, and return of, fear. We hypothesized that extinction (EXT, contingency-based) would outperform devaluation-based techniques regarding contingency measures, but that devaluation-based techniques would be most effective in reducing the mental representation of the US. Additionally, we expected that incorporations of the US during devaluation would result in less reinstatement of the US averseness. Healthy participants received a fear conditioning paradigm followed by one of three interventions: extinction (EXT, contingency-based), imagery rescripting (ImRs, devaluation-based) or eye movement desensitization and reprocessing (EMDR, devaluation-based). A reinstatement procedure and test followed the next day. EXT was indeed most successful in diminishing contingency-based US expectancies and skin conductance responses (SCRs), but all interventions were equally successful in reducing the averseness of the mental US representation. After reinstatement EXT showed the lowest expectancies and SCRs; no differences were observed between the conditions concerning the mental US representation. A partial reinforcement schedule was used, resulting in a large number of contingency-unaware participants. Additionally, a non-clinical sample was used, which may limit the generalizability to clinical populations. EXT is most effective in reducing conditioned fear responses. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Synthesis of high-quality libraries of long (150mer) oligonucleotides by a novel depurination controlled process

    PubMed Central

    LeProust, Emily M.; Peck, Bill J.; Spirin, Konstantin; McCuen, Heather Brummel; Moore, Bridget; Namsaraev, Eugeni; Caruthers, Marvin H.

    2010-01-01

    We have achieved the ability to synthesize thousands of unique, long oligonucleotides (150mers) in fmol amounts using parallel synthesis of DNA on microarrays. The sequence accuracy of the oligonucleotides in such large-scale syntheses has been limited by the yields and side reactions of the DNA synthesis process used. While there has been significant demand for libraries of long oligos (150mer and more), the yields in conventional DNA synthesis and the associated side reactions have previously limited the availability of oligonucleotide pools to lengths <100 nt. Using novel array based depurination assays, we show that the depurination side reaction is the limiting factor for the synthesis of libraries of long oligonucleotides on Agilent Technologies’ SurePrint® DNA microarray platform. We also demonstrate how depurination can be controlled and reduced by a novel detritylation process to enable the synthesis of high quality, long (150mer) oligonucleotide libraries and we report the characterization of synthesis efficiency for such libraries. Oligonucleotide libraries prepared with this method have changed the economics and availability of several existing applications (e.g. targeted resequencing, preparation of shRNA libraries, site-directed mutagenesis), and have the potential to enable even more novel applications (e.g. high-complexity synthetic biology). PMID:20308161

  4. Expressed sequence tag analysis of adult human optic nerve for NEIBank: Identification of cell type and tissue markers

    PubMed Central

    Bernstein, Steven L; Guo, Yan; Peterson, Katherine; Wistow, Graeme

    2009-01-01

    Background: The optic nerve is a pure white matter central nervous system (CNS) tract with an isolated blood supply, and is widely used in physiological studies of white matter response to various insults. We examined the gene expression profile of human optic nerve (ON) and, through the NEIBank online resource, provide a collection of sequence-verified cDNA clones. An un-normalized cDNA library was constructed from pooled human ON tissues and was used in expressed sequence tag (EST) analysis. Location of an abundant oligodendrocyte marker was examined by immunofluorescence. Quantitative real-time polymerase chain reaction (qRT-PCR) and Western analysis were used to compare levels of expression for key calcium channel genes and their protein products in primate and rodent ON. Results: Our analyses revealed a profile similar in many respects to other white matter related tissues, but significantly different from previously available ON cDNA libraries. The previous libraries were found to include specific markers for other eye tissues, suggesting contamination. Immune/inflammatory markers were abundant in the new ON library. The oligodendrocyte marker QKI was abundant at the EST level. Immunofluorescence revealed that this protein is a useful oligodendrocyte cell-type marker in rodent and primate ONs. L-type calcium channel EST abundance was found to be particularly low. A qRT-PCR-based comparative mammalian species analysis reveals that L-type calcium channel expression levels are significantly lower in primate than in rodent ON, which may help account for the class-specific difference in responsiveness to calcium channel blocking agents. Several known eye disease genes are abundantly expressed in ON. Many genes associated with normal axonal function, mRNAs associated with axonal transport, inflammation and neuroprotection are observed. 
Conclusion: We conclude that the new cDNA library is a faithful representation of human ON and that the EST data provide an initial overview of gene expression patterns in this tissue. The data provide clues to tissue-specific and species-specific properties of human ON that will help in the design of therapeutic models. PMID:19778450

  5. Evidence of accelerated evolution and ectodermal-specific expression of presumptive BDS toxin cDNAs from Anemonia viridis.

    PubMed

    Nicosia, Aldo; Maggio, Teresa; Mazzola, Salvatore; Cuttitta, Angela

    2013-10-30

    Anemonia viridis is a widespread and extensively studied Mediterranean species of sea anemone from which a large number of polypeptide toxins, such as blood depressing substances (BDS) peptides, have been isolated. The first members of this class, BDS-1 and BDS-2, are polypeptides belonging to the β-defensin fold family and were initially described for their antihypertensive and antiviral activities. BDS-1 and BDS-2 are 43 amino acid peptides characterised by three disulfide bonds that act as neurotoxins affecting Kv3.1, Kv3.2 and Kv3.4 channel gating kinetics. In addition, BDS-1 inactivates the Nav1.7 and Nav1.3 channels. The development of a large dataset of A. viridis expressed sequence tags (ESTs) and the identification of 13 putative BDS-like cDNA sequences have attracted interest, especially as scientific and diagnostic tools. A comparison of BDS cDNA sequences showed that the untranslated regions are more conserved than the protein-coding regions. Moreover, the KA/KS ratios calculated for all pairwise comparisons showed values greater than 1, suggesting mechanisms of accelerated evolution. The structures of the BDS homologs were predicted by molecular modelling. All toxins possess similar 3D structures that consist of a triple-stranded antiparallel β-sheet and an additional small antiparallel β-sheet located downstream of the cleavage/maturation site; however, the orientation of the triple-stranded β-sheet appears to differ among the toxins. To characterise the spatial expression profile of the putative BDS cDNA sequences, tissue-specific cDNA libraries, enriched for BDS transcripts, were constructed. In addition, the proper amplification of ectodermal or endodermal markers ensured the tissue specificity of each library. 
Sequencing randomly selected clones from each library revealed ectodermal-specific expression of ten BDS transcripts, while transcripts of BDS-8, BDS-13, BDS-14 and BDS-15 failed to be retrieved, likely due to under-representation in our cDNA libraries. The calculation of the relative abundance of BDS transcripts in the cDNA libraries revealed that BDS-1, BDS-3, BDS-4, BDS-5 and BDS-6 are the most represented transcripts.

  6. [Social representations of TB patients on treatment discontinuation].

    PubMed

    Chirinos, Narda Estela Calsin; Meirelles, Betina Hörner Schlindwein; Bousfield, Andréa Barbará Silva

    2015-01-01

    To understand the social representations of people with TB who discontinued treatment in a Program of Tuberculosis Control, a descriptive study with a qualitative approach was conducted in the city of Lima, Peru. Data were collected from October to November 2012 through semi-structured interviews with eight individuals and analysed by thematic content analysis. The categories led to the construction of the social representation that the disease and the treatment bring suffering. This representation influences non-adherence to treatment and may increase the rates of treatment discontinuation. Educational strategies linked to social interaction processes, to subjectivity and to the patient context are needed to reduce the rates of discontinuation of tuberculosis treatment, relapses and multi-drug resistance. The evaluations point to new challenges that must be faced to achieve the Millennium Development Goals.

  7. Action Sounds Modulate Arm Reaching Movements

    PubMed Central

    Tajadura-Jiménez, Ana; Marquardt, Torsten; Swapp, David; Kitagawa, Norimichi; Bianchi-Berthouze, Nadia

    2016-01-01

    Our mental representations of our body are continuously updated through multisensory bodily feedback as we move and interact with our environment. Although it is often assumed that these internal models of body-representation are used to successfully act upon the environment, only a few studies have actually looked at how body-representation changes influence goal-directed actions, and none have looked at this in relation to body-representation changes induced by sound. The present work examines this question for the first time. Participants reached for a target object before and after adaptation periods during which the sounds produced by their hand tapping a surface were spatially manipulated to induce a representation of an elongated arm. After adaptation, participants’ reaching movements were performed in a way consistent with having a longer arm, in that their reaching velocities were reduced. These kinematic changes suggest auditory-driven recalibration of the somatosensory representation of the arm morphology. These results provide support to the hypothesis that one’s represented body size is used as a perceptual ruler to measure objects’ distances and to accordingly guide bodily actions. PMID:27695430

  8. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    PubMed

    Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  9. Enhancing Access to Reading Materials in Academic Libraries with Low Budgets Using a Book Bank System: Makerere University Library Experience

    ERIC Educational Resources Information Center

    Byamugisha, Helen M.

    2018-01-01

    Most universities are enrolling large numbers of students amidst dwindling budgets. This leads to reduced book-to-student ratios. Makerere University started a Book Bank system to ensure availability of basic text books to students. The aim of this paper was to assess whether the Book Bank system was a viable strategy for enhancing access to…

  10. Affine.m—Mathematica package for computations in representation theory of finite-dimensional and affine Lie algebras

    NASA Astrophysics Data System (ADS)

    Nazarov, Anton

    2012-11-01

    In this paper we present Affine.m, a program for computations in representation theory of finite-dimensional and affine Lie algebras, and describe the implemented algorithms. The algorithms are based on the properties of weights and Weyl symmetry. Computation of weight multiplicities in irreducible and Verma modules, branching of representations and tensor product decomposition are the most important problems for us. These problems have numerous applications in physics and we provide some examples of these applications. The program is implemented in the popular computer algebra system Mathematica and works with finite-dimensional and affine Lie algebras. Catalogue identifier: AENA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENB_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24 844 No. of bytes in distributed program, including test data, etc.: 1 045 908 Distribution format: tar.gz Programming language: Mathematica. Computer: i386-i686, x86_64. Operating system: Linux, Windows, Mac OS, Solaris. RAM: 5-500 Mb Classification: 4.2, 5. Nature of problem: Representation theory of finite-dimensional Lie algebras has many applications in different branches of physics, including elementary particle physics, molecular physics, nuclear physics. Representations of affine Lie algebras appear in string theories and two-dimensional conformal field theory used for the description of critical phenomena in two-dimensional systems. Also Lie symmetries play a major role in the study of quantum integrable systems. Solution method: We work with weights and roots of finite-dimensional and affine Lie algebras and use Weyl symmetry extensively. 
The central problems, namely the computation of weight multiplicities and of branching and fusion coefficients, are solved using one general recurrent algorithm based on a generalization of the Weyl character formula. We also offer an alternative implementation based on the Freudenthal multiplicity formula, which can be faster in some cases. Restrictions: Computational complexity grows fast with the rank of an algebra, so computations for algebras of rank greater than 8 are not practical. Unusual features: We offer the possibility of using traditional mathematical notation for the objects of representation theory of Lie algebras in computations if Affine.m is used in the Mathematica notebook interface. Running time: From seconds to days depending on the rank of the algebra and the complexity of the representation.
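
    For the rank-one case, the tensor product decomposition handled in general by Affine.m reduces to the classical Clebsch-Gordan rule for sl(2). The following sketch (in Python rather than Mathematica, and unrelated to Affine.m's actual code) labels irreducibles by highest weight:

```python
def sl2_tensor(m, n):
    """Clebsch-Gordan for sl(2): V_m x V_n = sum of V_k over
    k = |m-n|, |m-n|+2, ..., m+n, where V_k is the irreducible
    of highest weight k (dimension k + 1)."""
    return list(range(abs(m - n), m + n + 1, 2))

parts = sl2_tensor(3, 2)            # decompose V_3 x V_2
dims = [k + 1 for k in parts]
print(parts, dims)                  # [1, 3, 5] [2, 4, 6]
```

    As a sanity check, the dimensions add up: 2 + 4 + 6 = 12 = dim(V_3) * dim(V_2) = 4 * 3. For higher-rank and affine algebras no such closed-form list exists, which is why the program relies on the recurrent Weyl-symmetry algorithm described above.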

  11. MagnaportheDB: a federated solution for integrating physical and genetic map data with BAC end derived sequences for the rice blast fungus Magnaporthe grisea.

    PubMed

    Martin, Stanton L; Blackmon, Barbara P; Rajagopalan, Ravi; Houfek, Thomas D; Sceeles, Robert G; Denn, Sheila O; Mitchell, Thomas K; Brown, Douglas E; Wing, Rod A; Dean, Ralph A

    2002-01-01

    We have created a federated database for genome studies of Magnaporthe grisea, the causal agent of rice blast disease, by integrating end sequence data from BAC clones, genetic marker data and BAC contig assembly data. A library of 9216 BAC clones providing >25-fold coverage of the entire genome was end sequenced and fingerprinted by HindIII digestion. The Image/FPC software package was then used to generate an assembly of 188 contigs covering >95% of the genome. The database contains the results of this assembly integrated with hybridization data of genetic markers to the BAC library. AceDB was used for the core database engine, and a MySQL relational database, populated with numerical representations of BAC clones within FPC contigs, was used to create appropriately scaled images. The database is being used to facilitate sequencing efforts. It also gives researchers mapping known genes or other sequences of interest rapid and easy access to the fundamental organization of the M. grisea genome. This database, MagnaportheDB, can be accessed on the web at http://www.cals.ncsu.edu/fungal_genomics/mgdatabase/int.htm.

  12. BiNChE: a web tool and library for chemical enrichment analysis based on the ChEBI ontology.

    PubMed

    Moreno, Pablo; Beisken, Stephan; Harsha, Bhavana; Muthukrishnan, Venkatesh; Tudose, Ilinca; Dekker, Adriano; Dornfeldt, Stefanie; Taruttis, Franziska; Grosse, Ivo; Hastings, Janna; Neumann, Steffen; Steinbeck, Christoph

    2015-02-21

    Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, there are only a few that can be used for small-molecule enrichment analysis. We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI Ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology. BiNChE aids in the exploration of large sets of small molecules produced within Metabolomics or other Systems Biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology, and is additionally available as a standalone library.
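
    The core statistic behind such over-representation tests is typically a hypergeometric tail probability. A minimal sketch with toy numbers (BiNChE's actual statistics, weighting, and corrections are described in the paper): given N annotated entities overall, K carrying a particular ontology class, and a study set of n entities of which k carry the class, the enrichment p-value is P(X >= k).

```python
from math import comb

def enrichment_p(N, K, n, k):
    """Hypergeometric upper tail P(X >= k): probability of drawing at least k
    class members when sampling n entities from N, of which K have the class."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy example: 1000 compounds, 50 annotated with a given ChEBI role,
# and a 20-compound study set containing 6 of them (expected by chance: 1)
p = enrichment_p(1000, 50, 20, 6)
print(f"p = {p:.2e}")
```

    In practice the test is run for every class in the ontology hierarchy, so a multiple-testing correction over the tested classes is also required.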

  13. Increasing High School Students' Chemistry Performance and Reducing Cognitive Load through an Instructional Strategy Based on the Interaction of Multiple Levels of Knowledge Representation

    ERIC Educational Resources Information Center

    Milenković, Dušica D.; Segedinac, Mirjana D.; Hrin, Tamara N.

    2014-01-01

    The central goal of this study was to examine the extent to which a teaching approach focused on the interaction between macroscopic, submicroscopic, and symbolic levels of chemistry representations could affect high school students' performance in the field of inorganic reactions, as well as to examine how the applied instruction influences…

  14. The Social Representations of Students on the Assessment of Universities' Quality: The Influence of Market- and Managerialism-Driven Discourse

    ERIC Educational Resources Information Center

    Cardoso, Sonia; Santiago, Rui; Sarrico, Claudia S.

    2012-01-01

    Although students are considered major actors in the quality assessment of universities, the way they perceive this process and the meanings they ascribe to it are still neglected as a research subject. This article aims to reduce this gap by focusing on the social representations of students on quality assessment. Specifically, it tries to…

  15. Use of Data Libraries for IAEA Nuclear Security Assessment Methodologies (NUSAM) [section 5.4]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shull, D.; Lane, M.

    2015-06-23

    Data libraries are essential for the characterization of the facility and provide the documented input which enables the facility assessment results and subsequent conclusions. Data libraries are historical, verifiable, quantified, and applicable collections of testing data on different types of barriers, sensors, cameras, procedures, and/or personnel. Data libraries are developed and maintained as part of any assessment program or process. Data is collected during the initial stages of facility characterization to aid in the model and/or simulation development process. Data library values may also be developed through the use of state testing centers and/or site resources by testing different types of barriers, sensors, cameras, procedures, and/or personnel. If no data exists, subject matter expert opinion and manufacturer's specifications/testing values can be the basis for initially assigning values, but these are generally less reliable and lack appropriate confidence measures. The use of existing data libraries that have been developed by a state testing organization reduces assessment costs by establishing standard delay, detection and assessment values for use by multiple sites or facilities where common barriers and alarm systems exist.

  16. Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors.

    PubMed

    Wang, Quan; Birod, Kerstin; Angioni, Carlo; Grösch, Sabine; Geppert, Tim; Schneider, Petra; Rupp, Matthias; Schneider, Gisbert

    2011-01-01

    Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
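    The key property exploited by such shape descriptors is that the norm of the expansion coefficients at each degree l is invariant under rotations of the sphere. A minimal sketch of that idea (grid sizes, truncation degree, and function names are assumptions for illustration, not the authors' implementation):

```python
import numpy as np
from scipy.special import sph_harm

def sh_degree_norms(f, l_max=4, n_theta=32, n_phi=64):
    """Per-degree norms of the spherical harmonics expansion of f(theta, phi).

    The vector (||c_0||, ..., ||c_lmax||), where c_l collects the coefficients
    c_lm for fixed degree l, is invariant under rotations of the sphere.
    """
    # Gauss-Legendre nodes in cos(theta) give exact quadrature in the polar angle.
    x, w = np.polynomial.legendre.leggauss(n_theta)
    theta = np.arccos(x)                              # polar angle samples
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    F = f(T, P)
    dphi = 2 * np.pi / n_phi
    norms = []
    for l in range(l_max + 1):
        s = 0.0
        for m in range(-l, l + 1):
            # scipy's sph_harm takes (m, l, azimuthal angle, polar angle)
            Y = sph_harm(m, l, P, T)
            c_lm = np.sum(F * np.conj(Y) * w[:, None]) * dphi
            s += abs(c_lm) ** 2
        norms.append(np.sqrt(s))
    return np.array(norms)

# A pure degree-1 harmonic: only the l = 1 entry should be (numerically) nonzero.
norms = sh_degree_norms(lambda t, p: np.cos(t))  # cos(theta) is proportional to Y_1^0
```

    Comparing two shapes then reduces to comparing their norm vectors, which sidesteps the need to align their orientations first.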

  17. The multifacet graphically contracted function method. II. A general procedure for the parameterization of orthogonal matrices and its application to arc factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    Practical algorithms are presented for the parameterization of orthogonal matrices Q ∈ R^(m×n) in terms of the minimal number of essential parameters (φ). Both square n = m and rectangular n < m situations are examined. Two separate kinds of parameterizations are considered, one in which the individual columns of Q are distinct, and the other in which only Span(Q) is significant. The latter is relevant to chemical applications such as the representation of the arc factors in the multifacet graphically contracted function method and the representation of orbital coefficients in SCF and DFT methods. The parameterizations are represented formally using products of elementary Householder reflector matrices. Standard mathematical libraries, such as LAPACK, may be used to perform the basic low-level factorization, reduction, and other algebraic operations. Some care must be taken with the choice of phase factors in order to ensure stability and continuity. The transformation of gradient arrays between the Q and (φ) parameterizations is also considered. Operation counts for all factorizations and transformations are determined. Numerical results are presented which demonstrate the robustness, stability, and accuracy of these algorithms.
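    The Householder-product idea can be sketched in a few lines (standard QR conventions; this does not reproduce the paper's specific phase-factor handling or LAPACK calls): a matrix Q with orthonormal columns is reduced to n reflector vectors plus n signs, from which Q is rebuilt exactly:

```python
import numpy as np

def householder_factor(Q):
    """Factor a matrix with orthonormal columns into Householder reflectors.

    Returns reflector vectors v_k and signs s_k such that
    H_1 H_2 ... H_n [diag(s); 0] reproduces Q, with H_k = I - 2 v_k v_k^T.
    Each unit vector v_k carries m - k - 1 essential parameters.
    """
    m, n = Q.shape
    A = Q.copy()
    vs = []
    for k in range(n):
        x = A[k:, k].copy()
        nx = np.linalg.norm(x)
        alpha = -nx if x[0] >= 0 else nx   # sign choice avoids cancellation
        v = x.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)
        # Apply the reflector to the trailing submatrix.
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
        vs.append(v)
    signs = np.sign(np.diag(A)[:n])        # for orthonormal Q, R = [diag(+-1); 0]
    return vs, signs

def householder_rebuild(vs, signs, m):
    """Reconstruct Q from its reflector vectors and signs."""
    n = len(vs)
    Q = np.zeros((m, n))
    Q[:n, :n] = np.diag(signs)
    for k in reversed(range(n)):
        v = vs[k]
        Q[k:, :] -= 2.0 * np.outer(v, v @ Q[k:, :])
    return Q
```

    The round trip is exact to machine precision, which is what makes the reflector vectors usable as the essential parameters (φ) of the orthogonal matrix.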

  18. Medical Concept Normalization in Social Media Posts with Recurrent Neural Networks.

    PubMed

    Tutubalina, Elena; Miftahutdinov, Zulfat; Nikolenko, Sergey; Malykh, Valentin

    2018-06-12

    Text mining of scientific libraries and social media has already proven itself as a reliable tool for drug repurposing and hypothesis generation. The task of mapping a disease mention to a concept in a controlled vocabulary, typically to the standard thesaurus in the Unified Medical Language System (UMLS), is known as medical concept normalization. This task is challenging due to the differences in the use of medical terminology between health care professionals and social media texts coming from the lay public. To bridge this gap, we use sequence learning with recurrent neural networks and semantic representation of one- or multi-word expressions: we develop end-to-end architectures directly tailored to the task, including bidirectional Long Short-Term Memory, Gated Recurrent Units with an attention mechanism, and additional semantic similarity features based on UMLS. Our evaluation against a standard benchmark shows that recurrent neural networks improve results over an effective baseline for classification based on convolutional neural networks. A qualitative examination of mentions discovered in a dataset of user reviews collected from popular online health information platforms as well as a quantitative evaluation both show improvements in the semantic representation of health-related expressions in social media. Copyright © 2018. Published by Elsevier Inc.

  19. MoleculeNet: a benchmark for molecular machine learning

    PubMed Central

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N.; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S.; Leswing, Karl

    2017-01-01

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm. PMID:29629118

  20. Diversity of bacteria and glycosyl hydrolase family 48 genes in cellulolytic consortia enriched from thermophilic biocompost.

    PubMed

    Izquierdo, Javier A; Sizova, Maria V; Lynd, Lee R

    2010-06-01

    The enrichment from nature of novel microbial communities with high cellulolytic activity is useful in the identification of novel organisms and novel functions that enhance the fundamental understanding of microbial cellulose degradation. In this work we identify predominant organisms in three cellulolytic enrichment cultures with thermophilic compost as an inoculum. Community structure based on 16S rRNA gene clone libraries featured extensive representation of clostridia from cluster III, with minor representation of clostridial clusters I and XIV and a novel Lutispora species cluster. Our studies reveal different levels of 16S rRNA gene diversity, ranging from 3 to 18 operational taxonomic units (OTUs), as well as variability in community membership across the three enrichment cultures. By comparison, glycosyl hydrolase family 48 (GHF48) diversity analyses revealed a narrower breadth of novel clostridial genes associated with cultured and uncultured cellulose degraders. The novel GHF48 genes identified in this study were related to the novel clostridia Clostridium straminisolvens and Clostridium clariflavum, with one cluster sharing as little as 73% sequence similarity with the closest known relative. In all, 14 new GHF48 gene sequences were added to the known diversity of 35 genes from cultured species.

  1. Bacterial community composition in different sediments from the Eastern Mediterranean Sea: a comparison of four 16S ribosomal DNA clone libraries.

    PubMed

    Polymenakou, Paraskevi N; Bertilsson, Stefan; Tselepides, Anastasios; Stephanou, Euripides G

    2005-10-01

    The regional variability of sediment bacterial community composition and diversity was studied by comparative analysis of four large 16S ribosomal DNA (rDNA) clone libraries from sediments in different regions of the Eastern Mediterranean Sea (Thermaikos Gulf, Cretan Sea, and South Ionian Sea). Amplified rDNA restriction analysis of 664 clones from the libraries indicated that the rDNA richness and evenness were high: for example, a near-1:1 relationship among screened clones and number of unique restriction patterns when up to 190 clones were screened for each library. Phylogenetic analysis of 207 bacterial 16S rDNA sequences from the sediment libraries demonstrated that Gamma-, Delta-, and Alphaproteobacteria, Holophaga/Acidobacteria, Planctomycetales, Actinobacteria, Bacteroidetes, and Verrucomicrobia were represented in all four libraries. A few clones also grouped with the Betaproteobacteria, Nitrospirae, Spirochaetales, Chlamydiae, Firmicutes, and candidate division OP11. The abundance of sequences affiliated with Gammaproteobacteria was higher in libraries from shallow sediments in the Thermaikos Gulf (30 m) and the Cretan Sea (100 m) compared to the deeper South Ionian station (2790 m). Most sequences in the four sediment libraries clustered with uncultured 16S rDNA phylotypes from marine habitats, and many of the closest matches were clones from hydrocarbon seeps, benzene-mineralizing consortia, sulfate reducers, sulfur oxidizers, and ammonia oxidizers. LIBSHUFF statistics of 16S rDNA gene sequences from the four libraries revealed major differences, indicating either a very high richness in the sediment bacterial communities or considerable variability in bacterial community composition among regions, or both.

  2. Algorithm 699 - A new representation of Patterson's quadrature formulae

    NASA Technical Reports Server (NTRS)

    Krogh, Fred T.; Van Snyder, W.

    1991-01-01

    A method is presented to reduce the number of coefficients necessary to represent Patterson's quadrature formulae. It also reduces the amount of storage needed for function values, and produces a slightly smaller error in evaluating the formulae.
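    The coefficient-reduction idea rests on the symmetry of the quadrature nodes about the origin: only the nonnegative nodes and their weights need be stored. A sketch using the 3-point Gauss-Legendre rule as a stand-in (the nodes and weights below are not Patterson's coefficients):

```python
from math import sqrt

def integrate_symmetric(f, nodes_nonneg, weights_nonneg):
    """Evaluate a symmetric quadrature rule on [-1, 1] storing only the
    nonnegative nodes.  Because the rule is symmetric about 0, each stored
    weight serves both +x and -x; a node at 0 (if present) is counted once.
    """
    total = 0.0
    for x, w in zip(nodes_nonneg, weights_nonneg):
        if x == 0.0:
            total += w * f(0.0)
        else:
            total += w * (f(x) + f(-x))
    return total

# 3-point Gauss-Legendre rule, stored in halved form: 2 nodes instead of 3.
nodes = [0.0, sqrt(3.0 / 5.0)]
weights = [8.0 / 9.0, 5.0 / 9.0]
```

    The same halving applies to the stored function values: f(x) and f(-x) are paired at evaluation time rather than kept as separate coefficient entries.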

  3. LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis

    DTIC Science & Technology

    2016-05-23

    LLMapReduce works with several schedulers such as SLURM, Grid Engine and LSF. Keywords: LLMapReduce; map-reduce; performance; scheduler; Grid Engine; SLURM; LSF. ... Large scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1] ... interconnects [6]), high performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware, ...

  4. Risk Taking Under the Influence: A Fuzzy-Trace Theory of Emotion in Adolescence

    PubMed Central

    Rivers, Susan E.; Reyna, Valerie F.; Mills, Britain

    2008-01-01

    Fuzzy-trace theory explains risky decision making in children, adolescents, and adults, incorporating social and cultural factors as well as differences in impulsivity. Here, we provide an overview of the theory, including support for counterintuitive predictions (e.g., when adolescents “rationally” weigh costs and benefits, risk taking increases, but it decreases when the core gist of a decision is processed). Then, we delineate how emotion shapes adolescent risk taking—from encoding of representations of options, to retrieval of values/principles, to application of those values/principles to representations of options. Our review indicates that: (i) Gist representations often incorporate emotion including valence, arousal, feeling states, and discrete emotions; and (ii) Emotion determines whether gist or verbatim representations are processed. We recommend interventions to reduce unhealthy risk-taking that inculcate stable gist representations, enabling adolescents to identify danger quickly and automatically even when experiencing emotion, which differs sharply from traditional approaches emphasizing deliberation and precise analysis. PMID:19255597

  5. Unrealistic representations of "the self": A cognitive neuroscience assessment of anosognosia for memory deficit.

    PubMed

    Berlingeri, Manuela; Ravasio, Alessandra; Cranna, Silvia; Basilico, Stefania; Sberna, Maurizio; Bottini, Gabriella; Paulesu, Eraldo

    2015-12-01

    Three cognitive components may play a crucial role in both memory awareness and in anosognosia for memory deficit (AMD): (1) a personal data base (PDB), i.e., a memory store that contains "semantic" representations about the self, (2) monitoring processes (MPs) and (3) an explicit evaluation system (EES), or comparator, that assesses and binds the representations stored in the PDB with information obtained from the environment. We compared both the behavior and the functional connectivity (as assessed by resting-state fMRI) of AMD patients with aware patients and healthy controls. We found that AMD is associated with an impoverished PDB, while MPs are necessary to successfully update the PDB. AMD was associated with reduced functional connectivity within both the default-mode network and in a network that includes the left lateral temporal cortex, the hippocampus and the insula. The reduced connectivity between the hippocampus and the insular cortex was correlated with AMD severity. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Lesser Neural Pattern Similarity across Repeated Tests Is Associated with Better Long-Term Memory Retention.

    PubMed

    Karlsson Wirebring, Linnea; Wiklund-Hörnqvist, Carola; Eriksson, Johan; Andersson, Micael; Jonsson, Bert; Nyberg, Lars

    2015-07-01

    Encoding and retrieval processes enhance long-term memory performance. The efficiency of encoding processes has recently been linked to representational consistency: the reactivation of a representation that gets more specific each time an item is further studied. Here we examined the complementary hypothesis of whether the efficiency of retrieval processes also is linked to representational consistency. Alternatively, recurrent retrieval might foster representational variability--the altering or adding of underlying memory representations. Human participants studied 60 Swahili-Swedish word pairs before being scanned with fMRI the same day and 1 week later. On Day 1, participants were tested three times on each word pair, and on Day 7 each pair was tested once. A BOLD signal change in right superior parietal cortex was associated with subsequent memory on Day 1 and with successful long-term retention on Day 7. A representational similarity analysis in this parietal region revealed that beneficial recurrent retrieval was associated with representational variability, such that the pattern similarity on Day 1 was lower for retrieved words subsequently remembered compared with those subsequently forgotten. This was mirrored by a monotonically decreased BOLD signal change in dorsolateral prefrontal cortex on Day 1 as a function of repeated successful retrieval for words subsequently remembered, but not for words subsequently forgotten. This reduction in prefrontal response could reflect reduced demands on cognitive control. Collectively, the results offer novel insights into why memory retention benefits from repeated retrieval, and they suggest fundamental differences between repeated study and repeated testing. Repeated testing is known to produce superior long-term retention of the to-be-learned material compared with repeated encoding and other learning techniques, largely because it fosters repeated memory retrieval. This study demonstrates that repeated memory retrieval might strengthen memory by inducing more differentiated or elaborated memory representations in the parietal cortex, and at the same time reducing demands on prefrontal-cortex-mediated cognitive control processes during retrieval. The findings contrast with recent demonstrations that repeated encoding induces less differentiated or elaborated memory representations. Together, this study suggests a potential neurocognitive explanation of why repeated retrieval is more beneficial for long-term retention than repeated encoding, a phenomenon known as the testing effect. Copyright © 2015 the authors 0270-6474/15/359595-08$15.00/0.

  7. Wigner functions for nonparaxial, arbitrarily polarized electromagnetic wave fields in free space.

    PubMed

    Alonso, Miguel A

    2004-11-01

    New representations are defined for describing electromagnetic wave fields in free space exactly in terms of rays for any wavelength, level of coherence or polarization, and numerical aperture, as long as there are no evanescent components. These representations correspond to tensors assigned to each ray such that the electric and magnetic energy densities, the Poynting vector, and the polarization properties of the field correspond to simple integrals involving these tensors for the rays that go through the specified point. For partially coherent fields, the ray-based approach provided by the new representations can reduce dramatically the computation times for the physical properties mentioned earlier.

  8. Bounds on the polymer scale from gamma ray bursts

    NASA Astrophysics Data System (ADS)

    Bonder, Yuri; Garcia-Chung, Angel; Rastgoo, Saeed

    2017-11-01

    The polymer representations, which are partially motivated by loop quantum gravity, have been suggested as alternative schemes to quantize the matter fields. Here we apply a version of the polymer representations to the free electromagnetic field, in a reduced phase space setting, and derive the corresponding effective (i.e., semiclassical) Hamiltonian. We study the propagation of an electromagnetic pulse, and we confront our theoretical results with gamma ray burst observations. This comparison reveals that the dimensionless polymer scale must be smaller than 4 × 10^(-35), casting doubts on the possibility that the matter fields are quantized with the polymer representation we employed.

  9. Libraries of High and Mid-Resolution Spectra of F, G, K, and M Field Stars

    NASA Astrophysics Data System (ADS)

    Montes, D.

    1998-06-01

    I have compiled here the three libraries of high and mid-resolution optical spectra of late-type stars I have recently published. The libraries include F, G, K and M field stars, from dwarfs to giants. The spectral coverage is from 3800 to 10000 Å, with spectral resolution ranging from 0.09 to 3.0 Å. These spectra include many of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity. The spectra have been obtained with the aim of providing a library of high and mid-resolution spectra to be used in the study of active chromosphere stars by applying a spectral subtraction technique. However, the data set presented here can also be utilized in a wide variety of ways. A digital version of all the fully reduced spectra is available via FTP and the World Wide Web (WWW) in FITS format.

  10. Radiation Hard 0.13 Micron CMOS Library at IHP

    NASA Astrophysics Data System (ADS)

    Jagdhold, U.

    2013-08-01

    To support space applications we have developed a 0.13 micron CMOS library which should be radiation hard up to 200 krad. The article describes the concept for achieving a radiation-hard digital circuit, which was introduced in 2010 [1]. By introducing new radiation-hard design rules we minimize IC-level leakage and single event latch-up (SEL). To reduce single event upset (SEU) we add two p-MOS transistors to all flip-flops. For reliability reasons we use double contacts in all library elements. The additional rules and the library elements are integrated in our Cadence mixed-signal design kit, “Virtuoso” IC6.1 [2]. A test chip is produced with our in-house 0.13 micron BiCMOS technology, see Ref. [3]. As a next step we will perform radiation tests according to the European Space Agency (ESA) specifications, see Refs. [4], [5].

  11. VO-compliant libraries of high resolution spectra of cool stars

    NASA Astrophysics Data System (ADS)

    Montes, D.

    2008-10-01

    In this contribution we describe a Virtual Observatory (VO) compliant version of the libraries of high resolution spectra of cool stars described by Montes et al. (1997; 1998; and 1999). Since their publication the fully reduced spectra in FITS format have been available via ftp and in the World Wide Web. However, in the VO all the spectra will be accessible using a common web interface following the standards of the International Virtual Observatory Alliance (IVOA). These libraries include F, G, K and M field stars, from dwarfs to giants. The spectral coverage is from 3800 to 10000 Å, with spectral resolution ranging from 0.09 to 3.0 Å.

  12. The effects of mental representation on performance in a navigation task

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Healy, Alice F.

    2002-01-01

    In three experiments, we investigated the mental representations employed when instructions were followed that involved navigation in a space displayed as a grid on a computer screen. Performance was affected much more by the number of instructional units than by the number of words per unit. Performance in a three-dimensional space was independent of the number of dimensions along which participants navigated. However, memory for and accuracy in following the instructions were reduced when the task required mentally representing a three-dimensional space, as compared with representing a two-dimensional space, although the words used in the instructions were identical in the two cases. These results demonstrate the interdependence of verbal and spatial memory representations, because individuals' immediate memory for verbal navigation instructions is affected by their mental representation of the space referred to by the instructions.

  13. A Complex Prime Numerical Representation of Amino Acids for Protein Function Comparison.

    PubMed

    Chen, Duo; Wang, Jiasong; Yan, Ming; Bao, Forrest Sheng

    2016-08-01

    Computationally assessing the functional similarity between proteins is an important task of bioinformatics research. It can help molecular biologists transfer knowledge on certain proteins to others and hence reduce the amount of tedious and costly benchwork. Representation of amino acids, the building blocks of proteins, plays an important role in achieving this goal. Compared with symbolic representation, representing amino acids numerically can expand our ability to analyze proteins, including comparing the functional similarity of them. Among the state-of-the-art methods, electro-ion interaction pseudopotential (EIIP) is widely adopted for the numerical representation of amino acids. However, it could suffer from degeneracy that two different amino acid sequences have the same numerical representation, due to the design of EIIP. In light of this challenge, we propose a complex prime numerical representation (CPNR) of amino acids, inspired by the similarity between a pattern among prime numbers and the number of codons of amino acids. To empirically assess the effectiveness of the proposed method, we compare CPNR against EIIP. Experimental results demonstrate that the proposed method CPNR always achieves better performance than EIIP. We also develop a framework to combine the advantages of CPNR and EIIP, which enables us to improve the performance and study the unique characteristics of different representations.
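    The pairing of primes with codon counts can be sketched as follows. This is a hypothetical illustration: the actual CPNR mapping is defined in the paper, and the prime assignment below (alphabetical order of one-letter codes) is an assumption made for the example. The point is that distinct real parts make the encoding collision-free across residues, avoiding EIIP-style degeneracy:

```python
# Number of codons per amino acid in the standard genetic code
# (61 sense codons; the 3 stop codons are excluded).
CODON_COUNTS = {
    "L": 6, "S": 6, "R": 6, "A": 4, "G": 4, "P": 4, "T": 4, "V": 4,
    "I": 3, "C": 2, "D": 2, "E": 2, "F": 2, "H": 2, "K": 2, "N": 2,
    "Q": 2, "Y": 2, "M": 1, "W": 1,
}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
          31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

# Deterministic (hypothetical) assignment: amino acids in alphabetical order
# of their one-letter codes get successive primes; the codon count supplies
# the imaginary part.
AA_TO_COMPLEX = {
    aa: complex(p, CODON_COUNTS[aa])
    for aa, p in zip(sorted(CODON_COUNTS), PRIMES)
}

def encode(seq):
    """Map a protein sequence to its complex-valued numerical signal."""
    return [AA_TO_COMPLEX[aa] for aa in seq]
```

    Because every amino acid receives a distinct complex value, no two different sequences can share the same numerical signal, which is the degeneracy problem the abstract attributes to EIIP.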

  14. Biased selection of propagation-related TUPs from phage display peptide libraries.

    PubMed

    Zade, Hesam Motaleb; Keshavarz, Reihaneh; Shekarabi, Hosna Sadat Zahed; Bakhshinejad, Babak

    2017-08-01

    Phage display is rapidly advancing as a screening strategy in drug discovery and drug delivery. Phage-encoded combinatorial peptide libraries can be screened through the affinity selection procedure of biopanning to find pharmaceutically relevant cell-specific ligands. However, the unwanted enrichment of target-unrelated peptides (TUPs) with no true affinity for the target presents an important barrier to the successful screening of phage display libraries. Propagation-related TUPs (Pr-TUPs) are an emerging but less-studied category of phage display-derived false-positive hits that are displayed on the surface of clones with faster propagation rates. Despite having long been regarded as an unbiased selection system, accumulating evidence suggests that biopanning may create biological bias toward selection of phage clones with certain displayed peptides. This bias can be dependent on or independent of the displayed sequence and may act as a major driving force for the isolation of fast-growing clones. Sequence-dependent bias is reflected by censorship or over-representation of some amino acids in the displayed peptide, and sequence-independent bias is derived from either point mutations or rare recombination events occurring in the phage genome. It is of utmost interest to clean biopanning data by identifying and removing Pr-TUPs. Experimental and bioinformatic approaches can be exploited for Pr-TUP discovery. Undoubtedly, obtaining deeper insight into how Pr-TUPs emerge during biopanning and how they could be detected provides a basis for using cell-targeting peptides isolated from phage display screening in the development of disease-specific diagnostic and therapeutic platforms.

  15. An intuitive Python interface for Bioconductor libraries demonstrates the utility of language translators

    PubMed Central

    2010-01-01

    Background Computer languages can be domain-related, and in the case of multidisciplinary projects, knowledge of several languages will be needed in order to quickly implement ideas. Moreover, each computer language has its relative strong points, making some languages better suited than others for a given task. The Bioconductor project, based on the R language, has become a reference for the numerical processing and statistical analysis of data coming from high-throughput biological assays, providing a rich selection of methods and algorithms to the research community. At the same time, Python has matured as a rich and reliable language for the agile development of prototypes or final implementations, as well as for handling large data sets. Results The data structures and functions from Bioconductor can be exposed to Python as a regular library. This allows a fully transparent and native use of Bioconductor from Python, without one having to know the R language and with only a small community of translators required to know both. To demonstrate this, we have implemented such Python representations for key infrastructure packages in Bioconductor, letting a Python programmer handle annotation data, microarray data, and next-generation sequencing data. Conclusions Bioconductor is now not solely reserved to R users. Building a Python application using Bioconductor functionality can be done just as if Bioconductor were a Python package. Moreover, similar principles can be applied to other languages and libraries. Our Python package is available at: http://pypi.python.org/pypi/rpy2-bioconductor-extensions/ PMID:21210978

  16. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal

    PubMed Central

    Ramkumar, Barathram; Sabarimalai Manikandan, M.

    2017-01-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wanders, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal. PMID:28529758
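    The noise detection stage described above relies on simple temporal features. A minimal sketch of such features (the window length, detrending step, and exact feature definitions are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def noise_features(x, win=5):
    """Temporal features of the kind used for noise detection:
    moving-average baseline, first-order difference, turning points,
    maximum absolute amplitude, and zero crossings (illustrative sketch).
    """
    x = np.asarray(x, dtype=float)
    # Moving-average filter provides a crude baseline estimate.
    baseline = np.convolve(x, np.ones(win) / win, mode="same")
    detrended = x - baseline
    # First-order difference; its sign changes mark turning points.
    diff = np.diff(detrended)
    return {
        "max_abs_amplitude": float(np.max(np.abs(detrended))),
        "zero_crossings": int(np.sum(np.diff(np.sign(detrended)) != 0)),
        "turning_points": int(np.sum(np.diff(np.sign(diff)) != 0)),
    }
```

    A downstream classifier could threshold these features to decide which noise type is present and hence which learned dictionary to use; a high-frequency artefact, for instance, produces far more turning points than a clean beat.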

  17. Performance assessment of a pre-partitioned adaptive chemistry approach in large-eddy simulation of turbulent flames

    NASA Astrophysics Data System (ADS)

    Pepiot, Perrine; Liang, Youwen; Newale, Ashish; Pope, Stephen

    2016-11-01

    A pre-partitioned adaptive chemistry (PPAC) approach recently developed and validated in the simplified framework of a partially-stirred reactor is applied to the simulation of turbulent flames using a LES/particle PDF framework. The PPAC approach was shown to simultaneously provide significant savings in CPU and memory requirements, two major limiting factors in LES/particle PDF. The savings are achieved by providing each particle in the PDF method with a specialized reduced representation and kinetic model adjusted to its changing composition. Both representation and model are identified efficiently from a pre-determined list using a low-dimensional binary-tree search algorithm, thereby keeping the run-time overhead associated with the adaptive strategy to a minimum. The Sandia D flame is used as benchmark to quantify the performance of the PPAC algorithm in a turbulent combustion setting. In particular, the CPU and memory benefits, the distribution of the various representations throughout the computational domain, and the relationship between the user-defined error tolerances used to derive the reduced representations and models and the actual errors observed in LES/PDF are characterized. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences under Award Number DE-FG02-90ER14128.

  18. Motor and mental training in older people: Transfer, interference, and associated functional neural responses.

    PubMed

    Boraxbekk, C J; Hagkvist, Filip; Lindner, Philip

    2016-08-01

    Learning new motor skills may become more difficult with advanced age. In the present study, we randomized 56 older individuals, including 30 women (mean age 70.6 years), to 6 weeks of motor training, mental (motor imagery) training, or a combination of motor and mental training of a finger tapping sequence. Performance improvements and post-training functional magnetic resonance imaging (fMRI) were used to investigate performance gains and associated underlying neural processes. Motor-only training and a combination of motor and mental training improved performance in the trained task more than mental-only training. The fMRI data showed that motor training was associated with a representation in the premotor cortex and mental training with a representation in the secondary visual cortex. Combining motor and mental training resulted in both premotor and visual cortex representations. During fMRI scanning, reduced performance was observed in the combined motor and mental training group, possibly indicating interference between the two training methods. We concluded that motor and motor imagery training in older individuals is associated with different functional brain responses. Furthermore, adding mental training to motor training did not result in additional performance gains compared to motor-only training and combining training methods may result in interference between representations, reducing performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    PubMed

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition, and reconstruction. Noise detection and identification is performed using a moving-average filter, the first-order difference, and temporal features such as the number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce the computational load compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts, and their combinations without distorting the morphological content of the local waves of the ECG signal.
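    The temporal features described above are straightforward to compute. A minimal sketch in Python (illustrative only; the function names and thresholding strategy are assumptions, not the authors' implementation), assuming a 1-D NumPy array of ECG samples:

```python
import numpy as np

def temporal_features(x):
    """Simple temporal features of the kind used for noise identification:
    turning points, maximum absolute amplitude, and zero-crossings."""
    d = np.diff(x)                                    # first-order difference
    turning_points = int(np.sum(d[:-1] * d[1:] < 0))  # slope sign changes
    max_abs = float(np.max(np.abs(x)))
    zero_crossings = int(np.sum(x[:-1] * x[1:] < 0))
    return turning_points, max_abs, zero_crossings

def moving_average(x, w):
    """Moving-average filter, used here as a crude trend/baseline estimate."""
    return np.convolve(x, np.ones(w) / w, mode="same")
```

    A noise classifier would threshold such features (e.g. many turning points suggest muscle artefact, while a large low-frequency trend suggests baseline wander) before selecting the matching dictionary.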

  20. On extending Kohn-Sham density functionals to systems with fractional number of electrons.

    PubMed

    Li, Chen; Lu, Jianfeng; Yang, Weitao

    2017-06-07

    We analyze four ways of formulating the Kohn-Sham (KS) density functionals with a fractional number of electrons, through extending the constrained search space from the Kohn-Sham and the generalized Kohn-Sham (GKS) non-interacting v-representable density domain for integer systems to four different sets of densities for fractional systems. In particular, these density sets are (I) ensemble interacting N-representable densities, (II) ensemble non-interacting N-representable densities, (III) non-interacting densities by the Janak construction, and (IV) non-interacting densities whose composing orbitals satisfy the Aufbau occupation principle. By proving the equivalence of the underlying first order reduced density matrices associated with these densities, we show that sets (I), (II), and (III) are equivalent, and all reduce to the Janak construction. Moreover, for functionals with the ensemble v-representable assumption at the minimizer, (III) reduces to (IV) and thus justifies the previous use of the Aufbau protocol within the (G)KS framework in the study of the ground state of fractional electron systems, as defined in the grand canonical ensemble at zero temperature. By further analyzing the Aufbau solution for different density functional approximations (DFAs) in the (G)KS scheme, we rigorously prove that there can be one and only one fractional occupation for the Hartree-Fock functional, while there can be multiple fractional occupations for general DFAs in the presence of degeneracy. This has been confirmed by numerical calculations using the local density approximation as a representative of general DFAs. This work thus clarifies important issues on density functional theory calculations for fractional electron systems.
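    For orientation, the Janak construction referenced in set (III) can be written compactly; in standard notation (a restatement for the reader, not quoted from the paper), a density belongs to this set if

```latex
\rho(\mathbf{r}) = \sum_i n_i \,\lvert \phi_i(\mathbf{r}) \rvert^{2},
\qquad 0 \le n_i \le 1, \qquad \sum_i n_i = N ,
```

    where the orbitals $\phi_i$ are orthonormal and $N$ may be fractional. Set (IV) additionally imposes the Aufbau principle: $n_i = 1$ for orbitals below the frontier level, $n_i = 0$ above it, with fractional occupation allowed only at the frontier.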

  1. In Between or in the Middle of Everything — How to Find the Pathway for a Small Department Library During Multiple Internal and External Change Factors

    NASA Astrophysics Data System (ADS)

    Akerholt, L. N.; Christensen, A.

    2010-10-01

    The Astrophysics Library is one of the smallest libraries at the University of Oslo, serving 10 master's students and approximately 50 academic employees at the Department of Theoretical Astrophysics. But the small size does not reduce the pressure on the institution when it comes to internal and external change factors. Change factors are understood as circumstances which influence the department library, but are outside the control of normal library routines. In this paper we explore these change factors and try to establish a strategy to find our "path" for the future. We find that internal change factors are quite easily handled, given enough time and proper funding, while the nature of external change factors makes it harder to decide on a future course. To illustrate the pressure exerted on the department library we give a brief summation of the challenges we have met regarding internal and external change factors. Our experiences indicate that we should establish closer cooperation with the academic staff and students, and also call for an improved communication strategy towards other institutions such as the Museum of University History as well as the National Library of Norway. We explore new forms of communication and suggest developing these in collaboration with the academic staff.

  2. Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening

    PubMed Central

    2017-01-01

    Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
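    The replicate-class statistics described above follow directly from Poisson sampling: if each library member lands on a Poisson-distributed number of beads with mean sampling depth ε, the fraction of the library in replicate class k is ε^k e^(−ε)/k!. A small self-contained Python sketch (a generic illustration of this statistic, not the authors' simulation code):

```python
import math
import random

def poisson_k_fraction(eps, k):
    """P(a given library member is sampled on exactly k beads) when the
    per-member bead count is Poisson with mean sampling depth eps."""
    return math.exp(-eps) * eps ** k / math.factorial(k)

def simulate_sampling(n_compounds, eps, seed=0):
    """Monte Carlo check: draw a Poisson(eps) bead count per compound and
    tally how many compounds fall into each replicate class k."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_compounds):
        # Poisson draw by CDF inversion, keeping the sketch dependency-free
        p, k = rng.random(), 0
        term = math.exp(-eps)
        cum = term
        while p > cum:
            k += 1
            term *= eps / k
            cum += term
        counts[k] = counts.get(k, 0) + 1
    return counts
```

    For example, at ε = 1 about 37% of the library is never sampled at all (k = 0), which is one reason a hit observed on k > 1 beads carries more evidence of authenticity than a singleton.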

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diehl, Alexander D.; Meehan, Terrence F.; Bradford, Yvonne M.

    Background: The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Construction and content: Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. Utility and discussion: The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies; for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. 
Conclusions: The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the CL both among developers and within the user community.

  4. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability.

    PubMed

    Diehl, Alexander D; Meehan, Terrence F; Bradford, Yvonne M; Brush, Matthew H; Dahdul, Wasila M; Dougall, David S; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E; Vasilevsky, Nicole A; Haendel, Melissa A; Blake, Judith A; Mungall, Christopher J

    2016-07-04

    The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies; for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. 
The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the CL both among developers and within the user community.

  5. Web portal for dynamic creation and publication of teaching materials in multiple formats from a single source representation

    NASA Astrophysics Data System (ADS)

    Roganov, E. A.; Roganova, N. A.; Aleksandrov, A. I.; Ukolova, A. V.

    2017-01-01

    We implement a web portal which dynamically creates documents in more than 30 different formats, including html, pdf and docx, from a single original material source. It is built using free software such as Markdown (markup language), Pandoc (document converter), MathJax (a library to display mathematical notation in web browsers), and the Ruby on Rails framework. The portal enables the creation of documents with high-quality visualization of mathematical formulas, is compatible with mobile devices, and allows one to search documents by text or formula fragments. Moreover, it gives professors the ability to develop up-to-date educational materials without the assistance of qualified technicians, thus improving the quality of the whole educational process.

  6. QuakeML - An XML Schema for Seismology

    NASA Astrophysics Data System (ADS)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance, and the large number of existing utilities and libraries for XML, a structured representation of various types of seismological data should, in our opinion, be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.

  7. Tandem Repeated Irritation Test (TRIT) Studies and Clinical Relevance: Post 2006.

    PubMed

    Reddy, Rasika; Maibach, Howard

    2018-06-11

    Single or multiple applications of irritants can lead to occupational contact dermatitis, most commonly irritant contact dermatitis (ICD). Tandem irritation, the sequential application of two irritants to a target skin area, has been studied using the Tandem Repeated Irritation Test (TRIT) to provide a more accurate representation of skin irritation. Here we present an update to Kartono's review of tandem irritation studies since 2006 [1]. We surveyed the literature available on PubMed, Embase, Google Scholar, and the UCSF Dermatology library databases since 2006. The studies included discuss the tandem effects of common chemical irritants, organic solvents, and occlusion, as well as clinical relevance, and enlarge our ability to discern whether multiple chemical exposures are more or less likely to enhance irritation.

  8. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from being time-consuming and lacking robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is used to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold-learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be processed in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
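    The first stage above hinges on the ELM's closed-form training: a random, fixed hidden layer whose output weights are solved by least squares. A minimal NumPy sketch of a generic ELM classifier (an illustration of the technique, not the tracker's actual code; hidden-layer size and activation are assumptions):

```python
import numpy as np

class ELM:
    """Extreme learning machine: random fixed hidden layer, output weights
    solved in closed form by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))  # random input weights
        self.b = self.rng.standard_normal(self.n_hidden)       # random biases
        H = np.tanh(X @ self.W + self.b)                       # hidden activations
        self.beta = np.linalg.pinv(H) @ y                      # least-squares output weights
        return self

    def decision(self, X):
        """Confidence values; in the tracker these gate the costly sparse coding."""
        return np.tanh(X @ self.W + self.b) @ self.beta
```

    Candidate samples whose decision value falls below a threshold would be discarded before the much costlier sparse representation stage.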

  9. pvsR: An Open Source Interface to Big Data on the American Political Sphere.

    PubMed

    Matter, Ulrich; Stutzer, Alois

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS' data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS' new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases.

  10. Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.

    PubMed

    Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor

    2016-01-01

    In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time spent on manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant effort and specific expertise to design, implement and test these workflows. Many of these workflows are complex, monolithic entities that can only be used for particular scientific experiments. Hence, their modification is not straightforward, which makes them almost impossible to share. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.

  11. Advection modes by optimal mass transfer

    NASA Astrophysics Data System (ADS)

    Iollo, Angelo; Lombardi, Damiano

    2014-02-01

    Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes. These modes are usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection. This method is based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, the Korteweg-de Vries equation, and Hurricane Dean advection are discussed.
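    In one spatial dimension, the Monge-Kantorovich problem for quadratic cost and equally weighted samples has a closed-form solution: the optimal map is monotone, so it reduces to sorting and rank matching. A hedged Python illustration of this special case (the paper itself solves the general problem; names here are illustrative):

```python
import numpy as np

def ot_map_1d(src, dst):
    """Optimal transport map between two equal-size 1-D samples:
    sort both and match ranks; returns the image of each src point."""
    order_s = np.argsort(src)
    out = np.empty(len(src), dtype=float)
    out[order_s] = np.sort(dst)   # i-th smallest src -> i-th smallest dst
    return out

def displacement_interpolation(src, dst, t):
    """Advection-like mode: move each sample a fraction t along its
    transport path from src to dst."""
    return (1 - t) * np.asarray(src, dtype=float) + t * ot_map_1d(src, dst)
```

    Such displacement interpolation moves mass along transport paths, which is why transport-based modes can capture advection far more compactly than fixed global modes.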

  12. Profiling the NIH Small Molecule Repository for Compounds That Generate H2O2 by Redox Cycling in Reducing Environments

    PubMed Central

    2010-01-01

    We have screened the Library of Pharmacologically Active Compounds (LOPAC) and the National Institutes of Health (NIH) Small Molecule Repository (SMR) libraries in a horseradish peroxidase–phenol red (HRP-PR) H2O2 detection assay to identify redox cycling compounds (RCCs) capable of generating H2O2 in buffers containing dithiothreitol (DTT). Two RCCs were identified in the LOPAC set, the ortho-naphthoquinone β-lapachone and the para-naphthoquinone NSC 95397. Thirty-seven (0.02%) concentration-dependent RCCs were identified from 195,826 compounds in the NIH SMR library: 3 singleton structures, 9 ortho-quinones, 2 para-quinones, 4 pyrimidotriazinediones, 15 arylsulfonamides, 2 nitrothiophene-2-carboxylates, and 2 tolyl hydrazides. Sixty percent of the ortho-quinones and 80% of the pyrimidotriazinediones in the library were confirmed as RCCs. In contrast, only 3.9% of the para-quinones were confirmed as RCCs. Fifteen of the 251 arylsulfonamides in the library were confirmed as RCCs, and since we screened 17,868 compounds with a sulfonamide functional group we conclude that the redox cycling activity of the arylsulfonamide RCCs is due to peripheral reactive enone, aromatic, or heterocyclic functions. Cross-target queries of the University of Pittsburgh Drug Discovery Institute (UPDDI) and PubChem databases revealed that the RCCs exhibited promiscuous bioactivity profiles and have populated both screening databases with significantly higher numbers of active flags than non-RCCs. RCCs were promiscuously active against protein targets known to be susceptible to oxidation, but were also active in cell growth inhibition assays, and against other targets thought to be insensitive to oxidation. Profiling compound libraries or the hits from screening campaigns in the HRP-PR H2O2 detection assay significantly reduces the timelines and resources required to identify and eliminate promiscuous nuisance RCCs from the candidates for lead optimization. PMID:20070233

  13. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries

    PubMed Central

    2012-01-01

    Background Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and the appearance of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. Results We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested on 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%-95.01% of inhibitors and 99.81%-99.90% of non-inhibitors in 5-fold cross validation studies. SVM trained on 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem compounds, 1,496 (0.89%) of 168K MDDR compounds, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. Conclusions SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not previously reported as Src inhibitors; one of these showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates. PMID:23173901
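    The screening setup can be sketched generically: train a binary SVM on descriptor vectors of known inhibitors (+1) and putative non-inhibitors (-1), then flag library compounds on the positive side of the decision boundary. Below is a dependency-light, Pegasos-style linear SVM in Python (a stand-in illustration; the study used its own molecular descriptors and SVM implementation):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos: stochastic sub-gradient descent on the regularized hinge loss.
    X is (n, d); y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:                  # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                                      # only shrink (regularizer)
                w = (1 - eta * lam) * w
    return w

def screen(X_library, w, threshold=0.0):
    """Flag library compounds whose decision value exceeds the threshold."""
    return X_library @ w > threshold
```

    In a real screen, X would hold molecular fingerprints or descriptors, and the threshold would be calibrated to trade yield against the false-hit rate.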

  14. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling circuit features down, Double Patterning (DP) technology is needed at the 22 nm technology node and below. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed in two separate exposure steps. In many cases, post-layout decomposition fails to split the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard-cell block can result in native conflicts inside the cells (internal conflicts) or native conflicts on the boundary between two cells (boundary conflicts). Resolving native conflicts requires a redesign and/or multiple iterations of the placement and routing phases to obtain a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before the final placed cell block is produced. The main focus of this paper is generating a library of decomposed standard cells to be used by a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., decompositions that consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell would significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of combinations processed to generate the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for library processing. To generate and verify the proposed flow and methodologies, a prototype of a placement-aware, DP-ready cell library is developed with an optimized number of cell views.
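    Assigning features to the two masks is naturally cast as 2-coloring of a conflict graph, where an edge joins any two features closer than the minimum same-mask spacing; an odd cycle is exactly a native conflict. A small Python sketch of this per-layout check (illustrative only; production decomposers also handle stitching, density, and other rules):

```python
from collections import deque

def decompose_two_masks(features, conflicts):
    """BFS 2-coloring of the conflict graph. Returns {feature: mask} with
    masks 0/1, or None if a native conflict (odd cycle) makes the layout
    undecomposable without redesign."""
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    mask = {}
    for start in features:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in mask:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None   # two too-close features forced onto one mask
    return mask
```

    This is the check run per boundary-condition combination; the paper's contribution lies in reducing how many such combinations must be processed to build the decomposed library.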

  15. The Development of Shared Liking of Representational but not Abstract Art in Primary School Children and Their Justifications for Liking

    PubMed Central

    Rodway, Paul; Kirkham, Julie; Schepman, Astrid; Lambert, Jordana; Locke, Anastasia

    2016-01-01

    Understanding how aesthetic preferences are shared among individuals, and its developmental time course, is a fundamental question in aesthetics. It has been shown that semantic associations, in response to representational artworks, overlap more strongly among individuals than those generated by abstract artworks and that the emotional valence of the associations also overlaps more for representational artworks. This valence response may be a key driver in aesthetic appreciation. The current study tested predictions derived from the semantic association account in a developmental context. Twenty 4-, 6-, 8- and 10-year-old children (n = 80) were shown 20 artworks (10 representational, 10 abstract) and were asked to rate each artwork and to explain their decision. Cross-observer agreement in aesthetic preferences increased with age from 4–8 years for both abstract and representational art. However, after age 6 the level of shared appreciation for representational and abstract artworks diverged, with significantly higher levels of agreement for representational than abstract artworks at age 8 and 10. The most common justifications for representational artworks involved subject matter, while for abstract artworks formal artistic properties and color were the most commonly used justifications. Representational artwork also showed a significantly higher proportion of associations and emotional responses than abstract artworks. In line with predictions from developmental cognitive neuroscience, references to the artist as an agent increased between ages 4 and 6 and again between ages 6 and 8, following the development of Theory of Mind. The findings support the view that increased experience with representational content during the life span reduces inter-individual variation in aesthetic appreciation and increases shared preferences. In addition, brain and cognitive development appear to impact on art appreciation at milestone ages. PMID:26903834

  16. The Development of Shared Liking of Representational but not Abstract Art in Primary School Children and Their Justifications for Liking.

    PubMed

    Rodway, Paul; Kirkham, Julie; Schepman, Astrid; Lambert, Jordana; Locke, Anastasia

    2016-01-01

    Understanding how aesthetic preferences are shared among individuals, and its developmental time course, is a fundamental question in aesthetics. It has been shown that semantic associations, in response to representational artworks, overlap more strongly among individuals than those generated by abstract artworks and that the emotional valence of the associations also overlaps more for representational artworks. This valence response may be a key driver in aesthetic appreciation. The current study tested predictions derived from the semantic association account in a developmental context. Twenty 4-, 6-, 8- and 10-year-old children (n = 80) were shown 20 artworks (10 representational, 10 abstract) and were asked to rate each artwork and to explain their decision. Cross-observer agreement in aesthetic preferences increased with age from 4-8 years for both abstract and representational art. However, after age 6 the level of shared appreciation for representational and abstract artworks diverged, with significantly higher levels of agreement for representational than abstract artworks at age 8 and 10. The most common justifications for representational artworks involved subject matter, while for abstract artworks formal artistic properties and color were the most commonly used justifications. Representational artwork also showed a significantly higher proportion of associations and emotional responses than abstract artworks. In line with predictions from developmental cognitive neuroscience, references to the artist as an agent increased between ages 4 and 6 and again between ages 6 and 8, following the development of Theory of Mind. The findings support the view that increased experience with representational content during the life span reduces inter-individual variation in aesthetic appreciation and increases shared preferences. In addition, brain and cognitive development appear to impact on art appreciation at milestone ages.

  17. Expedient preparation of nazlinine and a small library of indole alkaloids using flow electrochemistry as an enabling technology.

    PubMed

    Kabeshov, Mikhail A; Musio, Biagia; Murray, Philip R D; Browne, Duncan L; Ley, Steven V

    2014-09-05

    An expedient synthesis of the indole alkaloid nazlinine is reported. Judicious choice of flow electrochemistry as an enabling technology has permitted the rapid generation of a small library of unnatural relatives of this biologically active molecule. Furthermore, by conducting the key electrochemical Shono oxidation in a flow cell, the loading of electrolyte can be significantly reduced to 20 mol % while maintaining a stable, broadly applicable process.

  18. Reducing Disparities in Cancer Screening and Prevention through Community-Based Participatory Research Partnerships with Local Libraries: A Comprehensive Dynamic Trial.

    PubMed

    Rapkin, Bruce D; Weiss, Elisa; Lounsbury, David; Michel, Tamara; Gordon, Alexis; Erb-Downward, Jennifer; Sabino-Laughlin, Eilleen; Carpenter, Alison; Schwartz, Carolyn E; Bulone, Linda; Kemeny, Margaret

    2017-09-01

    Reduction of cancer-related disparities requires strategies that link medically underserved communities to preventive care. In this community-based participatory research project, a public library system brought together stakeholders to plan and undertake programs to address cancer screening and risk behavior. This study was implemented over 48 months in 20 large urban neighborhoods, selected to reach diverse communities disconnected from care. In each neighborhood, Cancer Action Councils were organized to conduct a comprehensive dynamic trial, an iterative process of program planning, implementation and evaluation. This process was phased into neighborhoods in random, stepped-wedge sequence. Population-level outcomes included self-reported screening adherence and smoking cessation, based on street intercept interviews. Event-history regressions (n = 9374) demonstrated that adherence outcomes were associated with program implementation, as were mediators such as awareness of screening programs and cancer information seeking. Findings varied by ethnicity, and were strongest among respondents born outside the U.S. or least engaged in care. This intervention impacted health behavior in diverse, underserved and vulnerable neighborhoods. It has been sustained as a routine library system program for several years after conclusion of grant support. In sum, participatory research with the public library system offers a flexible, scalable approach to reduce cancer health disparities. © Society for Community Research and Action 2017.
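The random, stepped-wedge phase-in described above (every neighborhood eventually receives the program, in a randomized roll-out order) can be sketched as follows; the function and neighborhood labels are illustrative only and are not taken from the study:

```python
import random

def stepped_wedge_schedule(units, n_steps, seed=42):
    """Assign each unit the step at which its program begins.

    All units eventually receive the intervention; the roll-out
    order is randomized, as in a stepped-wedge design.
    """
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    # Distribute the shuffled units round-robin across the steps.
    return {unit: i % n_steps + 1 for i, unit in enumerate(shuffled)}

# e.g. 20 neighborhoods phased in over 5 steps -> 4 per step.
schedule = stepped_wedge_schedule(
    [f"neighborhood_{k}" for k in range(20)], 5)
```

Because every unit crosses from control to intervention at a randomly assigned time, each unit serves as its own comparison, which is what allows population-level outcomes to be attributed to program implementation.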

  19. Structured grid technology to enable flow simulation in an integrated system environment

    NASA Astrophysics Data System (ADS)

    Remotigue, Michael Gerard

    An application-driven Computational Fluid Dynamics (CFD) environment needs flexible and general tools to effectively solve complex problems in a timely manner. In addition, reusable, portable, and maintainable specialized libraries will aid in rapidly developing integrated systems or procedures. The presented structured grid technology enables the flow simulation for complex geometries by addressing grid generation, grid decomposition/solver setup, solution, and interpretation. Grid generation is accomplished with the graphical, arbitrarily-connected, multi-block structured grid generation software system (GUM-B) developed and presented here. GUM-B is an integrated system comprised of specialized libraries for the graphical user interface and graphical display coupled with a solid-modeling data structure that utilizes a structured grid generation library and a geometric library based on Non-Uniform Rational B-Splines (NURBS). A presented modification of the solid-modeling data structure provides the capability for arbitrarily-connected regions between the grid blocks. The presented grid generation library provides algorithms that are reliable and accurate. GUM-B has been utilized to generate numerous structured grids for complex geometries in hydrodynamics, propulsors, and aerodynamics. The versatility of the libraries that compose GUM-B is also displayed in a prototype to automatically regenerate a grid for a free-surface solution. Grid decomposition and solver setup is accomplished with the graphical grid manipulation and repartition software system (GUMBO) developed and presented here. GUMBO is an integrated system comprised of specialized libraries for the graphical user interface and graphical display coupled with a structured grid-tools library. The described functions within the grid-tools library reduce the possibility of human error during decomposition and setup for the numerical solver by accounting for boundary conditions and connectivity. 
GUMBO is linked with a flow solver interface to the parallel UNCLE code to provide load-balancing tools and solver setup. Weeks of boundary condition and connectivity specification and validation have been reduced to hours. The UNCLE flow solver is utilized for the solution of the flow field. To accelerate convergence toward a quick engineering answer, a full multigrid (FMG) approach coupled with UNCLE, which is a full approximation scheme (FAS), is presented. The prolongation operators used in the FMG-FAS method are compared. The procedure is demonstrated on a marine propeller in incompressible flow. Interpretation of the solution is accomplished by vortex feature detection. Regions of "Intrinsic Swirl" are located by interrogating the velocity gradient tensor for complex eigenvalues. The "Intrinsic Swirl" parameter is visualized on a solution of a marine propeller to determine whether any vortical features are captured. The libraries and the structured grid technology presented herein are flexible and general enough to tackle a variety of complex applications. This technology has significantly enhanced the ability of ERC personnel to compute solutions for complex geometries efficiently.
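The eigenvalue test described above — a region is vortical where the velocity gradient tensor has a complex-conjugate eigenvalue pair — can be sketched in a few lines. This is an illustrative implementation of the general criterion, not code from the GUMBO/UNCLE system; the function name is made up:

```python
import numpy as np

def has_swirl(velocity_gradient, tol=1e-12):
    """Flag a cell as vortical if its velocity gradient tensor
    has complex eigenvalues (the swirl criterion above)."""
    eigvals = np.linalg.eigvals(velocity_gradient)
    return bool(np.any(np.abs(eigvals.imag) > tol))

# Pure shear: a nilpotent tensor, all eigenvalues real -> no swirl.
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

# Solid-body rotation about z: eigenvalues include +/- i -> swirl.
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
```

Applying such a test cell-by-cell over a structured grid solution yields the kind of scalar "swirl" field that can then be visualized to locate vortical features.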

  20. Graphical representations of data improve student understanding of measurement and uncertainty: An eye-tracking study

    NASA Astrophysics Data System (ADS)

    Susac, Ana; Bubic, Andreja; Martinjak, Petra; Planinic, Maja; Palmovic, Marijan

    2017-12-01

    Developing a better understanding of the measurement process and measurement uncertainty is one of the main goals of university physics laboratory courses. This study investigated the influence of graphical representation of data on student understanding and interpretation of measurement results. A sample of 101 undergraduate students (48 first year students and 53 third and fifth year students) from the Department of Physics, University of Zagreb was tested with a paper-and-pencil test consisting of eight multiple-choice test items about measurement uncertainties. One version of the test items included graphical representations of the measurement data. About half of the students solved that version of the test while the remaining students solved the same test without graphical representations. The results showed that the students who had the graphical representation of data scored higher than their colleagues without graphical representation. In the second part of the study, measurements of eye movements were carried out on a sample of thirty undergraduate students from the Department of Physics, University of Zagreb while they solved the same test on a computer screen. The results revealed that students who had the graphical representation of data spent considerably less time viewing the numerical data than the other group of students. These results indicate that graphical representation may be beneficial for data processing and data comparison. Graphical representation helps with visualization of data and therefore reduces the cognitive load on students while performing measurement data analysis, so students should be encouraged to use it.

  1. A fast and efficient python library for interfacing with the Biological Magnetic Resonance Data Bank.

    PubMed

    Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B

    2017-03-17

    The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib implementation is efficient in both design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e., "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in the original NMR-STAR format can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. 
We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
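The dictionary-based "JSONized NMR-STAR" idea can be illustrated with the standard library alone. The nested structure below is a toy example; the keys and values are invented for illustration and do not reproduce nmrstarlib's actual API or a real BMRB entry:

```python
import json
from collections import OrderedDict

# A toy "saveframe" as a nested dictionary, mirroring the
# dictionary-based design described above (content is made up).
saveframe = OrderedDict([
    ("save_entry_information", OrderedDict([
        ("Entry.ID", "15000"),
        ("Entry.Title", "Example entry"),
        # A "loop" record: column names plus rows of values.
        ("loop_0", OrderedDict([
            ("fields", ["Author.Ordinal", "Author.Name"]),
            ("rows", [["1", "Smith"], ["2", "Jones"]]),
        ])),
    ])),
])

# "JSONizing": nested dictionaries serialize directly to JSON ...
jsonized = json.dumps(saveframe, indent=2)

# ... and round-trip back into the same nested structure, usable
# from any language with a JSON parser.
restored = json.loads(jsonized)
```

Because the in-memory representation is plain dictionaries and lists, the NMR-STAR-to-JSON conversion reduces to serialization, which is what makes the JSONizing approach transferable to other older scientific formats.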

  2. Miniaturization Technologies for Efficient Single-Cell Library Preparation for Next-Generation Sequencing.

    PubMed

    Mora-Castilla, Sergio; To, Cuong; Vaezeslami, Soheila; Morey, Robert; Srinivasan, Srimeenakshi; Dumdie, Jennifer N; Cook-Andersen, Heidi; Jenkins, Joby; Laurent, Louise C

    2016-08-01

    As the cost of next-generation sequencing has decreased, library preparation costs have become a more significant proportion of the total cost, especially for high-throughput applications such as single-cell RNA profiling. Here, we have applied novel technologies to scale down reaction volumes for library preparation. Our system consisted of in vitro differentiated human embryonic stem cells representing two stages of pancreatic differentiation, for which we prepared multiple biological and technical replicates. We used the Fluidigm (San Francisco, CA) C1 single-cell Autoprep System for single-cell complementary DNA (cDNA) generation and an enzyme-based tagmentation system (Nextera XT; Illumina, San Diego, CA) with a nanoliter liquid handler (mosquito HTS; TTP Labtech, Royston, UK) for library preparation, reducing the reaction volume down to 2 µL and using as little as 20 pg of input cDNA. The resulting sequencing data were bioinformatically analyzed and correlated among the different library reaction volumes. Our results showed that decreasing the reaction volume did not interfere with the quality or the reproducibility of the sequencing data, and the transcriptional data from the scaled-down libraries allowed us to distinguish between single cells. Thus, we have developed a process to enable efficient and cost-effective high-throughput single-cell transcriptome sequencing. © 2016 Society for Laboratory Automation and Screening.

  3. A canonical state-space representation for SISO systems using multipoint Jordan CFE. [Continued-Fraction Expansion

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Guo, Tong-Yi; Shieh, Leang-San

    1991-01-01

    A canonical state-space realization based on the multipoint Jordan continued-fraction expansion (CFE) is presented for single-input-single-output (SISO) systems. The similarity transformation matrix which relates the new canonical form to the phase-variable canonical form is also derived. The presented canonical state-space representation is particularly attractive for the application of SISO system theory in which a reduced-dimensional time-domain model is necessary.
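For reference, the phase-variable canonical form mentioned above is the standard controllable realization of a strictly proper SISO transfer function; the notation here is generic and not taken from the paper:

```latex
G(s) = \frac{b_{n-1}s^{n-1} + \cdots + b_1 s + b_0}
            {s^{n} + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0},
\qquad
A = \begin{bmatrix}
0 & 1 & & \\
 & \ddots & \ddots & \\
 & & 0 & 1 \\
-a_0 & -a_1 & \cdots & -a_{n-1}
\end{bmatrix},\quad
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},\quad
C = \begin{bmatrix} b_0 & b_1 & \cdots & b_{n-1} \end{bmatrix}.
```

The similarity transformation derived in the paper maps the Jordan-CFE canonical form to this phase-variable form.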

  4. Retrieval-Induced Inhibition in Short-Term Memory.

    PubMed

    Kang, Min-Suk; Choi, Joongrul

    2015-07-01

    We used a visual illusion called motion repulsion as a model system for investigating competition between two mental representations. Subjects were asked to remember two random-dot-motion displays presented in sequence and then to report the motion directions for each. Remembered motion directions were shifted away from the actual motion directions, an effect similar to the motion repulsion observed during perception. More important, the item retrieved second showed greater repulsion than the item retrieved first. This suggests that earlier retrieval exerted greater inhibition on the other item being held in short-term memory. This retrieval-induced motion repulsion could be explained neither by reduced cognitive resources for maintaining short-term memory nor by continued inhibition between short-term memory representations. These results indicate that retrieval of memory representations inhibits other representations in short-term memory. We discuss mechanisms of retrieval-induced inhibition and their implications for the structure of memory. © The Author(s) 2015.

  5. How a tolerant past affects the present: historical tolerance and the acceptance of Muslim expressive rights.

    PubMed

    Smeekes, Anouk; Verkuyten, Maykel; Poppe, Edwin

    2012-11-01

    Three studies, conducted in The Netherlands, examined the relationship between a tolerant representation of national history and the acceptance of Muslim expressive rights. Following self-categorization theory, it was hypothesized that historical tolerance would be associated with greater acceptance of Muslim expressive rights, especially for natives who strongly identify with their national in-group. Furthermore, it was predicted that the positive effect of representations of historical tolerance on higher identifiers' acceptance could be explained by reduced perceptions of identity incompatibility. The results of Study 1 confirmed the first hypothesis, and the results of Study 2 and Study 3 supported the second hypothesis. These findings underline the importance of historical representations of the nation for understanding current reactions toward immigrants. Importantly, the results show that a tolerant representation of national history can elevate acceptance of immigrants, especially among natives who feel a relatively strong sense of belonging to their nation.

  6. Dynamic Behavior of Sand: Annual Report FY 11

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoun, T; Herbold, E; Johnson, S

    2012-03-15

    Currently, design of earth-penetrating munitions relies heavily on empirical relationships to estimate behavior, making it difficult to design novel munitions or address novel target situations without expensive and time-consuming full-scale testing with relevant system and target characteristics. Enhancing design through numerical studies and modeling could help reduce the extent and duration of full-scale testing if the models have enough fidelity to capture all of the relevant parameters. This can be separated into three distinct problems: that of the penetrator structural and component response, that of the target response, and that of the coupling between the two. This project focuses on enhancing understanding of the target response, specifically granular geomaterials, where the temporal and spatial multi-scale nature of the material controls its response. As part of the overarching goal of developing computational capabilities to predict the performance of conventional earth-penetrating weapons, this project focuses specifically on developing new models and numerical capabilities for modeling sand response in ALE3D. There is general recognition that granular materials behave in a manner that defies conventional continuum approaches which rely on response locality and which degrade in the presence of strong response nonlinearities, localization, and phase gradients. There are many numerical tools available to address parts of the problem. However, to enhance modeling capability, this project is pursuing a bottom-up approach of building constitutive models from higher fidelity, smaller spatial scale simulations (rather than from macro-scale observations of physical behavior as is traditionally employed) that are being augmented to address the unique challenges of mesoscale modeling of dynamically loaded granular materials. 
Through understanding response and sensitivity at the grain-scale, it is expected that better reduced order representations of response can be formulated at the continuum scale as illustrated in Figure 1 and Figure 2. The final result of this project is to implement such reduced order models in the ALE3D material library for general use.

  7. Microbial dynamics in upflow anaerobic sludge blanket (UASB) bioreactor granules in response to short-term changes in substrate feed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovacik, William P.; Scholten, Johannes C.; Culley, David E.

    2010-08-01

    The complexity and diversity of the microbial communities in biogranules from an upflow anaerobic sludge blanket (UASB) bioreactor were determined in response to short-term changes in substrate feeds. The reactor was fed simulated brewery wastewater (SBWW) (70% ethanol, 15% acetate, 15% propionate) for 1.5 months (phase 1), acetate/sulfate for 2 months (phase 2), acetate-alone for 3 months (phase 3), and then a return to SBWW for 2 months (phase 4). Performance of the reactor remained relatively stable throughout the experiment as shown by COD removal and gas production. 16S rDNA, methanogen-associated mcrA and sulfate reducer-associated dsrAB genes were PCR amplified, then cloned and sequenced. Sequence analysis of 16S clone libraries showed a relatively simple community composed mainly of the methanogenic Archaea (Methanobacterium and Methanosaeta), members of the Green Non-Sulfur (Chloroflexi) group of Bacteria, followed by fewer numbers of Syntrophobacter, Spirochaeta, Acidobacteria and Cytophaga-related Bacterial sequences. Methanogen-related mcrA clone libraries were dominated throughout by Methanobacter and Methanospirillum related sequences. Although not numerous enough to be detected in our 16S rDNA libraries, sulfate reducers were detected in dsrAB clone libraries, with sequences related to Desulfovibrio and Desulfomonile. Community diversity levels (Shannon-Weiner index) generally decreased for all libraries in response to a change from SBWW to acetate-alone feed. But there was a large transitory increase noted in 16S diversity at the two-month sampling on acetate-alone, entirely related to an increase in Bacterial diversity. Upon return to SBWW conditions in phase 4, all diversity measures returned to near phase 1 levels.

  8. DNA-Compatible Nitro Reduction and Synthesis of Benzimidazoles.

    PubMed

    Du, Huang-Chi; Huang, Hongbing

    2017-10-18

    DNA-encoded chemical libraries have emerged as a cost-effective alternative to high-throughput screening (HTS) for hit identification in drug discovery. A key factor for productive DNA-encoded libraries is the chemical diversity of the small molecule moiety attached to an encoding DNA oligomer. The library structure diversity is often limited to DNA-compatible chemical reactions in aqueous media. Herein, we describe a facile process for reducing aryl nitro groups to aryl amines. The new protocol offers simple operation and circumvents the pyrophoric potential of the conventional method (Raney nickel). The reaction is performed in aqueous solution and does not compromise DNA structural integrity. The utility of this method is demonstrated by the versatile synthesis of benzimidazoles on DNA.

  9. The effect of U.S. policies on the economics of libraries.

    PubMed Central

    Cummings, M M

    1985-01-01

    The decline in federal support of educational programs has made it difficult for libraries to apply new technologies to improve practices and services. While federal support has declined in constant dollars, there has been a modest increase in grants from private foundations. Current U.S. policies require federal agencies to recover full costs of rendering services (Circular A-25) and require the transfer of many federal service-oriented activities to the commercial sector (Circular A-76). Additionally, the Paperwork Reduction Act of 1980 is inhibiting the production and dissemination of federal publications. Government pursuit of these policies adds a heavy economic burden to libraries and threatens to reduce access to the scholarly and scientific record. PMID:3978292

  10. Development of common neural representations for distinct numerical problems

    PubMed Central

    Chang, Ting-Ting; Rosenberg-Lee, Miriam; Metcalfe, Arron W. S.; Chen, Tianwen; Menon, Vinod

    2015-01-01

    How the brain develops representations for abstract cognitive problems is a major unaddressed question in neuroscience. Here we tackle this fundamental question using arithmetic problem solving, a cognitive domain important for the development of mathematical reasoning. We first examined whether adults demonstrate common neural representations for addition and subtraction problems, two complementary arithmetic operations that manipulate the same quantities. We then examined how the common neural representations for the two problem types change with development. Whole-brain multivoxel representational similarity (MRS) analysis was conducted to examine common coding of addition and subtraction problems in children and adults. We found that adults exhibited significant levels of MRS between the two problem types, not only in the intra-parietal sulcus (IPS) region of the posterior parietal cortex (PPC), but also in ventral temporal-occipital, anterior temporal and dorsolateral prefrontal cortices. Relative to adults, children showed significantly reduced levels of MRS in these same regions. In contrast, no brain areas showed significantly greater MRS between problem types in children. Our findings provide novel evidence that the emergence of arithmetic problem solving skills from childhood to adulthood is characterized by maturation of common neural representations between distinct numerical operations, and involves distributed brain regions important for representing and manipulating numerical quantity. More broadly, our findings demonstrate that representational analysis provides a powerful approach for uncovering fundamental mechanisms by which children develop proficiencies that are a hallmark of human cognition. PMID:26160287

  11. Changes in Self-Representations Following Psychoanalytic Psychotherapy for Young Adults: A Comparative Typology.

    PubMed

    Werbart, Andrzej; Brusell, Lars; Iggedal, Rebecka; Lavfors, Kristin; Widholm, Alexander

    2016-10-01

    Changes in dynamic psychological structures are often a treatment goal in psychotherapy. The present study aimed at creating a typology of self-representations among young women and men in psychoanalytic psychotherapy, to study longitudinal changes in self-representations, and to compare self-representations in the clinical sample with those of a nonclinical group. Twenty-five women and sixteen men were interviewed according to Blatt's Object Relations Inventory pretreatment, at termination, and at a 1.5-year follow-up. In the comparison group, eleven women and nine men were interviewed at baseline, 1.5 years, and three years later. Typologies of the 123 self-descriptions in the clinical group and 60 in the nonclinical group were constructed by means of ideal-type analysis for men and women separately. Clusters of self-representations could be depicted on a two-dimensional matrix with the axes Relatedness-Self-definition and Integration-Nonintegration. In most cases, the self-descriptions changed over time in terms of belonging to different ideal-type clusters. In the clinical group, there was a movement toward increased integration in self-representations, but above all toward a better balance between relatedness and self-definition. The changes continued after termination, paralleled by reduced symptoms, improved functioning, and higher developmental levels of representations. No corresponding tendency could be observed in the nonclinical group.

  12. Cognitive representations of breast cancer, emotional distress and preventive health behaviour: a theoretical perspective.

    PubMed

    Decruyenaere, M; Evers-Kiebooms, G; Welkenhuysen, M; Denayer, L; Claes, E

    2000-01-01

    Individuals at high risk for developing breast and/or ovarian cancer are faced with difficult decisions regarding genetic testing, cancer prevention and/or intensive surveillance. Large interindividual differences exist in the uptake of these health-related services. This paper is aimed at understanding and predicting how people emotionally and behaviourally react to information concerning genetic predisposition to breast/ovarian cancer. For this purpose, the self-regulation model of illness representations is elaborated. This model suggests that health-related behaviour is influenced by a person's cognitive and emotional representation of the health threat. These representations generate coping behaviour aimed at resolving the objective health problems (problem-focussed coping) and at reducing the emotional distress induced by the health threat (emotion-focussed coping). Based on theoretical considerations and empirical studies, four interrelated attributes of the cognitive illness representation of hereditary breast/ovarian cancer are described: causal beliefs concerning the disease, perceived severity, perceived susceptibility to the disease and perceived controllability. The paper also addresses the complex interactions between these cognitive attributes, emotional distress and preventive health behaviour.

  13. User-based representation of time-resolved multimodal public transportation networks.

    PubMed

    Alessandretti, Laura; Karsai, Márton; Gauvin, Laetitia

    2016-07-01

    Multimodal transportation systems, with several coexisting services like bus, tram and metro, can be represented as time-resolved multilayer networks where the different transportation modes connecting the same set of nodes are associated with distinct network layers. Their quantitative description became possible recently due to openly accessible datasets describing the geo-localized transportation dynamics of large urban areas. Advancements call for novel analytics, which combines earlier established methods and exploits the inherent complexity of the data. Here, we provide a novel user-based representation of public transportation systems, which combines representations, accounting for the presence of multiple lines and reducing the effect of spatial embeddedness, while considering the total travel time, its variability across the schedule, and taking into account the number of transfers necessary. After the adjustment of earlier techniques to the novel representation framework, we analyse the public transportation systems of several French municipal areas and identify hidden patterns of privileged connections. Furthermore, we study their efficiency as compared to the commuting flow. The proposed representation could help to enhance resilience of local transportation systems to provide better design policies for future developments.

  14. User-based representation of time-resolved multimodal public transportation networks

    PubMed Central

    Alessandretti, Laura; Gauvin, Laetitia

    2016-01-01

    Multimodal transportation systems, with several coexisting services like bus, tram and metro, can be represented as time-resolved multilayer networks where the different transportation modes connecting the same set of nodes are associated with distinct network layers. Their quantitative description became possible recently due to openly accessible datasets describing the geo-localized transportation dynamics of large urban areas. Advancements call for novel analytics, which combines earlier established methods and exploits the inherent complexity of the data. Here, we provide a novel user-based representation of public transportation systems, which combines representations, accounting for the presence of multiple lines and reducing the effect of spatial embeddedness, while considering the total travel time, its variability across the schedule, and taking into account the number of transfers necessary. After the adjustment of earlier techniques to the novel representation framework, we analyse the public transportation systems of several French municipal areas and identify hidden patterns of privileged connections. Furthermore, we study their efficiency as compared to the commuting flow. The proposed representation could help to enhance resilience of local transportation systems to provide better design policies for future developments. PMID:27493773

  15. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    PubMed Central

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  16. Social representations and normative beliefs of aging.

    PubMed

    Torres, Tatiana de Lucena; Camargo, Brigido Vizeu; Boulsfield, Andréa Barbará; Silva, Antônia Oliveira

    2015-12-01

    This study adopted the theory of social representations as a theoretical framework in order to characterize similarities and differences in social representations and normative beliefs of aging for different age groups. The 638 participants responded to a self-administered questionnaire and were equally distributed by sex and age. The results show that aging is characterized by positive stereotypes (knowledge and experience); however, retirement is linked to aging in a negative way, particularly for men, involving illness, loneliness and disability. When age was considered, it was verified that the connections with the representational elements became more complex for older groups, showing social representation functionality, largely for the elderly. Adulthood seems to be preferred and old age is disliked. There were divergences related to the perception of the beginning of life phases, especially that of old age. Work was characterized as the opposite of aging, revealing the need for actions intended for the elderly and retired workers, with post-retirement projects. In addition, the findings suggest investment in public policies that encourage intergenerational contact, with efforts to reduce intolerance and discrimination based on people's age.

  17. Quantitation of next generation sequencing library preparation protocol efficiencies using droplet digital PCR assays - a systematic comparison of DNA library preparation kits for Illumina sequencing.

    PubMed

    Aigrain, Louise; Gu, Yong; Quail, Michael A

    2016-06-13

    The emergence of next-generation sequencing (NGS) technologies in the past decade has democratized DNA sequencing, both in price per sequenced base and in the ease of producing DNA libraries. When preparing DNA sequencing libraries for Illumina, the current market leader, a plethora of kits is available, and it can be difficult for users to determine which kit is the most appropriate and efficient for their application; the main concerns are not only cost but also minimal bias, yield, and time efficiency. We compared 9 commercially available library preparation kits in a systematic manner using the same DNA sample, probing the amount of DNA remaining after each protocol step with a new droplet digital PCR (ddPCR) assay. This method allows precise quantification of fragments bearing either adaptors or P5/P7 sequences on both ends immediately after ligation or PCR enrichment. We also investigated the potential influence of DNA input and DNA fragment size on final library preparation efficiency. The overall library preparation efficiencies varied considerably between kits, with those combining several steps into a single one exhibiting final yields 4 to 7 times higher than the other kits. Detailed ddPCR data also reveal that the adaptor ligation yield itself varies by more than a factor of 10 between kits; some ligation efficiencies are so low that they could impair the original library complexity and impoverish the sequencing results. When a PCR enrichment step is necessary, lower adaptor-ligated DNA inputs lead to greater amplification yields, hiding the latent disparity between kits. We describe a ddPCR assay that allows us to probe the efficiency of the most critical step in library preparation, ligation, and to draw conclusions about which kits are more likely to preserve sample heterogeneity and reduce the need for amplification.
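The per-step bookkeeping described above reduces to simple ratios of ddPCR absolute counts before and after each step. A minimal sketch, with invented copy numbers (not values from the study):

```python
# Hypothetical ddPCR readouts (copies/uL) for one kit; the numbers below are
# invented for illustration and are not measurements from the study.
input_copies = 1.0e5     # fragmented input DNA molecules assayed
ligated_copies = 2.5e4   # fragments carrying adaptors on both ends (post-ligation assay)
enriched_copies = 9.0e4  # fragments carrying P5/P7 on both ends (post-PCR assay)

def step_yield(after: float, before: float) -> float:
    """Fraction (or fold change) of molecules surviving a protocol step."""
    return after / before

ligation_efficiency = step_yield(ligated_copies, input_copies)    # 0.25
amplification_fold = step_yield(enriched_copies, ligated_copies)  # 3.6
```

A low `ligation_efficiency` flags the complexity bottleneck the abstract warns about, while a high `amplification_fold` can mask it in the final yield.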

  18. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory evoked cortical activity while monkeys listened to vocalization sounds. In contrast, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of 'Coo' calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for approximately 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.
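The decoding step described above is a linear regression from cortical features to the low-dimensional call parameters, scored by variance explained. A minimal sketch on simulated data (the channel count, call count, and dimensionality below are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 'Coo' calls, 32 ECoG channels, a 2-D reduced
# representation from a vocal-organ model. All sizes are illustrative.
n_calls, n_channels, n_dims = 200, 32, 2
X = rng.standard_normal((n_calls, n_channels))                 # motor cortical features
W_true = rng.standard_normal((n_channels, n_dims))             # unknown mapping
Y = X @ W_true + 0.7 * rng.standard_normal((n_calls, n_dims))  # reduced call parameters

# Ordinary least squares with an intercept term
X1 = np.hstack([X, np.ones((n_calls, 1))])
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
Y_hat = X1 @ W

# Pooled fraction of variance explained (the study reports ~65% on real data)
r2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
```

On real recordings the fit would be evaluated on held-out calls; in-sample R² as computed here overstates decoding performance.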

  19. Neural priming in human frontal cortex: multiple forms of learning reduce demands on the prefrontal executive system.

    PubMed

    Race, Elizabeth A; Shanker, Shanti; Wagner, Anthony D

    2009-09-01

    Past experience is hypothesized to reduce computational demands in PFC by providing bottom-up predictive information that informs subsequent stimulus-action mapping. The present fMRI study measured cortical activity reductions ("neural priming"/"repetition suppression") during repeated stimulus classification to investigate the mechanisms through which learning from the past decreases demands on the prefrontal executive system. Manipulation of learning at three levels of representation-stimulus, decision, and response-revealed dissociable neural priming effects in distinct frontotemporal regions, supporting a multiprocess model of neural priming. Critically, three distinct patterns of neural priming were identified in lateral frontal cortex, indicating that frontal computational demands are reduced by three forms of learning: (a) cortical tuning of stimulus-specific representations, (b) retrieval of learned stimulus-decision mappings, and (c) retrieval of learned stimulus-response mappings. The topographic distribution of these neural priming effects suggests a rostrocaudal organization of executive function in lateral frontal cortex.

  20. Clinical and academic use of electronic and print books: the Health Sciences Library System e-book study at the University of Pittsburgh.

    PubMed

    Folb, Barbara L; Wessel, Charles B; Czechowski, Leslie J

    2011-07-01

    The purpose of the Health Sciences Library System (HSLS) electronic book (e-book) study was to assess use, and factors affecting use, of e-books by all patron groups of an academic health sciences library serving both university and health system-affiliated patrons. A web-based survey was distributed to a random sample (n=5,292) of holders of library remote access passwords. A total of 871 completed and 108 partially completed surveys were received, for an approximate response rate of 16.5%-18.5%, with all user groups represented. Descriptive and chi-square analyses were done using SPSS 17. Library e-books were used by 55.4% of respondents. Use by role varied: 21.3% of faculty reported having assigned all or part of an e-book for class readings, while 86% of interns, residents, and fellows reported using an e-book to support clinical care. Respondents preferred print for textbooks and manuals and electronic format for research protocols, pharmaceutical, and reference books, but indicated high flexibility about format choice. They rated printing and saving e-book content as more important than annotation, highlighting, and bookmarking features. Respondents' willingness to use alternate formats, if convenient, suggests that libraries can selectively reduce title duplication between print and e-books and still support library user information needs, especially if publishers provide features that users want. Marketing and user education may increase use of e-book collections.
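The reported 16.5%-18.5% response-rate range follows from the complete and partial counts, and the role-by-use comparisons are chi-square tests of the kind SPSS computes. A minimal sketch; the 2x2 contingency table at the end is hypothetical, not the study's data:

```python
# Response-rate arithmetic from the abstract
sample = 5292
complete, partial = 871, 108
low = 100 * complete / sample               # complete surveys only -> ~16.5%
high = 100 * (complete + partial) / sample  # including partials    -> ~18.5%

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (e.g. rows: patron role; columns: e-book use)."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    obs = ((a, b), (c, d))
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (obs[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: faculty vs. clinical trainees, used vs. did not use e-books
stat = chi_square_2x2(40, 148, 86, 14)
```

With 1 degree of freedom, the statistic would be compared against the chi-square distribution to obtain the p-value SPSS reports.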
