Methods, Tools and Current Perspectives in Proteogenomics
Ruggles, Kelly V.; Krug, Karsten; Wang, Xiaojing; Clauser, Karl R.; Wang, Jing; Payne, Samuel H.; Fenyö, David; Zhang, Bing; Mani, D. R.
2017-01-01
With combined technological advancements in high-throughput next-generation sequencing and deep mass spectrometry-based proteomics, proteogenomics, i.e., the integrative analysis of proteomic and genomic data, has emerged as a new research field. Early efforts in the field were focused on improving protein identification using sample-specific genomic and transcriptomic sequencing data. More recently, integrative analysis of quantitative measurements from genomic and proteomic studies has yielded novel insights into gene expression regulation, cell signaling, and disease. Many methods and tools have been developed or adapted to enable an array of integrative proteogenomic approaches, and in this article we systematically classify published methods and tools into four major categories: (1) Sequence-centric proteogenomics; (2) Analysis of proteogenomic relationships; (3) Integrative modeling of proteogenomic data; and (4) Data sharing and visualization. We provide a comprehensive review of methods and available tools in each category and highlight their typical applications. PMID:28456751
Bertaccini, Diego; Vaca, Sebastian; Carapito, Christine; Arsène-Ploetze, Florence; Van Dorsselaer, Alain; Schaeffer-Reiss, Christine
2013-06-07
In silico gene prediction has proven to be prone to errors, especially regarding the precise localization of start codons, and these errors propagate into subsequent biological studies. The high-throughput characterization of protein N-termini is therefore becoming an emerging challenge in the proteomics, and especially proteogenomics, fields. The trimethoxyphenyl phosphonium (TMPP) labeling approach (N-TOP) is an efficient N-terminomic approach that allows the characterization of both N-terminal and internal peptides in a single experiment. Due to its permanent positive charge, TMPP labeling strongly affects MS/MS fragmentation, resulting in poorly adapted scoring of TMPP-derivatized peptide spectra by classical search engines. This behavior has led to difficulties in validating TMPP-derivatized peptide identifications with the usual score filtering, and thus to low/underestimated numbers of identified N-termini. We present herein a new strategy (dN-TOP) that overcomes this limitation, allowing confident and automated N-terminal peptide validation thanks to combined labeling with light and heavy TMPP reagents. We show how this double labeling increases the number of validated N-terminal peptides. This strategy represents a considerable improvement over the well-established N-TOP method, with enhanced and accelerated data processing making it now fully compatible with high-throughput proteogenomics studies.
Zhu, Xun; Xie, Shangbo; Armengaud, Jean; Xie, Wen; Guo, Zhaojiang; Kang, Shi; Wu, Qingjun; Wang, Shaoli; Xia, Jixing; He, Rongjun; Zhang, Youjun
2016-01-01
The diamondback moth, Plutella xylostella (L.), is the major cosmopolitan pest of brassica and other cruciferous crops. Its larval midgut is a dynamic tissue that interfaces with a wide variety of toxicological and physiological processes. The draft sequence of the P. xylostella genome was recently released, but its annotation remains challenging because of the low sequence coverage of this branch of life and the poor description of exon/intron splicing rules for these insects. Peptide sequencing by computational assignment of tandem mass spectra to genome sequence information provides an independent, experiment-based approach for confirming or refuting protein predictions, a concept that has been termed proteogenomics. In this study, we carried out an in-depth proteogenomic analysis to complement genome annotation of the P. xylostella larval midgut based on shotgun HPLC-ESI-MS/MS data by means of a multialgorithm pipeline. A total of 876,341 tandem mass spectra were searched against the predicted P. xylostella protein sequences and a whole-genome six-frame translation database. Based on a data set comprising 2694 novel genome search specific peptides, we discovered 439 novel protein-coding genes and corrected 128 existing gene models. To get the most accurate data to seed further insect genome annotation, more than half of the novel protein-coding genes, i.e., 235 of 439, were further validated after RT-PCR amplification and sequencing of the corresponding transcripts. Furthermore, we validated 53 novel alternative splicing events. Finally, a total of 6764 proteins were identified, resulting in one of the most comprehensive proteogenomic studies of a nonmodel animal. As the first tissue-specific proteogenomics analysis of P. xylostella, this study provides the fundamental basis for high-throughput proteomics and functional genomics approaches aimed at deciphering the molecular mechanisms of resistance and controlling this pest. PMID:26902207
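The whole-genome six-frame translation database mentioned in this abstract can be sketched as follows. This is a minimal illustration, assuming the standard genetic code and a simple ORF-length cutoff; it is not the multialgorithm pipeline used in the study:

```python
# Minimal six-frame translation sketch for building a proteogenomic
# search database from a genome sequence (standard genetic code).
BASES = "TCAG"
# Amino acids for the 64 codons ordered T,C,A,G at each position ('*' = stop).
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(zip((a + b + c for a in BASES for b in BASES for c in BASES), AMINO))

def revcomp(seq):
    """Reverse complement for the three reverse-strand frames."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def translate(seq):
    """Translate a nucleotide string, dropping any trailing partial codon."""
    usable = len(seq) - len(seq) % 3
    return "".join(CODON_TABLE[seq[i:i + 3]] for i in range(0, usable, 3))

def six_frame_entries(genome, min_len=7):
    """Translate all six reading frames, split at stop codons, and keep
    fragments long enough to yield informative peptides (min_len is an
    illustrative assumption)."""
    entries = []
    for strand in (genome, revcomp(genome)):
        for frame in range(3):
            for fragment in translate(strand[frame:]).split("*"):
                if len(fragment) >= min_len:
                    entries.append(fragment)
    return entries
```

In a real workflow the resulting fragments would be written to a FASTA file and searched alongside the predicted protein sequences.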
Proteogenomic insights into uranium tolerance of a Chernobyl's Microbacterium bacterial isolate.
Gallois, Nicolas; Alpha-Bazin, Béatrice; Ortet, Philippe; Barakat, Mohamed; Piette, Laurie; Long, Justine; Berthomieu, Catherine; Armengaud, Jean; Chapon, Virginie
2018-04-15
Microbacterium oleivorans A9 is a uranium-tolerant actinobacterium isolated from trench T22, located near the Chernobyl nuclear power plant. This site is contaminated with different radionuclides, including uranium. To observe the molecular changes at the proteome level occurring in this strain upon uranyl exposure, and to understand the molecular mechanisms explaining its uranium tolerance, we established its draft genome and used this raw information to perform an in-depth proteogenomics study. High-throughput proteomics was performed on cells exposed or not to 10 μM uranyl nitrate, sampled at three previously identified phases of uranyl tolerance. We experimentally detected and annotated 1532 proteins and highlighted a total of 591 proteins whose abundances differed significantly between conditions. Notably, proteins involved in phosphate and iron metabolism show high dynamics. A large proportion of the proteins more abundant upon uranyl stress are distant from functionally annotated known proteins, highlighting the lack of fundamental knowledge regarding numerous key molecular players from soil bacteria. Microbacterium oleivorans A9 is an interesting environmental model for understanding biological processes engaged in tolerance to radionuclides. Using an innovative proteogenomics approach, we explored the molecular mechanisms involved in its uranium tolerance. We sequenced its genome, interpreted high-throughput proteomic data against a six-reading-frame ORF database deduced from the draft genome, annotated the identified proteins, and compared protein abundances from cells exposed or not to uranyl stress after a cascade search. These data show that a complex cellular response to uranium occurs in Microbacterium oleivorans A9, where one third of the experimental proteome is modified. In particular, uranyl stress perturbed the phosphate and iron metabolic pathways.
Furthermore, several transporters were identified as specifically associated with uranyl stress, paving the way for the development of biotechnological tools for uranium decontamination. Copyright © 2017. Published by Elsevier B.V.
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research.
Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
Li, Honglan; Joh, Yoon Sung; Kim, Hyunwoo; Paek, Eunok; Lee, Sang-Won; Hwang, Kyu-Baek
2016-12-22
Proteogenomics is a promising approach for tasks ranging from gene annotation to cancer research. Databases for proteogenomic searches are often constructed by adding peptide sequences inferred from genomic or transcriptomic evidence to reference protein sequences. Such inflation of databases has the potential to identify novel peptides. However, it also raises concerns about sensitive and reliable peptide identification. Spurious peptides included in target databases may result in an underestimated false discovery rate (FDR). On the other hand, inflation of decoy databases could decrease the sensitivity of peptide identification due to the increased number of high-scoring random hits. Although several studies have addressed these issues, widely applicable guidelines for sensitive and reliable proteogenomic searches have been scarce. To systematically evaluate the effect of database inflation in proteogenomic searches, we constructed a variety of real and simulated proteogenomic databases for yeast and human tandem mass spectrometry (MS/MS) data, respectively. Against these databases, we tested two popular database search tools with various approaches to search result validation: the target-decoy search strategy (with and without a refined scoring metric) and a mixture model-based method. The effect of separate filtering of known and novel peptides was also examined. The results from real and simulated proteogenomic searches confirmed that separate filtering increases sensitivity and reliability in proteogenomic searches. However, no single method consistently identified the largest (or the smallest) number of novel peptides in real proteogenomic searches. We propose using a set of search result validation methods with separate filtering for sensitive and reliable identification of peptides in proteogenomic searches.
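The separate-filtering idea examined in this abstract can be sketched as follows: peptide-spectrum matches (PSMs) are split into known and novel classes, and a target-decoy FDR threshold is applied within each class independently. The scores, class-labeling convention, and 1% threshold below are illustrative assumptions, not the study's exact implementation:

```python
def tda_filter(psms, fdr=0.01):
    """Target-decoy FDR filter. psms is a list of (score, is_decoy, peptide)
    tuples, where a higher score means a better match. Returns the target
    peptides in the largest score-ranked set whose estimated FDR
    (decoys / targets) stays at or below the threshold."""
    ranked = sorted(psms, key=lambda p: -p[0])
    targets = decoys = 0
    cutoff = -1  # index of the last PSM in the accepted set
    for i, (_, is_decoy, _) in enumerate(ranked):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= fdr:
            cutoff = i
    return [pep for _, is_decoy, pep in ranked[:cutoff + 1] if not is_decoy]

def separate_filter(psms, is_novel, fdr=0.01):
    """Filter known and novel peptide classes independently, so that
    abundant high-scoring known peptides do not mask the error rate
    among the rarer novel candidates."""
    known = [p for p in psms if not is_novel(p[2])]
    novel = [p for p in psms if is_novel(p[2])]
    return tda_filter(known, fdr), tda_filter(novel, fdr)
```

Filtering the novel class on its own typically yields a stricter effective score cutoff for novel peptides, which is the source of the reliability gain reported above.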
Hernandez-Valladares, Maria; Vaudel, Marc; Selheim, Frode; Berven, Frode; Bruserud, Øystein
2017-08-01
Mass spectrometry (MS)-based proteomics has become an indispensable tool for the characterization of the proteome and its post-translational modifications (PTMs). In addition to standard protein sequence databases, proteogenomics strategies search the spectral data against theoretical spectra obtained from customized protein sequence databases. To date, there are no published proteogenomics studies on acute myeloid leukemia (AML) samples. Areas covered: Proteogenomics involves the understanding of genomic and proteomic data. The intersection of both data types requires advanced bioinformatics skills. A standard proteogenomics workflow that could be used for the study of AML samples is described. The generation of customized protein sequence databases, as well as bioinformatics tools and pipelines commonly used in proteogenomics, are discussed in detail. Expert commentary: Drawing on evidence from recent cancer proteogenomics studies and taking into account the public availability of AML genomic data, the interpretation of present and future MS-based AML proteomic data using AML-specific protein sequence databases could uncover new biological mechanisms and targets in AML. However, proteogenomics workflows, including bioinformatics guidelines, can be challenging for the wider AML research community. It is expected that further automation and simplification of the bioinformatics procedures might attract AML investigators to adopt the proteogenomics strategy.
International Cancer Proteogenome Consortium | Office of Cancer Clinical Proteomics Research
The International Cancer Proteogenome Consortium (ICPC) is a voluntary scientific organization that provides a forum for collaboration among some of the world's leading cancer and proteogenomic research centers.
Shukla, Hem D
2017-10-25
During the past century, our understanding of cancer diagnosis and treatment has been based on a monogenic approach, and as a consequence our knowledge of the clinical genetic underpinnings of cancer is incomplete. The completion of the human genome in 2003 steered us into therapeutic target discovery, enabling us to mine the genome using cutting-edge proteogenomics tools. A number of novel and promising cancer targets have emerged from the genome project for diagnostics, therapeutics, and prognostic markers, which are being used to monitor response to cancer treatment. The heterogeneous nature of cancer has hindered progress in understanding the underlying mechanisms that lead to abnormal cellular growth. Since the start of The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium projects, there has been tremendous progress in genome sequencing, and immense numbers of cancer genomes have been completed; this approach has transformed our understanding of the diagnosis and treatment of different types of cancers. By employing genomics and proteomics technologies, an immense amount of genomic data is being generated on clinical tumors, which has transformed the cancer landscape and has the potential to transform cancer diagnosis and prognosis. A complete molecular view of the cancer landscape is necessary for understanding the underlying mechanisms of cancer initiation and for improving diagnosis and prognosis, which ultimately will lead to personalized treatment. Interestingly, cancer proteome analysis has also allowed us to identify biomarkers to monitor drug and radiation resistance in patients undergoing cancer treatment. Further, TCGA-funded studies have allowed for the genomic and transcriptomic characterization of targeted cancers, with this analysis aiding the development of targeted therapies for highly lethal malignancies.
High-throughput technologies generating complete proteome, epigenome, protein-protein interaction, and pharmacogenomics data are indispensable for gaining insight into the cancer genome and proteome, and these approaches have produced multidimensional omics data with the potential to facilitate precision medicine. However, due to slow progress in computational technologies, the translation of big omics data into clinical applications has been slow. In this review, attempts have been made to describe the role of high-throughput genomic and proteomic technologies in identifying a panel of biomarkers that could be used for the early diagnosis and prognosis of cancer.
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High-confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method to both metaproteomic and proteogenomic analysis resulted in twice the number of high-confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes peptide matching sensitivity for applications requiring large databases, which is especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
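The database-reduction step of the two-step strategy can be sketched roughly as below. The data structures and the reversed-sequence decoy scheme are illustrative assumptions for a minimal sketch, not the published implementation:

```python
def subset_database(large_db, first_pass_matches):
    """Step 1 aftermath: keep only the proteins hit at least once in a
    permissive first-pass search against the large database.
    large_db maps accession -> sequence; matches carry a 'protein' key."""
    hit = {m["protein"] for m in first_pass_matches}
    return {acc: seq for acc, seq in large_db.items() if acc in hit}

def target_decoy_db(subset_db, host_db=None):
    """Step 2 input: merge the subset database (plus an optional host
    database) with reversed-sequence decoys so that the second, stringent
    search can estimate FDR on a much smaller search space."""
    merged = dict(subset_db)
    if host_db:
        merged.update(host_db)
    # Decoy entries: reversed sequences under a tagged accession.
    merged.update({"rev_" + acc: seq[::-1] for acc, seq in merged.items()})
    return merged
```

Because the second search runs against a database orders of magnitude smaller, score thresholds at a fixed FDR relax, which is where the sensitivity gain reported above comes from.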
NCI and FDA to Study Cancer Proteogenomics Together | Office of Cancer Clinical Proteomics Research
The National Cancer Institute (NCI) Office of Cancer Clinical Proteomics Research (OCCPR), part of the National Institutes of Health, and the U.S. Food and Drug Administration (FDA) have signed a Memorandum of Understanding (MOU) on proteogenomic regulatory science. This will allow the agencies to share information that will accelerate the development of proteogenomic technologies and biomarkers as they relate to precision medicine in cancer.
The National Cancer Institute's (NCI) Office of Cancer Clinical Proteomics Research, part of the National Institutes of Health, along with the Indian Institute of Technology Bombay (IITB) and Tata Memorial Centre (TMC), have signed a Memorandum of Understanding (MOU) on clinical proteogenomics cancer research. The MOU between NCI, IITB, and TMC adds the thirtieth and thirty-first institutions and the twelfth country to the International Cancer Proteogenome Consortium (ICPC). The purpose of the MOU is to facilitate scientific and programmatic collaborations between NCI, IITB, and TMC in basic and clinical proteogenomic studies leading to improved patient care, public dissemination, and information sharing with the research community.
Fu, Shuyue; Liu, Xiang; Luo, Maochao; Xie, Ke; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua
2017-04-01
Chemoresistance is a major obstacle for current cancer treatment. Proteogenomics is a powerful multi-omics research field that uses customized protein sequence databases, generated from genomic and transcriptomic information, to identify novel gene products (e.g., from noncoding regions, mutations, and fusion genes) in mass spectrometry-based proteomic data. By identifying aberrations that are differentially expressed between tumor and normal pairs, this approach can also be applied to validate protein variants in cancer, which may reveal the response to drug treatment. Areas covered: In this review, we present recent advances in proteogenomic investigations of cancer drug resistance, with an emphasis on integrative proteogenomic pipelines and the biomarker discovery that contributes to the goal of precision/personalized medicine for cancer treatment. Expert commentary: The discovery and comprehensive understanding of potential biomarkers help identify the cohort of patients who may benefit from particular treatments, and will assist real-time clinical decision-making to maximize therapeutic efficacy and minimize adverse effects. With the development of MS-based proteomics and NGS-based sequencing, a growing number of proteogenomic tools are being developed specifically to investigate cancer drug resistance.
Jimenez, Connie R; Verheul, Henk M W
2014-01-01
Proteomics is optimally suited to bridge the gap between genomic information on the one hand and biologic functions and disease phenotypes on the other, since it studies, at a global level in biologic specimens, the expression and/or post-translational modification (especially phosphorylation) of proteins, the major cellular players bringing about cellular functions. Mass spectrometry technology and (bio)informatic tools have matured to the extent that they can provide high-throughput, comprehensive, and quantitative protein inventories of cells, tissues, and biofluids in clinical samples, even at low sample amounts. In this article, we focus on next-generation proteomics employing nanoliquid chromatography coupled to high-resolution tandem mass spectrometry for in-depth (phospho)protein profiling of tumor tissues and (proximal) biofluids, with a focus on studies employing clinical material. In addition, we highlight emerging proteogenomic approaches for the identification of tumor-specific protein variants, and targeted multiplex mass spectrometry strategies for large-scale biomarker validation. Below we provide a discussion of recent progress, some research highlights, and challenges that remain for clinical translation of proteomic discoveries.
CPTAC Teams | Office of Cancer Clinical Proteomics Research
The following are the current CPTAC teams, representing a network of Proteome Characterization Centers (PCCs), Proteogenomic Translational Research Centers (PTRCs), and Proteogenomic Data Analysis Centers (PGDACs). Teams are listed alphabetically by institution, with their respective Principal Investigators:
Willems, Patrick; Ndah, Elvis; Jonckheere, Veronique; Stael, Simon; Sticker, Adriaan; Martens, Lennart; Van Breusegem, Frank; Gevaert, Kris; Van Damme, Petra
2017-06-01
Proteogenomics is an emerging research field yet lacking a uniform method of analysis. Proteogenomic studies in which N-terminal proteomics and ribosome profiling are combined, suggest that a high number of protein start sites are currently missing in genome annotations. We constructed a proteogenomic pipeline specific for the analysis of N-terminal proteomics data, with the aim of discovering novel translational start sites outside annotated protein coding regions. In summary, unidentified MS/MS spectra were matched to a specific N-terminal peptide library encompassing protein N termini encoded in the Arabidopsis thaliana genome. After a stringent false discovery rate filtering, 117 protein N termini compliant with N-terminal methionine excision specificity and indicative of translation initiation were found. These include N-terminal protein extensions and translation from transposable elements and pseudogenes. Gene prediction provided supporting protein-coding models for approximately half of the protein N termini. Besides the prediction of functional domains (partially) contained within the newly predicted ORFs, further supporting evidence of translation was found in the recently released Araport11 genome re-annotation of Arabidopsis and computational translations of sequences stored in public repositories. Most interestingly, complementary evidence by ribosome profiling was found for 23 protein N termini. Finally, by analyzing protein N-terminal peptides, an in silico analysis demonstrates the applicability of our N-terminal proteogenomics strategy in revealing protein-coding potential in species with well- and poorly-annotated genomes. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
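The N-terminal methionine excision (NME) compliance check mentioned in this abstract can be sketched as below. The residue set encoded here (initiator-Met removal permitted when the penultimate residue has a small side chain: A, C, G, P, S, T, or V) is the commonly cited specificity rule; the exact set used in the study's filtering is an assumption in this sketch:

```python
# Residues with small side chains that permit initiator-Met (iMet) excision.
NME_SMALL = set("ACGPSTV")

def consistent_with_initiation(nterm_peptide, imet_removed):
    """Check whether an observed N-terminal peptide is compatible with
    translation initiation under NME rules:
    - iMet retained: the peptide must start with Met.
    - iMet removed: the first observed residue (position 2 of the nascent
      protein) must have a small side chain, otherwise methionine
      aminopeptidases would not have cleaved the iMet."""
    if imet_removed:
        return nterm_peptide[0] in NME_SMALL
    return nterm_peptide.startswith("M")
```

A filter of this kind is what lets N-terminal peptides mapping outside annotated coding regions be accepted as evidence of genuine translation initiation rather than internal cleavage.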
Fiore, L D; Rodriguez, H; Shriver, C D
2017-05-01
A tri-federal initiative arising out of the Cancer Moonshot has resulted in the formation of a program to utilize advanced genomic and proteomic expression platforms on high-quality human biospecimens in near-real-time in order to identify potentially actionable therapeutic molecular targets, study the relationship of molecular findings to cancer treatment outcomes, and accelerate novel clinical trials with biomarkers of prognostic and predictive value. © 2017. This article is a U.S. Government work and is in the public domain in the USA.
The National Cancer Institute's (NCI) Office of Cancer Clinical Proteomics Research, part of the National Institutes of Health, and Korea University (KU) located in The Republic of Korea have signed a Memorandum of Understanding (MOU) in clinical proteogenomics cancer research. The MOU between NCI and KU represents the twenty-ninth institution and eleventh country to join the continued effort of the International Cancer Proteogenome Consortium (ICPC), an effort catalyzed through the vision of the 47th Vice President of the United States Joseph R. Biden, Jr. and the Cancer Moonshot.
Vergara, D; Tinelli, A; Martignago, R; Malvasi, A; Chiuri, V E; Leo, G
2010-02-01
Tumors of the epithelial surface account for about 80% of all ovarian neoplasms and exhibit a heterogeneous histological classification affecting survival. Tumors of low malignant potential, defined as borderline ovarian tumors (BOTs), have markedly better survival and low recurrence, although surgery still represents the common management for this type of cancer. It is still debated in the literature whether BOTs can be considered intermediate precursors in the progression to high-grade ovarian tumors. Evidence now suggests that high-grade serous carcinomas are not associated with a defined precursor lesion. Together with histopathological studies, mutations of the KRAS, BRAF and p53 genes, microsatellite instability (MSI), and under- or over-expression of many genes and proteins have been used to address this question. Despite the large body of data summarized, a limited number of molecules have proved useful in elucidating BOT pathogenesis, and only a few of these have shown possible application in therapy. We believe that high-throughput technologies would help to overcome these limitations, offering the promise of a better understanding of BOT classification. The aim is to guide the diagnosis and prognosis of BOTs and to develop new possible therapeutic molecular targets, avoiding surgery.
An Accessible Proteogenomics Informatics Resource for Cancer Researchers.
Chambers, Matthew C; Jagtap, Pratik D; Johnson, James E; McGowan, Thomas; Kumar, Praveen; Onsongo, Getiria; Guerrero, Candace R; Barsnes, Harald; Vaudel, Marc; Martens, Lennart; Grüning, Björn; Cooke, Ira R; Heydarian, Mohammad; Reddy, Karen L; Griffin, Timothy J
2017-11-01
Proteogenomics has emerged as a valuable approach in cancer research, which integrates genomic and transcriptomic data with mass spectrometry-based proteomics data to directly identify expressed, variant protein sequences that may have functional roles in cancer. This approach is computationally intensive, requiring integration of disparate software tools into sophisticated workflows, challenging its adoption by nonexpert, bench scientists. To address this need, we have developed an extensible, Galaxy-based resource aimed at providing more researchers access to, and training in, proteogenomic informatics. Our resource brings together software from several leading research groups to address two foundational aspects of proteogenomics: (i) generation of customized, annotated protein sequence databases from RNA-Seq data; and (ii) accurate matching of tandem mass spectrometry data to putative variants, followed by filtering to confirm their novelty. Directions for accessing software tools and workflows, along with instructional documentation, can be found at z.umn.edu/canresgithub. Cancer Res; 77(21); e43-46. ©2017 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Sheynkman, Gloria M.; Shortreed, Michael R.; Cesnik, Anthony J.; Smith, Lloyd M.
2016-06-01
Mass spectrometry-based proteomics has emerged as the leading method for detection, quantification, and characterization of proteins. Nearly all proteomic workflows rely on proteomic databases to identify peptides and proteins, but these databases typically contain a generic set of proteins that lack variations unique to a given sample, precluding their detection. Fortunately, proteogenomics enables the detection of such proteomic variations and can be defined, broadly, as the use of nucleotide sequences to generate candidate protein sequences for mass spectrometry database searching. Proteogenomics is experiencing heightened significance due to two developments: (a) advances in DNA sequencing technologies that have made complete sequencing of human genomes and transcriptomes routine, and (b) the unveiling of the tremendous complexity of the human proteome as expressed at the levels of genes, cells, tissues, individuals, and populations. We review here the field of human proteogenomics, with an emphasis on its history, current implementations, the types of proteomic variations it reveals, and several important applications.
Proteogenomics | Office of Cancer Clinical Proteomics Research
Proteogenomics, or the integration of proteomics with genomics and transcriptomics, is an emerging approach that promises to advance basic, translational and clinical research. By combining genomic and proteomic information, leading scientists are gaining new insights due to a more complete and unified understanding of complex biological processes.
APOLLO Network | Office of Cancer Clinical Proteomics Research
The Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) network is a collaboration between NCI, the Department of Defense (DoD), and the Department of Veterans Affairs (VA) to incorporate proteogenomics into patient care as a way of looking beyond the genome, to the activity and expression of the proteins that the genome encodes.
Durighello, Emie; Christie-Oleza, Joseph Alexander; Armengaud, Jean
2014-01-01
Bacteria from the Roseobacter clade are abundant in surface marine ecosystems: over 10% of bacterial cells in the open ocean and 20% in coastal waters belong to this group. To document how these marine bacteria interact with their environment, we analyzed the exoproteome of Phaeobacter strain DSM 17395. We grew the strain in marine medium, collected the exoproteome, and catalogued its content with high-throughput nanoLC-MS/MS shotgun proteomics. The major component represented 60% of the total protein content but was refractory to both classical proteomic identification and proteogenomics. We de novo sequenced this abundant protein from high-resolution tandem mass spectra, identifying it as the 53 kDa RTX-toxin ZP_02147451, which comprises a peptidase M10 serralysin domain. Its recalcitrance to trypsin proteolysis, and hence to proteomic identification, is explained by its unusually low number of basic residues. We found this to be a conserved trait in RTX-toxins from Roseobacter strains, which probably explains their persistence in the harsh extracellular environment. Comprehensive analysis of exoproteomes from environmental bacteria should take this proteolytic recalcitrance into account. PMID:24586966
The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the opening of the leaderboard for its Proteogenomics Computational DREAM Challenge. The leaderboard remains open for submissions from September 25, 2017 through October 8, 2017, with the Challenge expected to run until November 17, 2017.
Moonshot Objectives: Catalyze New Scientific Breakthroughs-Proteogenomics.
Rodland, Karin D; Piehowski, Paul; Smith, Richard D
Breaking down the silos between disciplines to accelerate the pace of cancer research is a key paradigm for the Cancer Moonshot. Molecular analyses of cancer biology have tended to segregate between a focus on nucleic acids-DNA, RNA, and their modifications-and a focus on proteins and protein function. Proteogenomics represents a fusion of those two approaches, leveraging the strengths of each to provide a more integrated vision of the flow of information from DNA to RNA to protein and eventually function at the molecular level. Proteogenomic studies have been incorporated into multiple activities associated with the Cancer Moonshot, demonstrating substantial added value. Innovative study designs integrating genomic, transcriptomic, and proteomic data, particularly those using clinically relevant samples and involving clinical trials, are poised to provide new insights regarding cancer risk, progression, and response to therapy.
A note on the false discovery rate of novel peptides in proteogenomics.
Zhang, Kun; Fu, Yan; Zeng, Wen-Feng; He, Kun; Chi, Hao; Liu, Chao; Li, Yan-Chang; Gao, Yuan; Xu, Ping; He, Si-Min
2015-10-15
Proteogenomics has been well accepted as a tool to discover novel genes. In most conventional proteogenomic studies, a global false discovery rate is used to filter out false positives when identifying credible novel peptides. However, it has been found that the actual level of false positives among novel peptides is often out of control and behaves differently for different genomes. To quantitatively model this problem, we theoretically analyze the subgroup false discovery rates (FDRs) of annotated and novel peptides. Our analysis shows that the annotation completeness ratio of a genome is the dominant factor influencing the subgroup FDR of novel peptides. Experimental results on two real datasets of Escherichia coli and Mycobacterium tuberculosis support our conjecture. Contact: yfu@amss.ac.cn, xupingghy@gmail.com or smhe@ict.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
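The subgroup FDR problem described above can be illustrated with a toy target-decoy calculation. The `subgroup_fdr` helper and the PSM tuples below are hypothetical illustrations, not the authors' actual implementation; they only sketch why a single global score cutoff can mask a much higher error rate among novel peptides:

```python
# Toy sketch: estimate FDR within one subgroup (annotated vs. novel)
# as #decoys / #targets above a score cutoff. PSMs are represented as
# (score, is_decoy, subgroup) tuples; all data here is illustrative.

def subgroup_fdr(psms, subgroup, score_cutoff):
    """Target-decoy FDR estimate restricted to one peptide subgroup."""
    hits = [p for p in psms if p[2] == subgroup and p[0] >= score_cutoff]
    targets = sum(1 for p in hits if not p[1])
    decoys = sum(1 for p in hits if p[1])
    return decoys / targets if targets else 0.0

psms = [
    (50, False, "annotated"), (47, False, "annotated"),
    (45, False, "annotated"), (43, False, "annotated"),
    (41, True,  "annotated"),
    (46, False, "novel"), (44, True, "novel"), (42, True, "novel"),
]

# At the same cutoff, the novel subgroup carries far more false positives:
print(subgroup_fdr(psms, "annotated", 40))  # 1 decoy / 4 targets = 0.25
print(subgroup_fdr(psms, "novel", 40))      # 2 decoys / 1 target = 2.0
```

A global FDR pooled over all eight PSMs would look acceptable, while the novel subgroup alone is dominated by decoys; this is the behavior the paper models analytically.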
Proteogenomic characterization of human colon and rectal cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bing; Wang, Jing; Wang, Xiaojing
2014-09-18
We analyzed proteomes of colon and rectal tumors previously characterized by The Cancer Genome Atlas (TCGA) and performed integrated proteogenomic analyses. Protein sequence variants encoded by somatic genomic variations displayed reduced expression compared to protein variants encoded by germline variations. mRNA transcript abundance did not reliably predict protein expression differences between tumors. Proteomics identified five protein expression subtypes, two of which were associated with the TCGA "MSI/CIMP" transcriptional subtype but had distinct mutation and methylation patterns and were associated with different clinical outcomes. Although CNAs showed strong cis- and trans-effects on mRNA expression, relatively few of these extended to the protein level. Thus, proteomics data enabled prioritization of candidate driver genes. Our analyses identified HNF4A, a novel candidate driver gene in tumors with chromosome 20q amplifications. Integrated proteogenomic analysis provides functional context to interpret genomic abnormalities and affords novel insights into cancer biology.
The National Cancer Institute (NCI) Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce teams led by Jaewoo Kang (Korea University) and Yuanfang Guan with Hongyang Li (University of Michigan) as the best performers of the NCI-CPTAC DREAM Proteogenomics Computational Challenge. Over 500 participants from 20 countries registered for the Challenge, which offered $25,000 in cash awards contributed by the NVIDIA Foundation through its Compute the Cure initiative.
This week, we are excited to announce the launch of the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) Proteogenomics Computational DREAM Challenge. The aim of this Challenge is to encourage the generation of computational methods for extracting information from the cancer proteome and for linking those data to genomic and transcriptomic information. The specific goals are to predict proteomic and phosphoproteomic data from multiple other data types, including transcriptomic and genomic data.
The Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute (NCI), part of the National Institutes of Health, announces the release of an educational video titled “Proteogenomics Research: On the Frontier of Precision Medicine.” The video was launched at the HUPO2017 Global Leadership Gala Dinner, an event catalyzed in part by the Cancer Moonshot initiative and featuring as keynote speaker the 47th Vice President of the United States of America, Joseph R. Biden.
July 11, 2016 — In the spirit of collaboration inspired by the Vice President’s Cancer Moonshot, the Department of Veterans Affairs (VA), the Department of Defense (DoD), and the National Cancer Institute (NCI) are proud to announce a new tri-agency coalition that will help cancer patients by enabling their oncologists to more rapidly and accurately identify effective drugs to treat cancer based on a patient’s unique proteogenomic profile.
Dr. Henry Rodriguez, director of the Office of Cancer Clinical Proteomics Research (OCCPR) at NCI, speaks with ecancer television at WIN 2017 about the translation of the proteins expressed in a patient's tumor into a map for druggable targets. By combining genomic and proteomic information (proteogenomics), leading scientists are gaining new insights into ways to detect and treat cancer due to a more complete and unified understanding of complex biological processes.
Proteogenomic characterization of human colon and rectal cancer
Zhang, Bing; Wang, Jing; Wang, Xiaojing; Zhu, Jing; Liu, Qi; Shi, Zhiao; Chambers, Matthew C.; Zimmerman, Lisa J.; Shaddox, Kent F.; Kim, Sangtae; Davies, Sherri R.; Wang, Sean; Wang, Pei; Kinsinger, Christopher R.; Rivers, Robert C.; Rodriguez, Henry; Townsend, R. Reid; Ellis, Matthew J.C.; Carr, Steven A.; Tabb, David L.; Coffey, Robert J.; Slebos, Robbert J.C.; Liebler, Daniel C.
2014-01-01
We analyzed proteomes of colon and rectal tumors previously characterized by the Cancer Genome Atlas (TCGA) and performed integrated proteogenomic analyses. Somatic variants displayed reduced protein abundance compared to germline variants. mRNA transcript abundance did not reliably predict protein abundance differences between tumors. Proteomics identified five proteomic subtypes in the TCGA cohort, two of which overlapped with the TCGA “MSI/CIMP” transcriptomic subtype, but had distinct mutation, methylation, and protein expression patterns associated with different clinical outcomes. Although copy number alterations showed strong cis- and trans-effects on mRNA abundance, relatively few of these extend to the protein level. Thus, proteomics data enabled prioritization of candidate driver genes. The chromosome 20q amplicon was associated with the largest global changes at both mRNA and protein levels; proteomics data highlighted potential 20q candidates including HNF4A, TOMM34 and SRC. Integrated proteogenomic analysis provides functional context to interpret genomic abnormalities and affords a new paradigm for understanding cancer biology. PMID:25043054
Ruggles, Kelly V; Tang, Zuojian; Wang, Xuya; Grover, Himanshu; Askenazi, Manor; Teubl, Jennifer; Cao, Song; McLellan, Michael D; Clauser, Karl R; Tabb, David L; Mertins, Philipp; Slebos, Robbert; Erdmann-Gilmore, Petra; Li, Shunqiang; Gunawardena, Harsha P; Xie, Ling; Liu, Tao; Zhou, Jian-Ying; Sun, Shisheng; Hoadley, Katherine A; Perou, Charles M; Chen, Xian; Davies, Sherri R; Maher, Christopher A; Kinsinger, Christopher R; Rodland, Karen D; Zhang, Hui; Zhang, Zhen; Ding, Li; Townsend, R Reid; Rodriguez, Henry; Chan, Daniel; Smith, Richard D; Liebler, Daniel C; Carr, Steven A; Payne, Samuel; Ellis, Matthew J; Fenyő, David
2016-03-01
Improvements in mass spectrometry (MS)-based peptide sequencing provide a new opportunity to determine whether polymorphisms, mutations, and splice variants identified in cancer cells are translated. Herein, we apply a proteogenomic data integration tool (QUILTS) to illustrate protein variant discovery using whole genome, whole transcriptome, and global proteome datasets generated from a pair of luminal and basal-like breast-cancer-patient-derived xenografts (PDX). The sensitivity of proteogenomic analysis for single nucleotide variant (SNV) expression and novel splice junction (NSJ) detection was probed using multiple MS/MS sample process replicates, defined here as independent tandem MS experiments using identical sample material. Despite analysis of over 30 sample process replicates, only about 10% of SNVs (somatic and germline) detected by both DNA and RNA sequencing were observed as peptides. An even smaller proportion of peptides corresponding to NSJs observed by RNA sequencing were detected (<0.1%). Peptides mapping to DNA-detected SNVs without a detectable mRNA transcript were also observed, suggesting that transcriptome coverage was incomplete (∼80%). In contrast to germline variants, somatic variants were less likely to be detected at the peptide level in the basal-like tumor than in the luminal tumor, raising the possibility of differential translation or protein degradation effects. In conclusion, this large-scale proteogenomic integration allowed us to determine the degree to which mutations are translated and to identify gaps in sequence coverage, thereby benchmarking current technology and progress toward whole cancer proteome and transcriptome analysis. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Park, Jun Won; Um, Hyejin; Yang, Hanna; Ko, Woori; Kim, Dae-Yong; Kim, Hark Kyun
2017-03-11
To elucidate signaling pathways that regulate gastric cancer stem cell (CSC) phenotypes and immune checkpoints, we performed a proteogenomic analysis of NCC-S1M, a gastric cancer cell line with CSC-like characteristics and the only syngeneic gastric tumor cell line transplant model available to the scientific community. We found that the NCC-S1M allograft was responsive to anti-PD-1 treatment and overexpressed Cd274, which encodes PD-L1. PD-L1 was transcriptionally activated by loss of TGF-β signaling. The Il1rl1 protein was overexpressed in NCC-S1M cells compared with NCC-S1 cells, which are less tumorigenic and less chemoresistant. Il1rl1 knockdown in NCC-S1M cells reduced tumorigenic potential and in vivo chemoresistance. Our proteogenomic analysis demonstrates a role for Smad4 loss in PD-L1-mediated immune evasion, as well as Il1rl1's role in the CSC-like properties of NCC-S1M. Copyright © 2017 Elsevier Inc. All rights reserved.
Precision medicine is an approach that allows doctors to understand how a patient's genetic profile may cause cancer to grow and spread, leading to a more personalized treatment strategy based on molecular characterization of a person's tumor. However, precision medicine as a genomics-based approach does not yet apply to all patients because genetic mutations do not always lead to changes in the corresponding proteins. Therefore, integrating genomics and proteomics data, or proteogenomics, represents a new approach that may help make precision medicine a more effective treatment option for patients.
The National Cancer Institute will hold a public pre-application webinar on Friday, December 11 at 12:00 p.m. (EST) for the Funding Opportunity Announcements (FOAs) RFA-CA-15-021 entitled “Proteome Characterization Centers for Clinical Proteomic Tumor Analysis Consortium (U24)”, RFA-CA-15-022 entitled “Proteogenomic Translational Research Centers for Clinical Proteomic Tumor Analysis Consortium (U01)”, and RFA-CA-15-023 entitled “Proteogenomic Data Analysis Centers for Clinical Proteomic Tumor Analysis Consortium (U24)”.
Proteogenomic database construction driven from large scale RNA-seq data.
Woo, Sunghee; Cha, Seong Won; Merrihew, Gennifer; He, Yupeng; Castellana, Natalie; Guest, Clark; MacCoss, Michael; Bafna, Vineet
2014-01-03
The advent of inexpensive RNA-seq and other deep sequencing technologies for RNA promises to radically improve genomic annotation, providing information on transcribed regions and splicing events in a variety of cellular conditions. Using MS-based proteogenomics, many of these events can be confirmed directly at the protein level. However, integrating large amounts of redundant RNA-seq data with mass spectrometry data poses a challenging problem. We address this by constructing a compact database that contains all useful information expressed in RNA-seq reads. Applying our method to cumulative C. elegans data reduced 496.2 GB of aligned RNA-seq SAM files to a 410 MB splice graph database written in FASTA format. This corresponds to a 1000× compression of data size, without loss of sensitivity. We performed a proteogenomics study using this custom dataset with a completely automated pipeline and identified a total of 4044 novel events, including 215 novel genes, 808 novel exons, 12 alternative splicings, 618 gene-boundary corrections, 245 exon-boundary changes, 938 frame shifts, 1166 reverse strands, and 42 translated UTRs. Our results highlight the usefulness of transcriptomic and proteomic integration for improved genome annotations.
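The core compression idea, collapsing millions of redundant read alignments into a small set of exonic blocks plus observed junctions, can be sketched with minimal interval merging. The function name and toy coordinates below are assumptions for illustration, not the paper's actual splice-graph implementation:

```python
# Toy sketch of the compression behind a splice-graph database:
# overlapping read alignments on a chromosome collapse into a few
# exonic blocks, so the database size depends on transcribed sequence,
# not on read depth. Coordinates are illustrative half-open intervals.

def merge_alignments(intervals):
    """Merge overlapping [start, end) read alignments into exonic blocks."""
    blocks = []
    for start, end in sorted(intervals):
        if blocks and start <= blocks[-1][1]:
            # Read overlaps the current block: extend it.
            blocks[-1][1] = max(blocks[-1][1], end)
        else:
            # Gap between reads: start a new exonic block.
            blocks.append([start, end])
    return [tuple(b) for b in blocks]

reads = [(100, 150), (120, 180), (130, 160), (400, 450), (430, 470)]
print(merge_alignments(reads))  # [(100, 180), (400, 470)]
```

Five redundant reads reduce to two blocks here; at real sequencing depth the same principle yields the roughly 1000× reduction the abstract reports.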
Proteogenomics connects somatic mutations to signalling in breast cancer.
Mertins, Philipp; Mani, D R; Ruggles, Kelly V; Gillette, Michael A; Clauser, Karl R; Wang, Pei; Wang, Xianlong; Qiao, Jana W; Cao, Song; Petralia, Francesca; Kawaler, Emily; Mundt, Filip; Krug, Karsten; Tu, Zhidong; Lei, Jonathan T; Gatza, Michael L; Wilkerson, Matthew; Perou, Charles M; Yellapantula, Venkata; Huang, Kuan-lin; Lin, Chenwei; McLellan, Michael D; Yan, Ping; Davies, Sherri R; Townsend, R Reid; Skates, Steven J; Wang, Jing; Zhang, Bing; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Ding, Li; Paulovich, Amanda G; Fenyö, David; Ellis, Matthew J; Carr, Steven A
2016-06-02
Somatic mutations have been extensively characterized in breast cancer, but the effects of these genetic alterations on the proteomic landscape remain poorly understood. Here we describe quantitative mass-spectrometry-based proteomic and phosphoproteomic analyses of 105 genomically annotated breast cancers, of which 77 provided high-quality data. Integrated analyses provided insights into the somatic cancer genome including the consequences of chromosomal loss, such as the 5q deletion characteristic of basal-like breast cancer. Interrogation of the 5q trans-effects against the Library of Integrated Network-based Cellular Signatures, connected loss of CETN3 and SKP1 to elevated expression of epidermal growth factor receptor (EGFR), and SKP1 loss also to increased SRC tyrosine kinase. Global proteomic data confirmed a stromal-enriched group of proteins in addition to basal and luminal clusters, and pathway analysis of the phosphoproteome identified a G-protein-coupled receptor cluster that was not readily identified at the mRNA level. In addition to ERBB2, other amplicon-associated highly phosphorylated kinases were identified, including CDK12, PAK1, PTK2, RIPK2 and TLK2. We demonstrate that proteogenomic analysis of breast cancer elucidates the functional consequences of somatic mutations, narrows candidate nominations for driver genes within large deletions and amplified regions, and identifies therapeutic targets.
A proteomic map of the unsequenced kala-azar vector Phlebotomus papatasi using cell line.
Pawar, Harsh; Chavan, Sandip; Mahale, Kiran; Khobragade, Sweta; Kulkarni, Aditi; Patil, Arun; Chaphekar, Deepa; Varriar, Pratyasha; Sudeep, Anakkathil; Pai, Kalpana; Prasad, T S K; Gowda, Harsha; Patole, Milind S
2015-12-01
The debilitating disease kala-azar, or visceral leishmaniasis, is caused by the kinetoplastid protozoan parasite Leishmania donovani. The parasite is transmitted by hematophagous sand fly vectors of the genus Phlebotomus in the old world and Lutzomyia in the new world. The predominant Phlebotomine species associated with the transmission of kala-azar are Phlebotomus papatasi and Phlebotomus argentipes. Understanding the molecular interaction of the sand fly and Leishmania during development of the parasite within the sand fly gut is crucial to understanding the parasite life cycle. The complete genome sequences of sand flies (Phlebotomus and Lutzomyia) are currently not available, which hinders identification of proteins in the sand fly vector. In the absence of genomic sequences, the current study uses three-frame-translated transcriptomic data from P. papatasi to analyze mass spectrometry data from a P. papatasi cell line using a proteogenomic approach. Additionally, we carried out proteogenomic analysis of P. papatasi by comparative homology-based searches using protein data from related, sequenced dipterans. This study resulted in the identification of 1313 proteins from P. papatasi based on homology. Our study demonstrates the power of proteogenomic approaches in mapping the proteomes of unsequenced organisms. Copyright © 2015 Elsevier B.V. All rights reserved.
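Three-frame translation, the trick that lets MS data be searched against a transcriptome when no genome annotation exists, is simple to sketch. The codon table below is deliberately truncated for brevity (a real pipeline would use the complete standard table), and the function name is an illustrative assumption:

```python
# Minimal sketch of three-frame translation of a transcript. Each frame
# offset (0, 1, 2) yields a candidate protein sequence that can be added
# to a search database. Truncated codon table; unknown codons become X.

CODONS = {"ATG": "M", "GCT": "A", "AAA": "K", "TTC": "F", "TAA": "*"}

def three_frame_translate(transcript):
    """Translate a transcript in all three forward reading frames."""
    frames = []
    for offset in range(3):
        aa = []
        for i in range(offset, len(transcript) - 2, 3):
            aa.append(CODONS.get(transcript[i:i + 3], "X"))
        frames.append("".join(aa))
    return frames

print(three_frame_translate("ATGGCTAAATTCTAA"))  # ['MAKF*', 'XXXX', 'X*XX']
```

Only frame 0 yields an open reading frame here; in practice all six frames (three per strand) are searched and spectra decide which translation is real.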
Discovery of rare protein-coding genes in model methylotroph Methylobacterium extorquens AM1.
Kumar, Dhirendra; Mondal, Anupam Kumar; Yadav, Amit Kumar; Dash, Debasis
2014-12-01
Proteogenomics involves the use of MS to refine the annotation of protein-coding genes and to discover genes in a genome. We carried out comprehensive proteogenomic analysis of Methylobacterium extorquens AM1 (ME-AM1) from publicly available proteomics data with the aim of improving annotation for methylotrophs, organisms capable of surviving on reduced carbon compounds such as methanol. Besides identifying 2482 (50%) proteins, we discovered 29 new genes and revised 66 annotated gene models in the ME-AM1 genome. One such novel gene, identified with 75 peptides, lacks homologs in other methylobacteria but contains glycosyl transferase and lipopolysaccharide biosynthesis protein domains, indicating a potential role in outer membrane synthesis. Many novel genes are present only in ME-AM1 among methylobacteria. Distant homologs of these genes in unrelated taxonomic classes and the low GC-content of a few genes suggest lateral gene transfer as a potential mode of their origin. Annotations of methylotrophy-related genes were also improved by the discovery of a short gene in the methylotrophy gene island and by redefining a gene important for pyrroloquinoline quinone synthesis, which is essential for methylotrophy. The combined use of proteogenomics and rigorous bioinformatics analysis greatly enhanced the annotation of protein-coding genes in the model methylotroph ME-AM1 genome. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Proteogenomics Dashboard for the Human Proteome Project.
Tabas-Madrid, Daniel; Alves-Cruzeiro, Joao; Segura, Victor; Guruceaga, Elizabeth; Vialas, Vital; Prieto, Gorka; García, Carlos; Corrales, Fernando J; Albar, Juan Pablo; Pascual-Montano, Alberto
2015-09-04
dasHPPboard is a novel proteomics-based dashboard that collects and reports the experiments produced by the Spanish Human Proteome Project consortium (SpHPP) and aims to help HPP map the entire human proteome. We have followed the strategy of analogous genomics projects such as the Encyclopedia of DNA Elements (ENCODE), which provides a vast amount of data on human cell line experiments. The dashboard includes results of shotgun and selected reaction monitoring proteomics experiments, post-translational modification information, and proteogenomics studies. We have also processed the transcriptomics data from the ENCODE and Human Body Map (HBM) projects to identify specific gene expression patterns in different cell lines and tissues, taking special interest in those genes having little available proteomic evidence (missing proteins). Peptide databases have been built using single nucleotide variants and novel junctions derived from RNA-Seq data, which can be used in search engines for sample-specific protein identifications on the same cell lines or tissues. The dasHPPboard has been designed as a tool that can be used to share and visualize a combination of proteomic and transcriptomic data, providing at the same time easy access to resources for proteogenomics analyses. The dasHPPboard can be freely accessed at: http://sphppdashboard.cnb.csic.es.
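Building sample-specific peptide databases from SNVs, as described above, amounts to applying coding variants to a reference protein and digesting the result in silico. The helper names, toy sequence, and variant positions below are hypothetical illustrations of the general idea, not dasHPPboard's actual pipeline:

```python
# Hedged sketch: apply coding SNVs (0-based position -> new residue) to a
# reference protein, then perform an in-silico tryptic digest (cleave
# after K or R, but not before P, no missed cleavages) to enumerate the
# variant peptides a search engine could match. Illustrative data only.

def apply_snvs(protein, snvs):
    """Return the variant protein sequence with SNV substitutions applied."""
    residues = list(protein)
    for pos, aa in snvs.items():
        residues[pos] = aa
    return "".join(residues)

def tryptic_peptides(protein):
    """Fully tryptic peptides: cut after K/R unless the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides

ref = "MAGKPDELRSVK"                  # toy reference; K3 is followed by P, so no cut there
var = apply_snvs(ref, {5: "Q"})       # hypothetical D->Q coding SNV
print(tryptic_peptides(var))          # ['MAGKPQELR', 'SVK']
```

Only peptides covering position 5 differ from the reference digest, which is exactly the subset such a database adds on top of a generic proteome.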
Rubiano-Labrador, Carolina; Bland, Céline; Miotello, Guylaine; Armengaud, Jean; Baena, Sandra
2015-01-01
The ability of bacteria to adapt to external osmotic changes is fundamental for their survival. Halotolerant microorganisms, such as Tistlia consotensis, have to cope with continuous fluctuations in the salinity of their natural environments which require effective adaptation strategies against salt stress. Changes of extracellular protein profiles from Tistlia consotensis in conditions of low and high salinities were monitored by proteogenomics using a bacterial draft genome. At low salinity, we detected greater amounts of the HpnM protein which is involved in the biosynthesis of hopanoids. This may represent a novel, and previously unreported, strategy by halotolerant microorganisms to prevent the entry of water into the cell under conditions of low salinity. At high salinity, proteins associated with osmosensing, exclusion of Na+ and transport of compatible solutes, such as glycine betaine or proline are abundant. We also found that, probably in response to the high salt concentration, T. consotensis activated the synthesis of flagella and triggered a chemotactic response neither of which were observed at the salt concentration which is optimal for growth. Our study demonstrates that the exoproteome is an appropriate indicator of adaptive response of T. consotensis to changes in salinity because it allowed the identification of key proteins within its osmoadaptive mechanism that had not previously been detected in its cell proteome. PMID:26287734
Nishimura, Toshihide; Nakamura, Haruhiko
2016-01-01
Molecular therapies targeting lung cancers with mutated epidermal growth factor receptor (EGFR) using the EGFR tyrosine kinase inhibitors (EGFR-TKIs) gefitinib and erlotinib changed the treatment paradigm for lung cancer. Drug efficacy was revealed to differ by race (e.g., Caucasians vs. Asians) owing to oncogenic driver mutations specific to each race, as exemplified by gefitinib and erlotinib. Molecular target drugs for lung cancer with anaplastic lymphoma kinase (ALK) gene translocation (the fusion gene EML4-ALK) have been approved, and those targeting lung cancers addicted to ROS1, RET, and HER2 are under development. To improve treatment, both identification and quantification of gatekeeper mutations need to be performed using lung cancer tissue specimens obtained from patients: (1) identification and quantitation of targeted mutated proteins, including investigation of mutation heterogeneity within a tissue; (2) exploratory mass spectrometry (MS)-based clinical proteogenomic analysis of mutated proteins; and, importantly, (3) analysis of dynamic protein-protein interaction (PPI) networks of proteins significantly related to subgroups of lung cancer patients, both those with good treatment efficacy and those with acquired resistance. MS-based proteogenomics is a promising approach to directly capture mutated and fusion proteins expressed in a clinical sample. Further technological developments are expected, which will provide a powerful solution for the stratification of patients and for drug discovery (Precision Medicine).
Proteogenomic Analysis of a Thermophilic Bacterial Consortium Adapted to Deconstruct Switchgrass
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'haeseleer, Patrik; Gladden, John M.; Allgaier, Martin
2013-07-19
Thermophilic bacteria are a potential source of enzymes for the deconstruction of lignocellulosic biomass. However, the complement of proteins used to deconstruct biomass and the specific roles of different microbial groups in thermophilic biomass deconstruction are not well-explored. Here we report on the metagenomic and proteogenomic analyses of a compost-derived bacterial consortium adapted to switchgrass at elevated temperature with high levels of glycoside hydrolase activities. Near-complete genomes were reconstructed for the most abundant populations, which included composite genomes for populations closely related to sequenced strains of Thermus thermophilus and Rhodothermus marinus, and for novel populations related to thermophilic Paenibacilli and an uncultivated subdivision of the little-studied Gemmatimonadetes phylum. Partial genomes were also reconstructed for a number of lower abundance thermophilic Chloroflexi populations. Identification of genes for lignocellulose processing and metabolic reconstructions suggested Rhodothermus, Paenibacillus and Gemmatimonadetes as key groups for deconstructing biomass, and Thermus as a group that may primarily metabolize low molecular weight compounds. Mass spectrometry-based proteomic analysis of the consortium was used to identify more than 3000 proteins in fractionated samples from the cultures, and confirmed the importance of Paenibacillus and Gemmatimonadetes to biomass deconstruction. These studies also indicate that there are unexplored proteins with important roles in bacterial lignocellulose deconstruction.
Toward an Upgraded Honey Bee (Apis mellifera L.) Genome Annotation Using Proteogenomics.
McAfee, Alison; Harpur, Brock A; Michaud, Sarah; Beavis, Ronald C; Kent, Clement F; Zayed, Amro; Foster, Leonard J
2016-02-05
The honey bee is a key pollinator in agricultural operations as well as a model organism for studying the genetics and evolution of social behavior. The Apis mellifera genome has been sequenced and annotated twice over, enabling proteomics and functional genomics methods for probing relevant aspects of honey bee biology. One troubling trend that emerged from proteomic analyses is that honey bee peptide samples consistently result in lower peptide identification rates compared with other organisms. This suggests that either the genome annotation can be improved or atypical biological processes are interfering with the mass spectrometry workflow. First, we tested whether high levels of polymorphism could explain some of the missed identifications by searching spectra against the reference proteome (OGSv3.2) versus a customized proteome of a single honey bee, but our results indicate that this contribution was minor. Likewise, error-tolerant peptide searches led us to eliminate unexpected post-translational modifications as a major factor in missed identifications. We then used a proteogenomic approach with ~1500 raw files to search for missing genes and new exons, to revive discarded annotations and to identify over 2000 new coding regions. These results will contribute to a more comprehensive genome annotation and facilitate continued research on this important insect.
CPTAC | Office of Cancer Clinical Proteomics Research
The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is a national effort to accelerate the understanding of the molecular basis of cancer through the application of large-scale proteome and genome analysis, or proteogenomics.
Integrated proteomic and genomic analysis of colorectal cancer
Investigators who analyzed 95 human colorectal tumor samples have determined how gene alterations identified in previous analyses of the same samples are expressed at the protein level. The integration of proteomic and genomic data, or proteogenomics, …
Zhu, Yafeng; Engström, Pär G; Tellgren-Roth, Christian; Baudo, Charles D; Kennell, John C; Sun, Sheng; Billmyre, R Blake; Schröder, Markus S; Andersson, Anna; Holm, Tina; Sigurgeirsson, Benjamin; Wu, Guangxi; Sankaranarayanan, Sundar Ram; Siddharthan, Rahul; Sanyal, Kaustuv; Lundeberg, Joakim; Nystedt, Björn; Boekhout, Teun; Dawson, Thomas L; Heitman, Joseph; Scheynius, Annika; Lehtiö, Janne
2017-03-17
Complete and accurate genome assembly and annotation is a crucial foundation for comparative and functional genomics. Despite this, few complete eukaryotic genomes are available, and genome annotation remains a major challenge. Here, we present a complete genome assembly of the skin commensal yeast Malassezia sympodialis and demonstrate how proteogenomics can substantially improve gene annotation. Through long-read DNA sequencing, we obtained a gap-free genome assembly for M. sympodialis (ATCC 42132), comprising eight nuclear and one mitochondrial chromosome. We also sequenced and assembled four M. sympodialis clinical isolates, and showed their value for understanding Malassezia reproduction by confirming four alternative allele combinations at the two mating-type loci. Importantly, we demonstrated how proteomics data could be readily integrated with transcriptomics data in standard annotation tools. This increased the number of annotated protein-coding genes by 14% (from 3612 to 4113), compared to using transcriptomics evidence alone. Manual curation further increased the number of protein-coding genes by 9% (to 4493). All of these genes have RNA-seq evidence and 87% were confirmed by proteomics. The M. sympodialis genome assembly and annotation presented here is at a quality thus far achieved for only a few eukaryotic organisms, and constitutes an important reference for future host-microbe interaction studies. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Omasits, Ulrich; Varadarajan, Adithi R; Schmid, Michael; Goetze, Sandra; Melidis, Damianos; Bourqui, Marc; Nikolayeva, Olga; Québatte, Maxime; Patrignani, Andrea; Dehio, Christoph; Frey, Juerg E; Robinson, Mark D; Wollscheid, Bernd; Ahrens, Christian H
2017-12-01
Accurate annotation of all protein-coding sequences (CDSs) is an essential prerequisite to fully exploit the rapidly growing repertoire of completely sequenced prokaryotic genomes. However, large discrepancies among the number of CDSs annotated by different resources, missed functional short open reading frames (sORFs), and overprediction of spurious ORFs represent serious limitations. Our strategy toward accurate and complete genome annotation consolidates CDSs from multiple reference annotation resources, ab initio gene prediction algorithms and in silico ORFs (a modified six-frame translation considering alternative start codons) in an integrated proteogenomics database (iPtgxDB) that covers the entire protein-coding potential of a prokaryotic genome. By extending the PeptideClassifier concept of unambiguous peptides for prokaryotes, close to 95% of the identifiable peptides imply one distinct protein, largely simplifying downstream analysis. Searching a comprehensive Bartonella henselae proteomics data set against such an iPtgxDB allowed us to unambiguously identify novel ORFs uniquely predicted by each resource, including lipoproteins, differentially expressed and membrane-localized proteins, novel start sites and wrongly annotated pseudogenes. Most novelties were confirmed by targeted, parallel reaction monitoring mass spectrometry, including unique ORFs and single amino acid variations (SAAVs) identified in a re-sequenced laboratory strain that are not present in its reference genome. We demonstrate the general applicability of our strategy for genomes with varying GC content and distinct taxonomic origin. We release iPtgxDBs for B. henselae , Bradyrhizobium diazoefficiens and Escherichia coli and the software to generate both proteogenomics search databases and integrated annotation files that can be viewed in a genome browser for any prokaryote. © 2017 Omasits et al.; Published by Cold Spring Harbor Laboratory Press.
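The "in silico ORF" layer described above, a modified six-frame translation that also opens reading frames at alternative start codons, can be sketched as follows. This is an illustrative toy, not the iPtgxDB implementation; the GTG/TTG start set, the length cutoff, and all names are assumptions.

```python
# Illustrative six-frame translation that, unlike a plain translation,
# also opens ORFs at the alternative prokaryotic start codons GTG/TTG.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
# Standard genetic code, built from the classic TCAG ordering
CODON = {a + b + c: AMINO[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}
STARTS = {"ATG", "GTG", "TTG"}  # alternative starts are read as Met

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs_six_frame(genome, min_aa=30):
    """Return (strand, frame, start, protein) for every ORF that opens
    at a start codon and closes at a stop; genome is uppercase ACGT."""
    orfs = []
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for frame in range(3):
            for pos in range(frame, len(seq) - 2, 3):
                if seq[pos:pos + 3] not in STARTS:
                    continue
                prot = ["M"]
                for q in range(pos + 3, len(seq) - 2, 3):
                    aa = CODON[seq[q:q + 3]]
                    if aa == "*":  # stop codon closes the ORF
                        if len(prot) >= min_aa:
                            orfs.append((strand, frame, pos, "".join(prot)))
                        break
                    prot.append(aa)
                # ORFs running off the sequence end are discarded
    return orfs
```

Searching MS/MS spectra against the union of such ORFs with reference and ab initio annotations is the layered-database idea the abstract describes.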
Woo, Sunghee; Cha, Seong Won; Na, Seungjin; ...
2014-11-17
Cancer is driven by the acquisition of somatic DNA lesions. Distinguishing the early driver mutations from subsequent passenger mutations is key to molecular sub-typing of cancers, and the discovery of novel biomarkers. The availability of genomics technologies (mainly whole-genome and exome sequencing, and transcript sampling via RNA-seq, collectively referred to as NGS) has fueled recent studies on somatic mutation discovery. However, the vision is challenged by the complexity, redundancy, and errors in genomic data, and the difficulty of investigating the proteome using only genomic approaches. Recently, combinations of proteomic and genomic technologies have been increasingly employed. However, the complexity and redundancy of NGS data remains a challenge for proteogenomics, and various trade-offs must be made to allow for the searches to take place. This paper provides a discussion of two such trade-offs, relating to large database search and FDR calculations, and their implications for cancer proteogenomics. Moreover, it extends and develops the idea of a unified genomic variant database that can be searched by any mass spectrometry sample. A total of 879 BAM files downloaded from the TCGA repository were used to create a 4.34 GB unified FASTA database which contained 2,787,062 novel splice junctions, 38,464 deletions, 1105 insertions, and 182,302 substitutions. Proteomic data from a single ovarian carcinoma sample (439,858 spectra) was searched against the database. By applying the most conservative FDR measure, we identified 524 novel peptides and 65,578 known peptides at a 1% FDR threshold. The novel peptides include interesting examples of doubly mutated peptides, frame-shifts, and non-sample-recruited mutations, which emphasize the strength of our approach.
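The paper's "most conservative FDR measure" is not spelled out here, but the standard target-decoy estimate that such measures build on can be sketched as follows; this is a minimal sketch, and the function and variable names are mine, not the authors'.

```python
def accept_at_fdr(psms, threshold=0.01):
    """Filter peptide-spectrum matches by target-decoy FDR.

    psms: (score, is_decoy) pairs, higher score = better match.
    At each score cutoff, FDR is estimated as #decoys / #targets above
    the cutoff; we keep the deepest cutoff whose estimate stays under
    the threshold and return the accepted target scores.
    """
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    targets = decoys = accepted = 0
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if decoys / max(targets, 1) <= threshold:
            accepted = targets  # deepest position still under threshold
    target_scores = [s for s, d in ranked if not d]
    return target_scores[:accepted]
```

Because a unified variant database is very large, decoy matches accumulate faster and this kind of threshold admits fewer variant peptides, which is why the database-size trade-off discussed above matters.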
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide searches, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. We also propose a de novo-based scoring scheme for assessing identified point mutations, in which the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
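An "open" precursor-mass search reports a mass shift (delta) between an observed peptide and its best database match; a delta equal to the mass difference between two residues suggests a substitution, but it must be checked against modifications that mimic substitutions (e.g., deamidation mimics Asn→Asp). A hedged sketch of that matching step, with a deliberately small illustrative modification list rather than the authors' full exclusion table:

```python
# Monoisotopic residue masses (Da); note I/L are isobaric.
AA_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}

# Illustrative modifications whose mass shift mimics a substitution
MIMIC_MODS = {"deamidation": 0.98402, "methylation": 14.01565,
              "oxidation": 15.99491}

def substitution_candidates(delta, tol=0.005):
    """Return (from_aa, to_aa, mimicking_mods) triples whose residue
    mass difference matches the observed precursor shift within tol Da;
    any listed mods flag the delta as possibly a modification instead."""
    hits = []
    for a, ma in AA_MASS.items():
        for b, mb in AA_MASS.items():
            if a == b:
                continue
            if abs((mb - ma) - delta) <= tol:
                mods = [name for name, m in MIMIC_MODS.items()
                        if abs(m - delta) <= tol]
                hits.append((a, b, mods))
    return hits
```

For example, a delta near +0.984 Da matches both N→D and Q→E substitutions and deamidation, which is exactly the ambiguity the exclusion algorithm above is meant to resolve (e.g., via the y-ion evidence for the mutation site).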
Lee, Sang-Yeop; Kim, Gun-Hwa; Yun, Sung Ho; Choi, Chi-Won; Yi, Yoon-Sun; Kim, Jonghyun; Chung, Young-Ho; Park, Edmond Changkyun; Kim, Seung Il
2016-01-01
Burkholderia sp. K24, formerly known as Acinetobacter lwoffii K24, is a soil bacterium capable of utilizing aniline as its sole carbon and nitrogen source. Genomic sequence analysis revealed that this bacterium possesses putative gene clusters for biodegradation of various monocyclic aromatic hydrocarbons (MAHs), including benzene, toluene, and xylene (BTX), as well as aniline. We verified the proposed MAH biodegradation pathways by dioxygenase activity assays, RT-PCR, and LC/MS-based quantitative proteomic analyses. This proteogenomic approach revealed four independent degradation pathways, all converging into the citric acid cycle. Aniline and p-hydroxybenzoate degradation pathways converged into the β-ketoadipate pathway. Benzoate and toluene were degraded through the benzoyl-CoA degradation pathway. The xylene isomers, i.e., o-, m-, and p-xylene, were degraded via the extradiol cleavage pathways. Salicylate was degraded through the gentisate degradation pathway. Our results show that Burkholderia sp. K24 possesses versatile biodegradation pathways, which may be employed for efficient bioremediation of aniline and BTX. PMID:27124467
The Office of Cancer Clinical Proteomics Research at the National Cancer Institute, part of the United States National Institutes of Health, is spearheading the preparation and training of the proteogenomic research workforce on an international scale.
Rubiano-Labrador, Carolina; Bland, Céline; Miotello, Guylaine; Guérin, Philippe; Pible, Olivier; Baena, Sandra; Armengaud, Jean
2014-01-31
Tistlia consotensis is a halotolerant member of the Rhodospirillaceae that was isolated from a saline spring located in the Colombian Andes with a salt concentration close to that of seawater (4.5% w/vol). We cultivated this microorganism at three NaCl concentrations, i.e. optimal (0.5%), without (0.0%) and high (4.0%) salt, and analyzed its cellular proteome. For assigning tandem mass spectrometry data, we first sequenced its genome and constructed a six-reading-frame ORF database from the draft sequence. We annotated only the genes whose products (872) were detected. We compared the quantitative proteome data sets recorded for the three different growth conditions. At low salinity, general stress proteins (chaperones, proteases and proteins associated with oxidative stress protection) were detected in higher amounts, probably linked to difficulties in proper protein folding and metabolism. Proteogenomics and comparative genomics pointed to the CrgA transcriptional regulator as a key factor in proteome remodeling under low osmolarity. Under hyper-osmotic conditions, T. consotensis produced larger amounts of proteins involved in sensing changes in salt concentration, as well as a wide panel of transport systems for organic compatible solutes such as glutamate. We have described here a straightforward procedure for making a new environmental isolate quickly amenable to proteomics. The bacterium Tistlia consotensis was isolated from a saline spring in the Colombian Andes and represents an interesting environmental model to be compared with extremophiles or other moderate organisms. To explore the molecular mechanisms of halotolerance in T. consotensis, we developed an innovative proteogenomic strategy consisting of i) genome sequencing, ii) quick annotation of the genes whose products were detected by mass spectrometry, and iii) comparative proteomics of cells grown under three salt conditions. We highlight in this manuscript how efficient such an approach can be, compared with time-consuming genome annotation, at pointing to the key proteins of a given biological question. We document a large number of proteins produced in greater amounts when cells are cultivated under either hypo-osmotic or hyper-osmotic conditions. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
USDA-ARS?s Scientific Manuscript database
The Glossina pallidipes salivary gland hypertrophy virus (GpSGHV; family Hytrosaviridae) can establish a chronic covert asymptomatic infection and an acute overt symptomatic infection in its tsetse fly host (Diptera: Glossinidae). Expression of the disease symptoms, the salivary gland hypertrophy sy...
The White House Office of the Vice President has announced the signing of three Memoranda of Understanding (MOUs) that will make available an unprecedented international dataset to advance cancer research and care.
Software Tools | Office of Cancer Clinical Proteomics Research
The CPTAC program develops new approaches to elucidate aspects of the molecular complexity of cancer made from large-scale proteogenomic datasets, and advance them toward precision medicine. Part of the CPTAC mission is to make data and tools available and accessible to the greater research community to accelerate the discovery process.
Common bean proteomics: Present status and future strategies.
Zargar, Sajad Majeed; Mahajan, Reetika; Nazir, Muslima; Nagar, Preeti; Kim, Sun Tae; Rai, Vandna; Masi, Antonio; Ahmad, Syed Mudasir; Shah, Riaz Ahmad; Ganai, Nazir Ahmad; Agrawal, Ganesh K; Rakwal, Randeep
2017-10-03
Common bean (Phaseolus vulgaris L.) is a legume of appreciable importance and usefulness worldwide, providing food and feed for the human population. It is rich in high-quality protein, energy, fiber and micronutrients, especially iron, zinc, and pro-vitamin A, and possesses potentially disease-preventing and health-promoting compounds. The recently published genome sequence of common bean is an important landmark in common bean research, opening new avenues for understanding its genetics in depth. This legume crop is affected by diverse biotic and abiotic stresses that severely limit its productivity. Given the trend of an increasing world population and the need for food crops best suited to human health, legumes, including common bean with its nutritive value, will be in great demand. Hence the need for new research into the biology of this crop brings us to utilize and apply high-throughput omics approaches. In this mini-review our focus will be on the need for proteomics studies in common bean, the potential of proteomics for understanding genetic regulation under abiotic and biotic stresses, and how proteogenomics will lead to nutritional improvement. We will also discuss future proteomics-based strategies that must be adopted to mine new genomic resources by identifying molecular switches regulating various biological processes. Common bean is regarded as a "grain of hope" for the poor, being rich in high-quality protein, energy, fiber and micronutrients (iron, zinc, pro-vitamin A), and possessing potentially disease-preventing and health-promoting compounds. An important landmark in common bean research was the recent publication of its genome sequence, opening new avenues for understanding its genetics in depth. Proteomics can be used to track all the candidate proteins/genes responsible for a biological process under specific conditions in a particular tissue. The potential of proteomics will not only help in determining the functions of a large number of genes in a single experiment but will also be a useful tool to mine new genes that can provide solutions to various problems (abiotic stress, biotic stress, nutritional improvement, etc.). We believe that a combined approach including breeding along with omics tools will lead towards attaining sustainability in legumes, including common bean. Copyright © 2017 Elsevier B.V. All rights reserved.
The National Cancer Institute will hold a public Pre-Application webinar on Wednesday, January 13, 2016 at 12:00 p.m. (EST) for the Funding Opportunity Announcement (FOA) RFA-CA-15-022 entitled “Proteogenomic Translational Research Centers for Clinical Proteomic Tumor Analysis Consortium (U01).”
Office Overview | Office of Cancer Clinical Proteomics Research
The Office of Cancer Clinical Proteomics Research (OCCPR) at the National Cancer Institute (NCI) aims to improve prevention, early detection, diagnosis, and treatment of cancer by enhancing the understanding of the molecular mechanisms of cancer, advancing proteome/proteogenome science and technology development through community resources (data and reagents), and accelerating the translation of molecular findings into the clinic.
News Release: May 25, 2016 — Building on data from The Cancer Genome Atlas (TCGA) project, a multi-institutional team of scientists has completed the first large-scale “proteogenomic” study of breast cancer, linking DNA mutations to protein signaling and helping pinpoint the genes that drive cancer.
The Human Proteome Organization (HUPO) and Prof Steve Pennington, UCD, chair of the organizing committee of HUPO2017 (the 16th HUPO World Congress) in collaboration with the National Cancer Institute’s (NCI) International Cancer Proteogenome Consortium (ICPC) ann
A Proteogenomic Approach to Understanding MYC Function in Metastatic Medulloblastoma Tumors.
Staal, Jerome A; Pei, Yanxin; Rood, Brian R
2016-10-19
Brain tumors are the leading cause of cancer-related deaths in children, and medulloblastoma is the most prevalent malignant pediatric brain tumor. Providing effective treatment for these cancers, with minimal damage to the still-developing brain, remains one of the greatest challenges faced by clinicians. Understanding the diverse events driving tumor formation, maintenance, progression, and recurrence is necessary for identifying novel targeted therapeutics and improving survival of patients with this disease. Genomic copy number alteration data, together with clinical studies, identify c-MYC amplification as an important risk factor associated with the most aggressive forms of medulloblastoma with marked metastatic potential. Yet despite this, very little is known regarding the impact of such genomic abnormalities on the functional biology of the tumor cell. We discuss here how recent advances in quantitative proteomic techniques are now providing new insights into the functional biology of these aggressive tumors, as illustrated by the use of proteomics to bridge the gap between genotype and phenotype in the case of c-MYC-amplified/associated medulloblastoma. These integrated proteogenomic approaches now provide a new platform for understanding cancer biology by providing a functional context to frame genomic abnormalities.
Integrated Proteogenomic Characterization of Human High-Grade Serous Ovarian Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hui; Liu, Tao; Zhang, Zhen
Ovarian cancer remains the most lethal gynecological malignancy in the developed world, despite recent advances in genomic information and treatment. To better understand this disease, define an integrated proteogenomic landscape, and identify factors associated with homologous repair deficiency (HRD) and overall survival, we performed a comprehensive proteomic characterization of ovarian high-grade serous carcinomas (HGSC) previously characterized by The Cancer Genome Atlas (TCGA). We observed that messenger RNA transcript abundance did not reliably predict abundance for 10,030 proteins across 174 tumors. Clustering of tumors based on protein abundance identified five subtypes, two of which correlated robustly with mesenchymal and proliferative subtypes, while tumors characterized as immunoreactive or differentiated at the transcript level were intermixed at the protein level. At the genome level, HGSC is characterized by a complex landscape of somatic copy number alterations (CNA), which individually do not correlate significantly with survival. Correlation of CNAs with protein abundances identified loci with significant trans-regulatory effects mapping to pathways associated with proliferation, cell motility/invasion, and immune regulation, three known hallmarks of cancer. Using the trans-regulated proteins, we also created models significantly correlated with patient survival by multivariate analysis. Integrating protein abundance with specific post-translational modification data identified subnetworks correlated with HRD status; specifically, acetylation of Lys12 and Lys16 on histone H4 was associated with HRD status. Using quantitative phosphoproteomics data covering 4,420 proteins as reflective of pathway activity, we identified the PDGFR and VEGFR signaling pathways as significantly up-regulated in patients with short overall survival, independent of PDGFR and VEGFR protein levels, potentially informing the use of anti-angiogenic therapies. Components of the Rho/Rac/Cdc42 cell motility pathways were also significantly enriched for short survival. Overall, proteomics provided new insights into ovarian cancer not apparent from genomic analysis, enabling a deeper understanding of HGSC with the potential to inform targeted therapeutics.
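The transcript-protein discordance reported above is typically quantified as a per-gene Spearman correlation of mRNA and protein abundances across tumors. A minimal sketch with toy numbers follows; it assumes no tied values (real analyses would use average ranks, e.g. scipy.stats.spearmanr), and the gene names and data are illustrative only.

```python
def _rank(values):
    # Simple 0-based ranks; assumes no ties for brevity.
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _rank(x), _rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Per-gene correlation across samples (toy mRNA/protein rows per gene)
mrna = {"GENE1": [1.0, 2.0, 3.0, 4.0], "GENE2": [4.0, 1.0, 3.0, 2.0]}
prot = {"GENE1": [2.0, 3.0, 5.0, 9.0], "GENE2": [1.0, 4.0, 2.0, 3.0]}
corr = {g: spearman(mrna[g], prot[g]) for g in mrna}
```

Genes with low or negative correlation, the common case for a large fraction of the proteome, are those whose protein levels cannot be inferred from transcript data alone.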
CPTAC Announces New PTRCs, PCCs, and PGDACs | Office of Cancer Clinical Proteomics Research
This week, the Office of Cancer Clinical Proteomics Research (OCCPR) at the National Cancer Institute (NCI), part of the National Institutes of Health, announced its aim to further the convergence of proteomics with genomics – “proteogenomics,” to better understand the molecular basis of cancer and accelerate research in these areas by disseminating research resources to the scientific community.
Bindschedler, Laurence V.; Burgis, Timothy A.; Mills, Davinia J. S.; Ho, Jenny T. C.; Cramer, Rainer; Spanu, Pietro D.
2009-01-01
To further our understanding of powdery mildew biology during infection, we undertook a systematic shotgun proteomics analysis of the obligate biotroph Blumeria graminis f. sp. hordei at different stages of development in the host. Moreover we used a proteogenomics approach to feed information into the annotation of the newly sequenced genome. We analyzed and compared the proteomes from three stages of development representing different functions during the plant-dependent vegetative life cycle of this fungus. We identified 441 proteins in ungerminated spores, 775 proteins in epiphytic sporulating hyphae, and 47 proteins from haustoria inside barley leaf epidermal cells and used the data to aid annotation of the B. graminis f. sp. hordei genome. We also compared the differences in the protein complement of these key stages. Although confirming some of the previously reported findings and models derived from the analysis of transcriptome dynamics, our results also suggest that the intracellular haustoria are subject to stress possibly as a result of the plant defense strategy, including the production of reactive oxygen species. In addition, a number of small haustorial proteins with a predicted N-terminal signal peptide for secretion were identified in infected tissues: these represent candidate effector proteins that may play a role in controlling host metabolism and immunity. PMID:19602707
Weckwerth, Wolfram; Wienkoop, Stefanie; Hoehenwarter, Wolfgang; Egelhofer, Volker; Sun, Xiaoliang
2014-01-01
Genome sequencing and systems biology are revolutionizing life sciences. Proteomics emerged as a fundamental technique of this novel research area as it is the basis for gene function analysis and modeling of dynamic protein networks. Here a complete proteomics platform suited for functional genomics and systems biology is presented. The strategy includes MAPA (mass accuracy precursor alignment; http://www.univie.ac.at/mosys/software.html ) as a rapid exploratory analysis step; MASS WESTERN for targeted proteomics; COVAIN ( http://www.univie.ac.at/mosys/software.html ) for multivariate statistical analysis, data integration, and data mining; and PROMEX ( http://www.univie.ac.at/mosys/databases.html ) as a database module for proteogenomics and proteotypic peptides for targeted analysis. Moreover, the presented platform can also be utilized to integrate metabolomics and transcriptomics data for the analysis of metabolite-protein-transcript correlations and time course analysis using COVAIN. Examples for the integration of MAPA and MASS WESTERN data, proteogenomic and metabolic modeling approaches for functional genomics, phosphoproteomics by integration of MOAC (metal-oxide affinity chromatography) with MAPA, and the integration of metabolomics, transcriptomics, proteomics, and physiological data using this platform are presented. All software and step-by-step tutorials for data processing and data mining can be downloaded from http://www.univie.ac.at/mosys/software.html.
Proteogenomic Investigation of Strain Variation in Clinical Mycobacterium tuberculosis Isolates.
Heunis, Tiaan; Dippenaar, Anzaan; Warren, Robin M; van Helden, Paul D; van der Merwe, Ruben G; Gey van Pittius, Nicolaas C; Pain, Arnab; Sampson, Samantha L; Tabb, David L
2017-10-06
Mycobacterium tuberculosis consists of a large number of different strains that display unique virulence characteristics. Whole-genome sequencing has revealed substantial genetic diversity among clinical M. tuberculosis isolates, and elucidating the phenotypic variation encoded by this genetic diversity will be of the utmost importance to fully understand M. tuberculosis biology and pathogenicity. In this study, we integrated whole-genome sequencing and mass spectrometry (GeLC-MS/MS) to reveal strain-specific characteristics in the proteomes of two clinical M. tuberculosis Latin American-Mediterranean isolates. Using this approach, we identified 59 peptides containing single amino acid variants, which covered ∼9% of all coding nonsynonymous single nucleotide variants detected by whole-genome sequencing. Furthermore, we identified 29 distinct peptides that mapped to a hypothetical protein not present in the M. tuberculosis H37Rv reference proteome. Here, we provide evidence for the expression of this protein in the clinical M. tuberculosis SAWC3651 isolate. The strain-specific databases enabled confirmation of genomic differences (i.e., large genomic regions of difference and nonsynonymous single nucleotide variants) in these two clinical M. tuberculosis isolates and allowed strain differentiation at the proteome level. Our results contribute to the growing field of clinical microbial proteogenomics and can improve our understanding of phenotypic variation in clinical M. tuberculosis isolates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Qiuming; Li, Zhou; Song, Yang
Phosphorus (P) is a scarce nutrient in many tropical ecosystems, yet how soil microbial communities cope with growth-limiting P deficiency at the gene and protein levels remains unknown. Here we report a metagenomic and metaproteomic comparison of microbial communities in P-deficient and P-rich soils in a 17-year fertilization experiment in a tropical forest. The large-scale proteogenomics analyses provided extensive coverage of many microbial functions and taxa in the complex soil communities. A >4-fold increase in the gene abundance of 3-phytase was the strongest response of soil communities to P deficiency. Phytase catalyzes the release of phosphate from phytate, the most recalcitrant P-containing compound in soil organic matter. Genes and proteins for the degradation of P-containing nucleic acids and phospholipids as well as the decomposition of labile carbon and nitrogen were also enhanced in the P-deficient soils. In contrast, microbial communities in the P-rich soils showed increased gene abundances for the degradation of recalcitrant aromatic compounds, the transformation of nitrogenous compounds, and the assimilation of sulfur. Overall, these results demonstrate the adaptive allocation of genes and proteins in soil microbial communities in response to shifting nutrient constraints.
Integrated proteogenomic characterization of human high grade serous ovarian cancer
Zhang, Bai; McDermott, Jason E; Zhou, Jian-Ying; Petyuk, Vladislav A; Chen, Li; Ray, Debjit; Sun, Shisheng; Yang, Feng; Chen, Lijun; Wang, Jing; Shah, Punit; Cha, Seong Won; Aiyetan, Paul; Woo, Sunghee; Tian, Yuan; Gritsenko, Marina A; Clauss, Therese R; Choi, Caitlin; Monroe, Matthew E; Thomas, Stefani; Nie, Song; Wu, Chaochao; Moore, Ronald J; Yu, Kun-Hsing; Tabb, David L; Fenyö, David; Bafna, Vineet; Wang, Yue; Rodriguez, Henry; Boja, Emily S; Hiltke, Tara; Rivers, Robert C; Sokoll, Lori; Zhu, Heng; Shih, Ie-Ming; Cope, Leslie; Pandey, Akhilesh; Zhang, Bing; Snyder, Michael P; Levine, Douglas A; Smith, Richard D
2016-01-01
To provide a detailed analysis of the molecular components and underlying mechanisms associated with ovarian cancer, we performed a comprehensive mass spectrometry-based proteomic characterization of 174 ovarian tumors previously analyzed by The Cancer Genome Atlas (TCGA), of which 169 were high-grade serous carcinomas (HGSC). Integrating our proteomic measurements with the genomic data yielded a number of insights into disease, such as how different copy-number alterations influence the proteome, the proteins associated with chromosomal instability, the sets of signaling pathways that diverse genome rearrangements converge on, as well as the ones most associated with short overall survival. Specific protein acetylations associated with homologous recombination deficiency suggest a potential means for stratifying patients for therapy. In addition to providing a valuable resource, these findings provide a view of how the somatic genome drives the cancer proteome and associations between protein and post-translational modification levels and clinical outcomes in HGSC. PMID:27372738
Unravelling ancient microbial history with community proteogenomics and lipid geochemistry.
Brocks, Jochen J; Banfield, Jillian
2009-08-01
Our window into the Earth's ancient microbial past is narrow and obscured by missing data. However, we can glean information about ancient microbial ecosystems using fossil lipids (biomarkers) that are extracted from billion-year-old sedimentary rocks. In this Opinion article, we describe how environmental genomics and related methodologies will give molecular fossil research a boost, by increasing our knowledge about how evolutionary innovations in microorganisms have changed the surface of planet Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggles, Kelly V.; Tang, Zuojian; Wang, Xuya
Improvements in mass spectrometry (MS)-based peptide sequencing provide a new opportunity to determine whether polymorphisms, mutations and splice variants identified in cancer cells are translated. Herein we therefore describe a proteogenomic data integration tool (QUILTS) and illustrate its application to whole genome, transcriptome and global MS peptide sequence datasets generated from a pair of luminal and basal-like breast cancer patient-derived xenografts (PDX). The sensitivity of proteogenomic analysis for single nucleotide variant (SNV) expression and novel splice junction (NSJ) detection was probed using multiple MS/MS process replicates. Despite over thirty sample replicates, only about 10% of all SNVs (somatic and germline) detected by both DNA and RNA sequencing were observed as peptides. An even smaller proportion of peptides corresponding to NSJs observed by RNA sequencing were detected (<0.1%). Peptides mapping to DNA-detected SNVs without a detectable mRNA transcript were also observed, demonstrating that transcriptome coverage was also incomplete (~80%). In contrast to germline variants, somatic variants were less likely to be detected at the peptide level in the basal-like tumor than in the luminal tumor, raising the possibility of differential translation or protein degradation effects. In conclusion, the QUILTS program integrates DNA, RNA and peptide sequencing to assess the degree to which somatic mutations are translated and therefore biologically active. By identifying gaps in sequence coverage, QUILTS benchmarks current technology and assesses progress toward whole cancer proteome and transcriptome analysis.
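A minimal sketch of the kind of evidence integration described above, computing what fraction of DNA+RNA-supported SNVs surface at the peptide level. This is illustrative only (not the QUILTS implementation), and the variant identifiers are invented:

```python
# Illustrative sketch of proteogenomic evidence integration (not the QUILTS
# code). Variants are keyed by a hypothetical "chrom:pos:alt" identifier.

def detection_rates(dna_snvs, rna_snvs, peptide_snvs):
    """Fraction of DNA+RNA-supported SNVs also observed as peptides,
    plus SNVs seen as peptides despite no RNA support."""
    dna_and_rna = dna_snvs & rna_snvs
    validated = dna_and_rna & peptide_snvs
    peptide_without_rna = (dna_snvs - rna_snvs) & peptide_snvs
    return len(validated) / len(dna_and_rna), peptide_without_rna

dna = {"1:100:A", "2:200:T", "3:300:G", "4:400:C", "5:500:A"}
rna = {"1:100:A", "2:200:T", "3:300:G", "4:400:C"}
pep = {"1:100:A", "5:500:A"}

rate, no_rna = detection_rates(dna, rna, pep)
print(rate)    # 0.25 -- one of four DNA+RNA SNVs observed as a peptide
print(no_rna)  # peptide evidence for an SNV with no detectable transcript
```

The same set arithmetic extends directly to novel splice junctions, with junction coordinates in place of variant keys.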
Strategic and Operational Plan for Integrating Transcriptomics ...
Plans for incorporating high-throughput transcriptomics into the current high-throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.
High-Throughput Experimental Approach Capabilities | Materials Science |
NREL High-Throughput Experimental Approach Capabilities: NREL's high-throughput experimental capabilities include oxysulfide sputtering and, in the Combi-5 chamber, nitride and oxynitride sputtering; we also have several non...
Pacific Northwest National Laboratory (PNNL) investigators in the Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute (NCI) announce the public release of 98 targeted mass spectrometry-based assays for ovarian cancer research studies. The assays were chosen based on proteogenomic observations from the recently published multi-institutional collaborative project between PNNL and Johns Hopkins University that comprehensively examined the collections of proteins in the tumors of ovarian cancer patients (highlighted in a paper in
An estimated 252,710 new cases of female breast cancer, accounting for 15% of all new cancer cases, occurred in 2017. To better understand proteogenomic abnormalities in breast cancer, the National Cancer Institute (NCI) Clinical Proteomic Tumor Analysis Consortium (CPTAC) announces the release of the cancer proteome confirmatory breast study data. The goal of the study was to comprehensively characterize the proteome and phosphoproteome on approximately 100 prospectively collected breast tumor and adjacent normal tissues.
Integrated Proteogenomic Characterization of Human High-Grade Serous Ovarian Cancer.
Zhang, Hui; Liu, Tao; Zhang, Zhen; Payne, Samuel H; Zhang, Bai; McDermott, Jason E; Zhou, Jian-Ying; Petyuk, Vladislav A; Chen, Li; Ray, Debjit; Sun, Shisheng; Yang, Feng; Chen, Lijun; Wang, Jing; Shah, Punit; Cha, Seong Won; Aiyetan, Paul; Woo, Sunghee; Tian, Yuan; Gritsenko, Marina A; Clauss, Therese R; Choi, Caitlin; Monroe, Matthew E; Thomas, Stefani; Nie, Song; Wu, Chaochao; Moore, Ronald J; Yu, Kun-Hsing; Tabb, David L; Fenyö, David; Bafna, Vineet; Wang, Yue; Rodriguez, Henry; Boja, Emily S; Hiltke, Tara; Rivers, Robert C; Sokoll, Lori; Zhu, Heng; Shih, Ie-Ming; Cope, Leslie; Pandey, Akhilesh; Zhang, Bing; Snyder, Michael P; Levine, Douglas A; Smith, Richard D; Chan, Daniel W; Rodland, Karin D
2016-07-28
To provide a detailed analysis of the molecular components and underlying mechanisms associated with ovarian cancer, we performed a comprehensive mass-spectrometry-based proteomic characterization of 174 ovarian tumors previously analyzed by The Cancer Genome Atlas (TCGA), of which 169 were high-grade serous carcinomas (HGSCs). Integrating our proteomic measurements with the genomic data yielded a number of insights into disease, such as how different copy-number alterations influence the proteome, the proteins associated with chromosomal instability, the sets of signaling pathways that diverse genome rearrangements converge on, and the ones most associated with short overall survival. Specific protein acetylations associated with homologous recombination deficiency suggest a potential means for stratifying patients for therapy. In addition to providing a valuable resource, these findings provide a view of how the somatic genome drives the cancer proteome and associations between protein and post-translational modification levels and clinical outcomes in HGSC.
High Throughput PBTK: Open-Source Data and Tools for ...
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.
Stewart, Paul A; Parapatics, Katja; Welsh, Eric A; Müller, André C; Cao, Haoyun; Fang, Bin; Koomen, John M; Eschrich, Steven A; Bennett, Keiryn L; Haura, Eric B
2015-01-01
We performed a pilot proteogenomic study to compare lung adenocarcinoma to lung squamous cell carcinoma using quantitative proteomics (6-plex TMT) combined with a customized Affymetrix GeneChip. Using MaxQuant software, we identified 51,001 unique peptides that mapped to 7,241 unique proteins and from these identified 6,373 genes with matching protein expression for further analysis. We found a minor correlation between gene expression and protein expression; both datasets were able to independently recapitulate known differences between the adenocarcinoma and squamous cell carcinoma subtypes. We found 565 proteins and 629 genes to be differentially expressed between adenocarcinoma and squamous cell carcinoma, with 113 of these consistently differentially expressed at both the gene and protein levels. We then compared our results to published adenocarcinoma versus squamous cell carcinoma proteomic data that we also processed with MaxQuant. We selected two proteins consistently overexpressed in squamous cell carcinoma in all studies, MCT1 (SLC16A1) and GLUT1 (SLC2A1), for further investigation. We found differential expression of these same proteins at the gene level in our study as well as in other public gene expression datasets. These findings combined with survival analysis of public datasets suggest that MCT1 and GLUT1 may be potential prognostic markers in adenocarcinoma and druggable targets in squamous cell carcinoma. Data are available via ProteomeXchange with identifier PXD002622.
Application of ToxCast High-Throughput Screening and ...
Slide presentation at the SETAC annual meeting on high-throughput screening and modeling approaches to identify steroidogenesis disruptors.
Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister
2014-05-01
The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland (FIMM) was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high-throughput screening. The initial focus of the unit was multiwell plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell plate-based assays with a wide range of readout possibilities, with a focus on ultraminiaturization to allow for affordable screening for academic users. In addition to high-throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high-throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, given the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.
High Throughput Screening For Hazard and Risk of Environmental Contaminants
High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...
High Throughput Transcriptomics: From screening to pathways
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
NASA Astrophysics Data System (ADS)
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-12-01
Li-ion batteries are a key technology for addressing the global challenge of clean renewable energy and environment pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by the previous high-throughput calculations cannot meet the requirement of practical applications because only capacity, voltage and volume change of bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we established quantitative description of structure-property relationships in Li-ion battery materials by the intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
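The screening flow described above, filtering candidate materials by bulk properties before applying more expensive structure-property checks, can be sketched as a simple funnel. The material names, property values, and cutoff thresholds below are all hypothetical:

```python
# Minimal sketch of a high-throughput screening funnel for candidate
# cathode materials (names, property values, and cutoffs are hypothetical).

CANDIDATES = [
    {"name": "M1", "voltage_V": 3.9, "capacity_mAh_g": 180, "volume_change_pct": 2.1},
    {"name": "M2", "voltage_V": 2.4, "capacity_mAh_g": 210, "volume_change_pct": 1.0},
    {"name": "M3", "voltage_V": 4.1, "capacity_mAh_g": 150, "volume_change_pct": 9.5},
]

def screen(materials, min_voltage=3.0, min_capacity=160, max_volume_change=5.0):
    """Keep only materials passing all bulk-property thresholds; further
    filters (defect energetics, surface/interface stability, dopability,
    nanosize effects) would be chained here in the same way."""
    return [m["name"] for m in materials
            if m["voltage_V"] >= min_voltage
            and m["capacity_mAh_g"] >= min_capacity
            and m["volume_change_pct"] <= max_volume_change]

print(screen(CANDIDATES))  # ['M1']
```

The review's point is precisely that the later, chained filters beyond capacity, voltage, and volume change are what current high-throughput workflows tend to omit.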
June 29, 2016 — In what is believed to be the largest study of its kind, scientists at Johns Hopkins University (JHU) and Pacific Northwest National Laboratory (PNNL) led a multi-institutional collaborative project that comprehensively examined the collections of proteins in the tumors of 169 ovarian cancer patients to identify critical proteins expressed by their tumors. By integrating their findings about the collection of proteins (the proteome) with information already known about the tumors’ genetic data (the genome), the investigators r
The Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) network, which is a partnership among the National Cancer Institute, Department of Defense, and Department of Veterans Affairs, has tapped the Paulovich Laboratory at Fred Hutchinson Cancer Research Center to create a panel of tests to measure key proteins that can serve as markers for tumors. The effort could ultimately lead to treatments that are more specifically targeted to a patient’s distinct type of cancer.
Investigators from the National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) who comprehensively analyzed 95 human colorectal tumor samples, have determined how gene alterations identified in previous analyses of the same samples are expressed at the protein level. The integration of proteomic and genomic data, or proteogenomics, provides a more comprehensive view of the biological features that drive cancer than genomic analysis alone and may help identify the most important targets for cancer detection and intervention.
A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations | Office of Cancer Genomics
Molecular alterations involving the PI3K/Akt/mTOR pathway (including mutation, copy number, protein, or RNA) were examined across 11,219 human cancers representing 32 major types. Within specific mutated genes, frequency, mutation hotspot residues, in silico predictions, and functional assays were all informative in distinguishing the subset of genetic variants more likely to have functional relevance. Multiple oncogenic pathways including PI3K/Akt/mTOR converged on similar sets of downstream transcriptional targets.
High Throughput Experimental Materials Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakutayev, Andriy; Perkins, John; Schwarting, Marcus
The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).
20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
Evaluation of Sequencing Approaches for High-Throughput Transcriptomics - (BOSC)
Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. The generation of high-throughput global gene expression...
High Throughput Determination of Critical Human Dosing Parameters (SOT)
High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...
High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
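In essence, the RTK conversion described above divides an in vitro bioactive concentration by a modeled steady-state plasma concentration per unit dose to yield an administered equivalent dose. The sketch below illustrates that arithmetic only; the numbers are hypothetical and the function is not part of any EPA software:

```python
# Sketch of HTTK-based reverse dosimetry (reverse toxicokinetics).
# Css is the modeled steady-state plasma concentration resulting from a
# constant 1 mg/kg/day dose; the values below are hypothetical.

def administered_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Convert an in vitro bioactive concentration (AC50, uM) into the
    external dose (mg/kg/day) expected to produce that plasma level."""
    return ac50_uM / css_uM_per_mg_kg_day

# A chemical active at 3.0 uM in vitro, with a modeled Css of 1.5 uM
# per 1 mg/kg/day of exposure:
print(administered_equivalent_dose(3.0, 1.5))  # 2.0 (mg/kg/day)
```

Comparing this administered equivalent dose against exposure estimates then gives the margin-of-exposure style prioritization these HTTK presentations describe.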
Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos
Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...
Jung, Seung-Yong; Notton, Timothy; Fong, Erika; ...
2015-01-07
Particle sorting using acoustofluidics has enormous potential but widespread adoption has been limited by complex device designs and low throughput. Here, we report high-throughput separation of particles and T lymphocytes (600 μL min -1) by altering the net sonic velocity to reposition acoustic pressure nodes in a simple two-channel device. Finally, the approach is generalizable to other microfluidic platforms for rapid, high-throughput analysis.
High-throughput screening (HTS) and modeling of the retinoid ...
Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium, on the application of high-throughput screening and modeling to the retinoid system.
Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)
High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...
High-throughput RAD-SNP genotyping for characterization of sugar beet genotypes
USDA-ARS?s Scientific Manuscript database
High-throughput SNP genotyping provides a rapid way of developing resourceful set of markers for delineating the genetic architecture and for effective species discrimination. In the presented research, we demonstrate a set of 192 SNPs for effective genotyping in sugar beet using high-throughput mar...
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons, National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA's ToxCast program utilizes a wide variety of high-throughput s...
A quantitative literature-curated gold standard for kinase-substrate pairs
2011-01-01
We describe the Yeast Kinase Interaction Database (KID, http://www.moseslab.csb.utoronto.ca/KID/), which contains high- and low-throughput data relevant to phosphorylation events. KID includes 6,225 low-throughput and 21,990 high-throughput interactions, from greater than 35,000 experiments. By quantitatively integrating these data, we identified 517 high-confidence kinase-substrate pairs that we consider a gold standard. We show that this gold standard can be used to assess published high-throughput datasets, suggesting that it will enable similar rigorous assessments in the future. PMID:21492431
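As an illustrative sketch only (not the KID scoring scheme), a high-confidence set can be derived by counting independent supporting experiments per kinase-substrate pair and thresholding; the yeast pairs and the support threshold below are hypothetical:

```python
# Illustrative sketch (not the KID quantitative integration): flag
# kinase-substrate pairs as high-confidence when supported by enough
# independent experiments. Pairs and threshold are hypothetical.
from collections import Counter

observations = [  # one (kinase, substrate) tuple per supporting experiment
    ("CDC28", "SIC1"), ("CDC28", "SIC1"), ("CDC28", "SIC1"),
    ("SNF1", "MIG1"), ("SNF1", "MIG1"),
    ("HOG1", "SKO1"),
]

def gold_standard(obs, min_support=2):
    """Return pairs observed in at least min_support experiments."""
    counts = Counter(obs)
    return {pair for pair, n in counts.items() if n >= min_support}

print(sorted(gold_standard(observations)))
# [('CDC28', 'SIC1'), ('SNF1', 'MIG1')]
```

The actual KID resource weights low- and high-throughput evidence differently rather than counting experiments uniformly; this sketch shows only the thresholding idea.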
High-Throughput Industrial Coatings Research at The Dow Chemical Company.
Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T
2016-09-12
At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., preparing, processing, and evaluating samples in parallel), and miniaturization (i.e., reducing sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.
Tiersch, Terrence R.; Yang, Huiping; Hu, E.
2011-01-01
With the development of genomic research technologies, comparative genome studies among vertebrate species are becoming commonplace for human biomedical research. Fish offer unlimited versatility for biomedical research. Extensive studies are done using these fish models, yielding tens of thousands of specific strains and lines, and the number is increasing every day. Thus, high-throughput sperm cryopreservation is urgently needed to preserve these genetic resources. Although high-throughput processing has been widely applied for sperm cryopreservation in livestock for decades, application in biomedical model fishes is still in the concept-development stage because of the limited sample volumes and the biological characteristics of fish sperm. High-throughput processing in livestock was developed based on advances made in the laboratory and was scaled up for increased processing speed, capability for mass production, and uniformity and quality assurance. Cryopreserved germplasm combined with high-throughput processing constitutes an independent industry encompassing animal breeding, preservation of genetic diversity, and medical research. Currently, there is no specifically engineered system available for high-throughput processing of cryopreserved germplasm for aquatic species. This review discusses the concepts of and needs for high-throughput technology for model fishes, proposes approaches for technical development, and overviews future directions of this approach. PMID:21440666
Multi-Omics Driven Assembly and Annotation of the Sandalwood (Santalum album) Genome.
Mahesh, Hirehally Basavarajegowda; Subba, Pratigya; Advani, Jayshree; Shirke, Meghana Deepak; Loganathan, Ramya Malarini; Chandana, Shankara Lingu; Shilpa, Siddappa; Chatterjee, Oishi; Pinto, Sneha Maria; Prasad, Thottethodi Subrahmanya Keshava; Gowda, Malali
2018-04-01
Indian sandalwood (Santalum album) is an important tropical evergreen tree known for its fragrant heartwood-derived essential oil and its valuable carving wood. Here, we applied an integrated genomic, transcriptomic, and proteomic approach to assemble and annotate the Indian sandalwood genome. Our genome sequencing resulted in the establishment of a draft map of the smallest genome for any woody tree species to date (221 Mb). The genome annotation predicted 38,119 protein-coding genes and 27.42% repetitive DNA elements. In-depth proteome analysis revealed the identities of 72,325 unique peptides, which confirmed 10,076 of the predicted genes. The addition of transcriptomic and proteogenomic approaches resulted in the identification of 53 novel proteins and 34 gene-correction events that were missed by genomic approaches. Proteogenomic analysis also helped in reassigning 1,348 potential noncoding RNAs as bona fide protein-coding messenger RNAs. Gene expression patterns at the RNA and protein levels indicated that peptide sequencing was useful in capturing proteins encoded by nuclear and organellar genomes alike. Mass spectrometry-based proteomic evidence provided an unbiased approach toward the identification of proteins encoded by organellar genomes. Such proteins are often missed in transcriptome data sets due to the enrichment of only messenger RNAs that contain poly(A) tails. Overall, the use of integrated omic approaches enhanced the quality of the assembly and annotation of this nonmodel plant genome. The availability of genomic, transcriptomic, and proteomic data will enhance genomics-assisted breeding, germplasm characterization, and conservation of sandalwood trees.
Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich
2006-01-01
The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be comparable to that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly sped up our drug discovery process.
TCP Throughput Profiles Using Measurements over Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
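A toy two-phase model in the spirit of the ramp-up/sustainment abstraction described above (not the authors' model; peak rate, transfer duration, and initial window are hypothetical) shows why average throughput falls only slowly with RTT when the sustained phase dominates:

```python
# Toy two-phase (ramp-up + sustainment) TCP throughput sketch, loosely
# following the abstract's description; all parameters are hypothetical.

def mean_throughput_gbps(rtt_s, peak_gbps=9.5, duration_s=60.0, init_window_mb=0.1):
    """Average throughput of a flow that doubles its per-RTT rate
    (slow start) until reaching peak_gbps, then sustains the peak."""
    rate = init_window_mb * 8 / 1000 / rtt_s  # Gbps from the initial window
    t, sent_gbit = 0.0, 0.0
    while rate < peak_gbps and t + rtt_s < duration_s:
        sent_gbit += rate * rtt_s  # data sent during this RTT of slow start
        rate *= 2
        t += rtt_s
    sent_gbit += min(rate, peak_gbps) * (duration_s - t)  # sustainment phase
    return sent_gbit / duration_s

for rtt_ms in (10, 100, 366):
    print(rtt_ms, round(mean_throughput_gbps(rtt_ms / 1000), 2))
```

Because slow start occupies only a handful of RTTs out of a long transfer, the mean stays close to the peak across the whole 0-366 ms range, giving the concave profile; a loss-driven model without a sustained phase would instead decay convexly as roughly 1/RTT.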
High throughput toxicology programs, such as ToxCast and Tox21, have provided biological effects data for thousands of chemicals at multiple concentrations. Compared to traditional, whole-organism approaches, high throughput assays are rapid and cost-effective, yet they generall...
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
NASA Astrophysics Data System (ADS)
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented.
[Current applications of high-throughput DNA sequencing technology in antibody drug research].
Yu, Xin; Liu, Qi-Gang; Wang, Ming-Rong
2012-03-01
Since the 2005 publication of a high-throughput DNA sequencing technology based on PCR carried out in oil emulsions, high-throughput DNA sequencing platforms have evolved into a robust technology for sequencing genomes and diverse DNA libraries. Antibody libraries with vast numbers of members currently serve as a foundation for discovering novel antibody drugs, and high-throughput DNA sequencing technology makes it possible to rapidly identify functional antibody variants with desired properties. Herein we present a review of current applications of high-throughput DNA sequencing technology in the analysis of antibody library diversity, sequencing of CDR3 regions, identification of potent antibodies based on sequence frequency, discovery of functional genes, and combination with various display technologies, so as to provide an alternative approach to the discovery and development of antibody drugs.
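The frequency-based identification described above can be sketched as a simple enrichment ranking between two selection rounds. Everything here is illustrative: the function name, the add-one smoothing scheme, and the toy sequences are assumptions, not taken from the reviewed methods.

```python
from collections import Counter

def rank_by_enrichment(pre_round, post_round, min_count=2):
    """Rank sequences (e.g., CDR3s) by frequency enrichment between
    two selection rounds. Toy sketch; names and smoothing are assumptions."""
    pre, post = Counter(pre_round), Counter(post_round)
    total_pre, total_post = sum(pre.values()), sum(post.values())
    scores = {}
    for seq, n in post.items():
        if n < min_count:
            continue  # drop singletons, which are often sequencing noise
        # add-one smoothing so sequences unseen before selection still score
        f_pre = (pre[seq] + 1) / (total_pre + 1)
        f_post = n / total_post
        scores[seq] = f_post / f_pre
    return sorted(scores, key=scores.get, reverse=True)
```

With a toy library where "B" expands from 50% to 80% of reads across rounds, "B" ranks above "A".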
Lessons from high-throughput protein crystallization screening: 10 years of practical experience
Luft, JR; Snell, EH; DeTitta, GT
2011-01-01
Introduction X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert Opinion High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073
High-throughput screening based on label-free detection of small molecule microarrays
NASA Astrophysics Data System (ADS)
Zhu, Chenggang; Fei, Yiyan; Zhu, Xiangdong
2017-02-01
Based on small-molecule microarrays (SMMs) and an oblique-incidence reflectivity difference (OI-RD) scanner, we have developed a novel high-throughput preliminary drug screening platform based on label-free monitoring of direct interactions between target proteins and immobilized small molecules. The screening platform is especially attractive for screening compounds against targets of unknown function and/or structure that are not compatible with functional assay development. In this platform, the OI-RD scanner serves as a label-free detection instrument able to monitor about 15,000 biomolecular interactions in a single experiment without the need to label any biomolecule. In addition, SMMs serve as a novel format for high-throughput screening through the immobilization of tens of thousands of different compounds on a single phenyl-isocyanate-functionalized glass slide. Using this platform, we sequentially screened five target proteins (purified target proteins or cell lysates containing the target protein) in high-throughput, label-free mode. We found hits for each target protein, and the inhibitory effects of some hits were confirmed by follow-up functional assays. Compared to traditional high-throughput screening assays, this platform offers many advantages, including minimal sample consumption, minimal distortion of interactions through label-free detection, and multi-target screening, giving it great potential as a complementary screening platform in the field of drug discovery.
High-throughput analysis of yeast replicative aging using a microfluidic system
Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong
2015-01-01
Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cells, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317
pyGeno: A Python package for precision medicine and proteogenomics.
Daouda, Tariq; Perreault, Claude; Lemieux, Sébastien
2016-01-01
pyGeno is a Python package mainly intended for precision medicine applications that revolve around genomics and proteomics. It integrates reference sequences and annotations from Ensembl, genomic polymorphisms from the dbSNP database and data from next-gen sequencing into an easy to use, memory-efficient and fast framework, therefore allowing the user to easily explore subject-specific genomes and proteomes. Compared to a standalone program, pyGeno gives the user access to the complete expressivity of Python, a general programming language. Its range of application therefore encompasses both short scripts and large scale genome-wide studies. PMID:27785359
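As a rough illustration of what a subject-specific proteome involves (not pyGeno's actual API), one can apply a polymorphism to a reference coding sequence and translate the result. The helper names and the tiny codon table below are hypothetical, included only to make the idea concrete.

```python
# Hypothetical helpers illustrating the idea behind subject-specific
# proteomes: apply a SNP to a reference coding sequence, then translate.
# Not pyGeno's actual API; the codon table is a tiny subset for the demo.
CODONS = {"ATG": "M", "GCT": "A", "GTT": "V", "TGA": "*"}

def apply_snp(cds, pos, alt):
    """Return the coding sequence with the base at `pos` replaced by `alt`."""
    return cds[:pos] + alt + cds[pos + 1:]

def translate(cds):
    """Translate a coding sequence codon by codon (subset table only)."""
    return "".join(CODONS[cds[i:i + 3]] for i in range(0, len(cds), 3))
```

A C→T substitution in the second codon turns an alanine into a valine in the predicted protein.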
Erickson, Heidi S
2012-09-28
The future of personalized medicine depends on the ability to efficiently and rapidly elucidate a reliable set of disease-specific molecular biomarkers. High-throughput molecular biomarker analysis methods have been developed to identify disease risk, diagnostic, prognostic, and therapeutic targets in human clinical samples. Currently, high throughput screening allows us to analyze thousands of markers from one sample or one marker from thousands of samples and will eventually allow us to analyze thousands of markers from thousands of samples. Unfortunately, the inherent nature of current high throughput methodologies, clinical specimens, and cost of analysis is often prohibitive for extensive high throughput biomarker analysis. This review summarizes the current state of high throughput biomarker screening of clinical specimens applicable to genetic epidemiology and longitudinal population-based studies with a focus on considerations related to biospecimens, laboratory techniques, and sample pooling. Copyright © 2012 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Continuous Compliance With Operating Limits-High Throughput Transfer Racks 9 Table 9 to Subpart EEEE of Part 63 Protection of Environment...—Continuous Compliance With Operating Limits—High Throughput Transfer Racks As stated in §§ 63.2378(a) and (b...
Accelerating the design of solar thermal fuel materials through high throughput simulations.
Liu, Yun; Grossman, Jeffrey C
2014-12-10
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
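The final selection step the authors describe, keeping candidates whose computed isomerization enthalpy is high, amounts to a simple filter over per-molecule energies. The function name, threshold, and energy values below are illustrative assumptions, not values from the study.

```python
def screen_stf(energies, min_delta_h=100.0):
    """Keep solar-thermal-fuel candidates whose isomerization enthalpy
    (metastable minus ground-state energy, in kJ/mol) exceeds a threshold.
    Illustrative filter; all names and numbers are hypothetical."""
    hits = {}
    for name, (e_ground, e_meta) in energies.items():
        delta_h = e_meta - e_ground
        if delta_h >= min_delta_h:
            hits[name] = delta_h
    return hits
```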
Scafaro, Andrew P; Negrini, A Clarissa A; O'Leary, Brendan; Rashid, F Azzahra Ahmad; Hayes, Lucy; Fan, Yuzhen; Zhang, You; Chochois, Vincent; Badger, Murray R; Millar, A Harvey; Atkin, Owen K
2017-01-01
Mitochondrial respiration in the dark (Rdark) is a critical plant physiological process, and hence a reliable, efficient and high-throughput method of measuring variation in rates of Rdark is essential for agronomic and ecological studies. However, the methods currently used to measure Rdark in plant tissues are typically low throughput. We assessed a high-throughput automated fluorophore system for detecting multiple O2 consumption rates. The fluorophore technique was compared with O2 electrodes, infrared gas analysers (IRGA), and membrane inlet mass spectrometry to determine the accuracy and speed of detecting respiratory fluxes. The high-throughput fluorophore system provided stable measurements of Rdark in detached leaf and root tissues over many hours. High-throughput potential was evident in that the fluorophore system was 10- to 26-fold faster per sample measurement than other conventional methods. The versatility of the technique was evident in its enabling: (1) rapid screening of Rdark in 138 genotypes of wheat; and (2) quantification of rarely assessed whole-plant Rdark through dissection and simultaneous measurement of above- and below-ground organs. Variation in absolute Rdark was observed between techniques, likely due to variation in sample conditions (i.e. liquid vs. gas phase, open vs. closed systems), indicating that comparisons between studies using different measuring apparatus may not be feasible. However, the high-throughput protocol we present provided values of Rdark similar to those from the IRGA instruments most commonly employed by plant scientists. Together with the greater than tenfold increase in sample processing speed, we conclude that the high-throughput protocol enables reliable, stable and reproducible measurements of Rdark on multiple samples simultaneously, irrespective of plant or tissue type.
Asati, Atul; Kachurina, Olga; Kachurin, Anatoly
2012-01-01
Considering the importance of ganglioside antibodies as biomarkers in various immune-mediated neuropathies and neurological disorders, we developed a high-throughput multiplexing tool for the assessment of ganglioside-specific antibodies based on the Bio-Plex/Luminex platform. In this report, we demonstrate that the ganglioside high-throughput multiplexing tool is robust and highly specific, and demonstrates ∼100-fold higher concentration sensitivity for IgG detection than ELISA. In addition to the ganglioside-coated array, the high-throughput multiplexing tool contains beads coated with influenza hemagglutinins derived from the H1N1 A/Brisbane/59/07 and H1N1 A/California/07/09 strains. Influenza beads provided the added advantage of simultaneous detection of ganglioside- and influenza-specific antibodies, a capacity important for assaying both infectious antigen-specific and autoimmune antibodies following vaccination or disease. Taken together, these results support the potential adoption of the ganglioside high-throughput multiplexing tool for measuring ganglioside antibodies in various neuropathic and neurological disorders. PMID:22952605
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis
Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.
2016-09-23
High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
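A Tauc analysis of the kind described plots (αE)^n against photon energy E (n = 2 for direct-allowed transitions) and extrapolates the linear rise to the energy axis. A minimal sketch follows; the rising-edge selection heuristic (20-80% of the maximum) is a simplifying assumption, not the authors' full algorithm.

```python
import numpy as np

def tauc_band_gap(energy_ev, alpha, n=2):
    """Estimate a band gap from a Tauc plot: (alpha*E)**n vs E, with
    n=2 for direct-allowed transitions. Fit the steep linear rise
    (here crudely taken as 20-80% of the maximum) and return the
    energy-axis intercept. Simplified sketch of the automated analysis."""
    y = (alpha * energy_ev) ** n
    mask = (y >= 0.2 * y.max()) & (y <= 0.8 * y.max())
    slope, intercept = np.polyfit(energy_ev[mask], y[mask], 1)
    return -intercept / slope  # x-intercept = band gap estimate
```

On a synthetic direct-gap spectrum with a 2.0 eV gap, the extrapolation recovers the gap.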
2015-01-01
High-throughput production of nanoparticles (NPs) with controlled quality is critical for their clinical translation into effective nanomedicines for diagnostics and therapeutics. Here we report a simple and versatile coaxial turbulent jet mixer that can synthesize a variety of NPs at high throughput up to 3 kg/d, while maintaining the advantages of homogeneity, reproducibility, and tunability that are normally accessible only in specialized microscale mixing devices. The device fabrication does not require specialized machining and is easy to operate. As one example, we show reproducible, high-throughput formulation of siRNA-polyelectrolyte polyplex NPs that exhibit effective gene knockdown but exhibit significant dependence on batch size when formulated using conventional methods. The coaxial turbulent jet mixer can accelerate the development of nanomedicines by providing a robust and versatile platform for preparation of NPs at throughputs suitable for in vivo studies, clinical trials, and industrial-scale production. PMID:24824296
Park, Gun Wook; Hwang, Heeyoun; Kim, Kwang Hoe; Lee, Ju Yeon; Lee, Hyun Kyoung; Park, Ji Yeong; Ji, Eun Sun; Park, Sung-Kyu Robin; Yates, John R; Kwon, Kyung-Hoon; Park, Young Mok; Lee, Hyoung-Joo; Paik, Young-Ki; Kim, Jin Young; Yoo, Jong Shin
2016-11-04
In the Chromosome-Centric Human Proteome Project (C-HPP), false-positive identification by peptide spectrum matches (PSMs) after database searches is a major issue for proteogenomic studies using liquid-chromatography and mass-spectrometry-based large proteomic profiling. Here we developed a simple strategy for protein identification, with a controlled false discovery rate (FDR) at the protein level, using an integrated proteomic pipeline (IPP) that consists of the following four steps. First, using three different search engines, SEQUEST, MASCOT, and MS-GF+, individual proteomic searches were performed against the neXtProt database. Second, the search results from the PSMs were combined using statistical evaluation tools including DTASelect and Percolator. Third, the peptide search scores were converted into E-scores normalized using an in-house program. Last, ProteinInferencer was used to filter the proteins containing two or more peptides with a controlled FDR of 1.0% at the protein level. Finally, we compared the performance of the IPP to a conventional proteomic pipeline (CPP) for protein identification using a controlled FDR of <1% at the protein level. Using the IPP, a total of 5756 proteins (vs 4453 using the CPP) including 477 alternative splicing variants (vs 182 using the CPP) were identified from human hippocampal tissue. In addition, a total of 10 missing proteins (vs 7 using the CPP) were identified with two or more unique peptides, and their tryptic peptides were validated using MS/MS spectral patterns from a repository database or their corresponding synthetic peptides. This study shows that the IPP effectively improved the identification of proteins, including alternative splicing variants and missing proteins, in human hippocampal tissues for the C-HPP. All RAW files used in this study were deposited in ProteomeXchange (PXD000395).
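The final filtering step, controlling FDR at the protein level with a two-peptide requirement, can be illustrated with a toy target-decoy walk-down. The data layout and function name are assumptions, and real tools such as Percolator and ProteinInferencer are considerably more sophisticated.

```python
def protein_fdr_filter(proteins, max_fdr=0.01, min_peptides=2):
    """Toy protein-level target-decoy FDR control. `proteins` holds
    (name, score, n_peptides, is_decoy) tuples, higher score = better.
    Walk down the ranked list, keeping targets while the running
    decoy/target ratio stays within max_fdr. Sketch only."""
    ranked = sorted(proteins, key=lambda p: p[1], reverse=True)
    kept, decoys, targets = [], 0, 0
    for name, _score, n_pep, is_decoy in ranked:
        if n_pep < min_peptides:
            continue  # enforce the two-or-more-peptides rule
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets > max_fdr:
            break  # estimated FDR exceeded; stop accepting proteins
        if not is_decoy:
            kept.append(name)
    return kept
```

With one decoy ranked third among four entries and a 40% cutoff, only the two proteins above the decoy survive.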
Grossmann, Jonas; Fernández, Helena; Chaubey, Pururawa M; Valdés, Ana E; Gagliardini, Valeria; Cañal, María J; Russo, Giancarlo; Grossniklaus, Ueli
2017-01-01
Performing proteomic studies on non-model organisms with little or no genomic information is still difficult. However, many specific processes and biochemical pathways occur only in species that are poorly characterized at the genomic level. For example, many plants can reproduce both sexually and asexually, the former allowing the generation of new genotypes and the latter their fixation. Thus, both modes of reproduction are of great agronomic value. However, the molecular basis of asexual reproduction is not well understood in any plant. In ferns, it combines the production of unreduced spores (diplospory) and the formation of sporophytes from somatic cells (apogamy). To lay the basis for studying these processes, we performed transcriptomics by next-generation sequencing (NGS) and shotgun proteomics by tandem mass spectrometry in the apogamous fern D. affinis ssp. affinis. For protein identification we used the public viridiplantae database (VPDB) to identify orthologous proteins from other plant species and new transcriptomics data to generate a "species-specific transcriptome database" (SSTDB). In total 1,397 protein clusters with 5,865 unique peptide sequences were identified (13 decoy proteins out of 1,410, protFDR 0.93% on protein cluster level). We show that using the SSTDB for protein identification increases the number of identified peptides almost four times compared to using only the publicly available VPDB. We identified homologs of proteins involved in reproduction of higher plants, including proteins with a potential role in apogamy. With the increasing availability of genomic data from non-model species, similar proteogenomics approaches will improve the sensitivity of protein identification for species only distantly related to models.
Li, Fumin; Wang, Jun; Jenkins, Rand
2016-05-01
There is an ever-increasing demand for high-throughput LC-MS/MS bioanalytical assays to support drug discovery and development. Matrix effects of sofosbuvir (protonated) and paclitaxel (sodiated) were thoroughly evaluated using high-throughput chromatography (defined as having a run time ≤1 min) under 14 elution conditions with extracts from protein precipitation, liquid-liquid extraction and solid-phase extraction. A slight separation, in terms of retention time, between underlying matrix components and sofosbuvir/paclitaxel can greatly alleviate matrix effects. High-throughput chromatography, with proper optimization, can provide rapid and effective chromatographic separation under 1 min to alleviate matrix effects and enhance assay ruggedness for regulated bioanalysis.
High throughput system for magnetic manipulation of cells, polymers, and biomaterials
Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.
2008-01-01
In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(−2.7±0.1). We employ this system to measure the stiffness of S2R+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments. PMID:19044357
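The reported power-law calibration, F ∼ r^(−2.7±0.1), is the kind of relationship obtained by a least-squares fit in log-log space. A minimal sketch, with hypothetical calibration data:

```python
import math

def fit_power_law_exponent(r, force):
    """Least-squares slope of log(force) vs log(r), i.e. the exponent k
    in F = A * r**k. Illustrative only; data below are hypothetical."""
    pts = [(math.log(ri), math.log(fi)) for ri, fi in zip(r, force)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den
```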
Kračun, Stjepan Krešimir; Fangel, Jonatan Ulrik; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Vidal-Melgosa, Silvia; Willats, William George Tycho
2017-01-01
Cell walls are an important feature of plant cells and a major component of the plant glycome. They have both structural and physiological functions and are critical for plant growth and development. The diversity and complexity of these structures demand advanced high-throughput techniques to answer questions about their structure, functions and roles in both fundamental and applied scientific fields. Microarray technology provides both the high-throughput and the feasibility aspects required to meet that demand. In this chapter, some of the most recent microarray-based techniques relating to plant cell walls are described together with an overview of related contemporary techniques applied to carbohydrate microarrays and their general potential in glycoscience. A detailed experimental procedure for high-throughput mapping of plant cell wall glycans using the comprehensive microarray polymer profiling (CoMPP) technique is included in the chapter and provides a good example of both the robust and high-throughput nature of microarrays as well as their applicability to plant glycomics.
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in the analysis of high-throughput data.
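The seek-connected-subnetworks idea can be sketched as a greedy seed-and-expand search. This is an illustration of the concept only, not the authors' published algorithm; the threshold and similarity values are assumptions.

```python
def find_module(graph, similarity, seed, threshold=0.5):
    """Greedy seed-and-expand search for a connected sub-network whose
    members are mutually similar: repeatedly add the neighbor with the
    highest average similarity to the current module, while that
    average stays above `threshold`. Conceptual sketch only."""
    module = {seed}
    while True:
        frontier = {v for u in module for v in graph[u]} - module
        best, best_score = None, threshold
        for v in frontier:
            avg = sum(similarity[frozenset((v, u))] for u in module) / len(module)
            if avg > best_score:
                best, best_score = v, avg
        if best is None:
            return module
        module.add(best)
```

On a toy four-node network where "d" is only weakly similar to the rest, the search stops after absorbing "a", "b", and "c".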
Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering.
Groen, Nathalie; Guvendiren, Murat; Rabitz, Herschel; Welsh, William J; Kohn, Joachim; de Boer, Jan
2016-04-01
The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. Copyright © 2016. Published by Elsevier Ltd.
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions, so current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, the cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required compared with single-sample automated cell counters. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
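The underlying bookkeeping, converting AO (live) and PI (dead) counts in a known imaged volume into concentration and viability, is straightforward. A minimal sketch with hypothetical parameter names:

```python
def concentration_and_viability(live, dead, imaged_volume_ul, dilution=1.0):
    """Convert AO (live) and PI (dead) counts in a known imaged volume
    (microliters) into concentration (cells/mL) and viability (%).
    Generic bookkeeping; parameter names and numbers are illustrative."""
    total = live + dead
    concentration = total / imaged_volume_ul * 1000.0 * dilution
    viability = 100.0 * live / total if total else 0.0
    return concentration, viability
```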
40 CFR Table 3 to Subpart Eeee of... - Operating Limits-High Throughput Transfer Racks
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits-High Throughput Transfer Racks 3 Table 3 to Subpart EEEE of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION... Throughput Transfer Racks As stated in § 63.2346(e), you must comply with the operating limits for existing...
Dawes, Timothy D; Turincio, Rebecca; Jones, Steven W; Rodriguez, Richard A; Gadiagellan, Dhireshan; Thana, Peter; Clark, Kevin R; Gustafson, Amy E; Orren, Linda; Liimatta, Marya; Gross, Daniel P; Maurer, Till; Beresini, Maureen H
2016-02-01
Acoustic droplet ejection (ADE) as a means of transferring library compounds has had a dramatic impact on the way in which high-throughput screening campaigns are conducted in many laboratories. Two Labcyte Echo ADE liquid handlers form the core of the compound transfer operation in our 1536-well based ultra-high-throughput screening (uHTS) system. Use of these instruments has promoted flexibility in compound formatting in addition to minimizing waste and eliminating compound carryover. We describe the use of ADE for the generation of assay-ready plates for primary screening as well as for follow-up dose-response evaluations. Custom software has enabled us to harness the information generated by the ADE instrumentation. Compound transfer via ADE also contributes to the screening process outside of the uHTS system. A second fully automated ADE-based system has been used to augment the capacity of the uHTS system as well as to permit efficient use of previously picked compound aliquots for secondary assay evaluations. Essential to the utility of ADE in the high-throughput screening process is the high quality of the resulting data. Examples of data generated at various stages of high-throughput screening campaigns are provided. Advantages and disadvantages of the use of ADE in high-throughput screening are discussed. © 2015 Society for Laboratory Automation and Screening.
An Automated High-Throughput System to Fractionate Plant Natural Products for Drug Discovery
Tu, Ying; Jeffries, Cynthia; Ruan, Hong; Nelson, Cynthia; Smithson, David; Shelat, Anang A.; Brown, Kristin M.; Li, Xing-Cong; Hester, John P.; Smillie, Troy; Khan, Ikhlas A.; Walker, Larry; Guy, Kip; Yan, Bing
2010-01-01
The development of an automated, high-throughput fractionation procedure to prepare and analyze natural product libraries for drug discovery screening is described. Natural products obtained from plant materials worldwide were extracted and first prefractionated on polyamide solid-phase extraction cartridges to remove polyphenols, followed by high-throughput automated fractionation, drying, weighing, and reformatting for screening and storage. The analysis of fractions with UPLC coupled with MS, PDA and ELSD detectors provides information that facilitates characterization of compounds in active fractions. Screening of a portion of fractions yielded multiple assay-specific hits in several high-throughput cellular screening assays. This procedure modernizes the traditional natural product fractionation paradigm by seamlessly integrating automation, informatics, and multimodal analytical interrogation capabilities. PMID:20232897
High-throughput measurements of the optical redox ratio using a commercial microplate reader.
Cannon, Taylor M; Shah, Amy T; Walsh, Alex J; Skala, Melissa C
2015-01-01
There is a need for accurate, high-throughput, functional measures to gauge the efficacy of potential drugs in living cells. As an early marker of drug response in cells, cellular metabolism provides an attractive platform for high-throughput drug testing. Optical techniques can noninvasively monitor NADH and FAD, two autofluorescent metabolic coenzymes. The autofluorescent redox ratio, defined as the autofluorescence intensity of NADH divided by that of FAD, quantifies relative rates of cellular glycolysis and oxidative phosphorylation. However, current microscopy methods for redox ratio quantification are time-intensive and low-throughput, limiting their practicality in drug screening. Alternatively, high-throughput commercial microplate readers quickly measure fluorescence intensities for hundreds of wells. This study found that a commercial microplate reader can differentiate the receptor status of breast cancer cell lines (p < 0.05) based on redox ratio measurements without extrinsic contrast agents. Furthermore, microplate reader redox ratio measurements resolve response (p < 0.05) and lack of response (p > 0.05) in cell lines that are responsive and nonresponsive, respectively, to the breast cancer drug trastuzumab. These studies indicate that microplate readers can be used to measure the redox ratio in a high-throughput manner and are sensitive enough to detect differences in cellular metabolism that are consistent with microscopy results.
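The redox ratio defined in this abstract is a per-well division of the two autofluorescence intensities. A minimal sketch with hypothetical background-subtracted plate-reader values (the well names and intensities are illustrative, not from the study):

```python
def redox_ratio(nadh_intensity, fad_intensity):
    """Optical redox ratio as defined in the abstract: NADH / FAD."""
    return nadh_intensity / fad_intensity

# Hypothetical background-subtracted intensities, (NADH, FAD) per well
wells = {"control": (1200.0, 800.0), "treated": (600.0, 900.0)}
ratios = {name: redox_ratio(n, f) for name, (n, f) in wells.items()}
print(ratios["control"])  # 1.5
```

A lower ratio in a treated well would be consistent with a shift away from glycolysis relative to oxidative phosphorylation, which is the metabolic contrast the assay exploits.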
A high-throughput in vitro ring assay for vasoactivity using magnetic 3D bioprinting
Tseng, Hubert; Gage, Jacob A.; Haisler, William L.; Neeley, Shane K.; Shen, Tsaiwei; Hebel, Chris; Barthlow, Herbert G.; Wagoner, Matthew; Souza, Glauco R.
2016-01-01
Vasoactive liabilities are typically assayed using wire myography, which is limited by its high cost and low throughput. To meet the demand for higher throughput in vitro alternatives, this study introduces a magnetic 3D bioprinting-based vasoactivity assay. The principle behind this assay is the magnetic printing of vascular smooth muscle cells into 3D rings that functionally represent blood vessel segments, whose contraction can be altered by vasodilators and vasoconstrictors. A cost-effective imaging modality employing a mobile device is used to capture contraction with high throughput. The goal of this study was to validate ring contraction as a measure of vasoactivity, using a small panel of known vasoactive drugs. In vitro responses of the rings matched outcomes predicted by in vivo pharmacology, and were supported by immunohistochemistry. Altogether, this ring assay robustly models vasoactivity, which could meet the need for higher throughput in vitro alternatives. PMID:27477945
An image analysis toolbox for high-throughput C. elegans assays
Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H.; Riklin-Raviv, Tammy; Conery, Annie L.; O’Rourke, Eyleen J.; Sokolnicki, Katherine L.; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E.; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M.; Carpenter, Anne E.
2012-01-01
We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease. PMID:22522656
High-throughput, image-based screening of pooled genetic variant libraries
Emanuel, George; Moffitt, Jeffrey R.; Zhuang, Xiaowei
2018-01-01
Image-based, high-throughput screening of genetic perturbations will advance both biology and biotechnology. We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in numerous individual cells. We achieve genotyping by introducing barcoded genetic variants into cells and using massively multiplexed FISH to measure the barcodes. We demonstrated this method by screening mutants of the fluorescent protein YFAST, yielding brighter and more photostable YFAST variants. PMID:29083401
Experimental Design for Combinatorial and High Throughput Materials Development
NASA Astrophysics Data System (ADS)
Cawse, James N.
2002-12-01
In the past decade, combinatorial and high throughput experimental methods have revolutionized the pharmaceutical industry, allowing researchers to conduct more experiments in a week than was previously possible in a year. Now high throughput experimentation is rapidly spreading from its origins in the pharmaceutical world to larger industrial research establishments such as GE and DuPont, and even to smaller companies and universities. Consequently, researchers need to know the kinds of problems, desired outcomes, and appropriate patterns for these new strategies. Editor James Cawse's far-reaching study identifies and applies, with specific examples, these important new principles and techniques. Experimental Design for Combinatorial and High Throughput Materials Development progresses from methods that are now standard, such as gradient arrays, to mathematical developments that are breaking new ground. The former will be particularly useful to researchers entering the field, while the latter should inspire and challenge advanced practitioners. The book's contents are contributed by leading researchers in their respective fields. Chapters include:
- High Throughput Synthetic Approaches for the Investigation of Inorganic Phase Space
- Combinatorial Mapping of Polymer Blends Phase Behavior
- Split-Plot Designs
- Artificial Neural Networks in Catalyst Development
- The Monte Carlo Approach to Library Design and Redesign
This book also contains over 200 useful charts and drawings. Industrial chemists, chemical engineers, materials scientists, and physicists working in combinatorial and high throughput chemistry will find James Cawse's study to be an invaluable resource.
Deciphering the genomic targets of alkylating polyamide conjugates using high-throughput sequencing
Chandran, Anandhakumar; Syed, Junetha; Taylor, Rhys D.; Kashiwazaki, Gengo; Sato, Shinsuke; Hashiya, Kaori; Bando, Toshikazu; Sugiyama, Hiroshi
2016-01-01
Chemically engineered small molecules targeting specific genomic sequences play an important role in drug development research. Pyrrole-imidazole polyamides (PIPs) are a group of molecules that can bind to the DNA minor-groove and can be engineered to target specific sequences. Their biological effects rely primarily on their selective DNA binding. However, the binding mechanism of PIPs at the chromatinized genome level is poorly understood. Herein, we report a method using high-throughput sequencing to identify the DNA-alkylating sites of PIP-indole-seco-CBI conjugates. High-throughput sequencing analysis of conjugate 2 showed highly similar DNA-alkylating sites on synthetic oligos (histone-free DNA) and on human genomes (chromatinized DNA context). To our knowledge, this is the first report identifying alkylation sites across genomic DNA by alkylating PIP conjugates using high-throughput sequencing. PMID:27098039
Development of rapid and sensitive high throughput pharmacologic assays for marine phycotoxins.
Van Dolah, F M; Finley, E L; Haynes, B L; Doucette, G J; Moeller, P D; Ramsdell, J S
1994-01-01
The lack of rapid, high throughput assays is a major obstacle to many aspects of research on marine phycotoxins. Here we describe the application of microplate scintillation technology to develop high throughput assays for several classes of marine phycotoxin based on their differential pharmacologic actions. High throughput "drug discovery" format microplate receptor binding assays developed for brevetoxins/ciguatoxins and for domoic acid are described. Analysis for brevetoxins/ciguatoxins is carried out by binding competition with [3H] PbTx-3 for site 5 on the voltage dependent sodium channel in rat brain synaptosomes. Analysis of domoic acid is based on binding competition with [3H] kainic acid for the kainate/quisqualate glutamate receptor using frog brain synaptosomes. In addition, a high throughput microplate 45Ca flux assay for determination of maitotoxins is described. These microplate assays can be completed within 3 hours, have sensitivities of less than 1 ng, and can analyze dozens of samples simultaneously. The assays have been demonstrated to be useful for assessing algal toxicity and for assay-guided purification of toxins, and are applicable to the detection of biotoxins in seafood.
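Competition binding data of the kind described here are conventionally converted from an IC50 to an inhibition constant Ki with the Cheng-Prusoff equation, Ki = IC50 / (1 + [L]/Kd). A sketch with illustrative numbers, not values from these assays:

```python
def cheng_prusoff_ki(ic50_nM, radioligand_nM, kd_nM):
    """Ki from a competition-binding IC50, the radioligand concentration
    [L], and the radioligand's dissociation constant Kd (Cheng-Prusoff)."""
    return ic50_nM / (1.0 + radioligand_nM / kd_nM)

# Hypothetical values: IC50 = 30 nM, radioligand at 2 nM, Kd = 2 nM
ki = cheng_prusoff_ki(30.0, 2.0, 2.0)
print(ki)  # 15.0 nM
```

The correction matters precisely in microplate formats like these, where the radioligand is run at a fixed concentration near its Kd across many samples.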
High-Throughput/High-Content Screening Assays with Engineered Nanomaterials in ToxCast
High-throughput and high-content screens are attractive approaches for prioritizing nanomaterial hazards and informing targeted testing due to the impracticality of using traditional toxicological testing on the large numbers and varieties of nanomaterials. The ToxCast program a...
Moore, Priscilla A; Kery, Vladimir
2009-01-01
High-throughput protein purification is a complex, multi-step process. There are several technical challenges in the course of this process that are not experienced when purifying a single protein. Among the most challenging are high-throughput protein concentration and buffer exchange, which are not only labor-intensive but can also result in significant losses of purified proteins. We describe two methods of high-throughput protein concentration and buffer exchange: one using ammonium sulfate precipitation and one using micro-concentrating devices based on membrane ultrafiltration. We evaluated the efficiency of both methods on a set of 18 randomly selected purified proteins from Shewanella oneidensis. While both methods provide similar yield and efficiency, ammonium sulfate precipitation is much less labor-intensive and time-consuming than ultrafiltration.
NASA Astrophysics Data System (ADS)
Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2018-02-01
Biophysical properties of cells can complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell size and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, the limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes from both amplitude and phase images. These phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy: >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell-cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
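One of the phase-derived phenotypes mentioned, dry mass, follows from the standard QPM relation m = (λ / 2πα) · Σφ · A_pixel, where α ≈ 0.2 µm³/pg is the specific refractive increment of cellular material. A sketch under those assumptions; the phase map, pixel size, and wavelength below are illustrative, not the paper's values:

```python
import math

def dry_mass_pg(phase_map_rad, pixel_area_um2, wavelength_um=0.532,
                alpha_um3_per_pg=0.2):
    """Total cellular dry mass (pg) from a quantitative-phase image,
    using the standard QPM relation m = (lambda / (2*pi*alpha)) * sum(phi) * A.
    alpha ~0.2 um^3/pg is a typical specific refractive increment."""
    total_phase = sum(sum(row) for row in phase_map_rad)
    return (wavelength_um * total_phase * pixel_area_um2
            / (2 * math.pi * alpha_um3_per_pg))

# Hypothetical 2x2 phase image (radians) with 0.25 um^2 pixels
phase = [[1.0, 2.0], [0.5, 0.5]]
mass = dry_mass_pg(phase, pixel_area_um2=0.25)
print(round(mass, 3), "pg")
```

In the cytometer described above, such per-cell scalars are what feed the t-SNE embedding and MANOVA classifier alongside the amplitude-derived features.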
Repurposing a Benchtop Centrifuge for High-Throughput Single-Molecule Force Spectroscopy.
Yang, Darren; Wong, Wesley P
2018-01-01
We present high-throughput single-molecule manipulation using a benchtop centrifuge, overcoming limitations common in other single-molecule approaches such as high cost, low throughput, technical difficulty, and strict infrastructure requirements. An inexpensive and compact Centrifuge Force Microscope (CFM) adapted to a commercial centrifuge enables use by nonspecialists, and integration with DNA nanoswitches facilitates both reliable measurements and repeated molecular interrogation. Here, we provide detailed protocols for constructing the CFM, creating DNA nanoswitch samples, and carrying out single-molecule force measurements.
High throughput single cell counting in droplet-based microfluidics.
Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie
2017-05-02
Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count a large number of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at high throughput, which we used to characterize cell encapsulation and cell viability during incubation in droplets.
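Cell encapsulation in droplets, as characterized here, is commonly modeled as a Poisson process: at an average loading of λ cells per droplet, P(k) = λ^k e^(-λ) / k!. A sketch of that statistic (the loading value is illustrative, not from the paper):

```python
import math

def poisson_occupancy(lam, k):
    """Probability that a droplet contains exactly k cells when cells
    arrive randomly at an average of lam cells per droplet (Poisson)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# At dilute loading (lam = 0.1), occupied droplets are almost all singles
lam = 0.1
p_empty = poisson_occupancy(lam, 0)
p_single = poisson_occupancy(lam, 1)
p_multi = 1.0 - p_empty - p_single
print(round(p_single, 4))  # 0.0905
```

This is why accurate counting upstream matters: λ is set from the measured cell concentration, and an error in the count shifts the whole occupancy distribution.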
2016-12-01
AWARD NUMBER: W81XWH-13-1-0371
TITLE: High-Throughput Sequencing of Germline and Tumor From Men with Early-Onset Metastatic Prostate Cancer
DATES COVERED: 30 Sep 2013 - 29 Sep 2016
…presenting with metastatic prostate cancer at a young age (before age 60 years). Whole exome sequencing identified a panel of germline variants that have…
Pjevac, Petra; Meier, Dimitri V.; Markert, Stephanie; Hentschker, Christian; Schweder, Thomas; Becher, Dörte; Gruber-Vodicka, Harald R.; Richter, Michael; Bach, Wolfgang; Amann, Rudolf; Meyerdierks, Anke
2018-01-01
At hydrothermal vent sites, chimneys consisting of sulfides, sulfates, and oxides are formed upon contact of reduced hydrothermal fluids with oxygenated seawater. The walls and surfaces of these chimneys are an important habitat for vent-associated microorganisms. We used community proteogenomics to investigate and compare the composition, metabolic potential and relative in situ protein abundance of microbial communities colonizing two actively venting hydrothermal chimneys from the Manus Basin back-arc spreading center (Papua New Guinea). We identified overlaps in the in situ functional profiles of both chimneys, despite differences in microbial community composition and venting regime. Carbon fixation on both chimneys seems to have been primarily mediated through the reverse tricarboxylic acid cycle and fueled by sulfur-oxidation, while the abundant metabolic potential for hydrogen oxidation and carbon fixation via the Calvin–Benson–Bassham cycle was hardly utilized. Notably, the highly diverse microbial community colonizing the analyzed black smoker chimney had a highly redundant metabolic potential. In contrast, the considerably less diverse community colonizing the diffusely venting chimney displayed a higher metabolic versatility. An increased diversity on the phylogenetic level is thus not directly linked to an increased metabolic diversity in microbial communities that colonize hydrothermal chimneys. PMID:29696004
High-throughput sequencing methods to study neuronal RNA-protein interactions.
Ule, Jernej
2009-12-01
UV-cross-linking and RNase protection, combined with high-throughput sequencing, have provided global maps of RNA sites bound by individual proteins or ribosomes. Using a stringent purification protocol, UV-CLIP (UV-cross-linking and immunoprecipitation) was able to identify intronic and exonic sites bound by splicing regulators in mouse brain tissue. Ribosome profiling has been used to quantify ribosome density on budding yeast mRNAs under different environmental conditions. Post-transcriptional regulation in neurons requires high spatial and temporal precision, as is evident from the role of localized translational control in synaptic plasticity. It remains to be seen if the high-throughput methods can be applied quantitatively to study the dynamics of RNP (ribonucleoprotein) remodelling in specific neuronal populations during the neurodegenerative process. It is certain, however, that applications of new biochemical techniques followed by high-throughput sequencing will continue to provide important insights into the mechanisms of neuronal post-transcriptional regulation.
Draveling, C; Ren, L; Haney, P; Zeisse, D; Qoronfleh, M W
2001-07-01
The revolution in genomics and proteomics is having a profound impact on drug discovery. Today's protein scientist demands a faster, easier, more reliable way to purify proteins. A new high-capacity, high-throughput technology has been developed at Perbio Sciences for affinity protein purification. This technology utilizes selected chromatography media that are dehydrated to form uniform aggregates. The SwellGel aggregates instantly rehydrate upon addition of the protein sample, allowing purification and direct performance of multiple assays in a variety of formats. SwellGel technology has greater stability and is easier to handle than standard wet chromatography resins. The microplate format of this technology provides high-capacity, high-throughput features, recovering milligram quantities of protein suitable for high-throughput screening or biophysical/structural studies. Data are presented applying SwellGel technology to recombinant 6x His-tagged protein and glutathione-S-transferase (GST) fusion protein purification. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
The ToxCast Dashboard helps users examine high-throughput assay data to inform chemical safety decisions. To date, it has data on over 9,000 chemicals and information from more than 1,000 high-throughput assay endpoint components.
Yang, Wanneng; Guo, Zilong; Huang, Chenglong; Duan, Lingfeng; Chen, Guoxing; Jiang, Ni; Fang, Wei; Feng, Hui; Xie, Weibo; Lian, Xingming; Wang, Gongwei; Luo, Qingming; Zhang, Qifa; Liu, Qian; Xiong, Lizhong
2014-01-01
Even as the study of plant genomics rapidly develops through the use of high-throughput sequencing techniques, traditional plant phenotyping lags far behind. Here we develop a high-throughput rice phenotyping facility (HRPF) to monitor 13 traditional agronomic traits and 2 newly defined traits during the rice growth period. Using genome-wide association studies (GWAS) of the 15 traits, we identify 141 associated loci, 25 of which contain known genes such as the Green Revolution semi-dwarf gene, SD1. Based on a performance evaluation of the HRPF and GWAS results, we demonstrate that high-throughput phenotyping has the potential to replace traditional phenotyping techniques and can provide valuable gene identification information. The combination of the multifunctional phenotyping tools HRPF and GWAS provides deep insights into the genetic architecture of important traits. PMID:25295980
A high-quality annotated transcriptome of swine peripheral blood
USDA-ARS?s Scientific Manuscript database
Background: High throughput gene expression profiling assays of peripheral blood are widely used in biomedicine, as well as in animal genetics and physiology research. Accurate, comprehensive, and precise interpretation of such high throughput assays relies on well-characterized reference genomes an...
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered profiling of large compound series in early drug development. Fortunately, automation has revolutionized the area of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using the planar patch-clamp technology demonstrated only moderate throughput, a few second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch(®) 96 allows throughput to be substantially increased without compromising data quality. This chapter describes features of the innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
HIGH THROUGHPUT ASSESSMENTS OF CONVENTIONAL AND ALTERNATIVE COMPOUNDS
High throughput approaches for quantifying chemical hazard, exposure, and sustainability have the potential to dramatically impact the pace and nature of risk assessments. Integrated evaluation strategies developed at the US EPA incorporate inherency,bioactivity,bioavailability, ...
GiNA, an efficient and high-throughput software for horticultural phenotyping
USDA-ARS?s Scientific Manuscript database
Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed,...
High-throughput quantification of hydroxyproline for determination of collagen.
Hofman, Kathleen; Hall, Bronwyn; Cleaver, Helen; Marshall, Susan
2011-10-15
An accurate and high-throughput assay for collagen is essential for collagen research and development of collagen products. Hydroxyproline is routinely assayed to provide a measurement for collagen quantification. The time required for sample preparation using acid hydrolysis and neutralization prior to assay is what limits the current method for determining hydroxyproline. This work describes the conditions of alkali hydrolysis that, when combined with the colorimetric assay defined by Woessner, provide a high-throughput, accurate method for the measurement of hydroxyproline. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Andrew J.; Capo, Rosemary C.; Stewart, Brian W.
2016-09-22
This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method, as well as best practices for optimizing Sr isotope analysis by MC-ICP-MS, are presented. Lastly, this report offers tools for data handling and reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, helping to avoid the data glut associated with high-throughput rapid analysis.
A Memory Efficient Network Encryption Scheme
NASA Astrophysics Data System (ADS)
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we study two encryption schemes widely used in network applications. Shortcomings were found in both schemes: each either consumes more memory to gain high throughput, or uses little memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
HTP-NLP: A New NLP System for High Throughput Phenotyping.
Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L
2017-01-01
Secondary use of clinical data for research requires a method to quickly process the data so that researchers can extract cohorts. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high-throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially-processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.
NASA Astrophysics Data System (ADS)
Kudoh, Eisuke; Ito, Haruki; Wang, Zhisen; Adachi, Fumiyuki
In mobile communication systems, high-speed packet data services are in demand. In high-speed data transmission, throughput degrades severely due to inter-path interference (IPI). Recently, we proposed a random transmit power control (TPC) to increase the uplink throughput of DS-CDMA packet mobile communications. In this paper, we apply IPI cancellation in addition to the random TPC. We derive the numerical expression for the received signal-to-interference plus noise power ratio (SINR) and introduce an IPI cancellation factor. We also derive the numerical expression for system throughput when IPI is cancelled ideally, to compare with the Monte Carlo numerically evaluated system throughput. Then we evaluate, by the Monte Carlo numerical computation method, the combined effect of random TPC and IPI cancellation on the uplink throughput of DS-CDMA packet mobile communications.
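The effect of an IPI cancellation factor on received SINR can be illustrated with a simplified model: scale the residual interference power by a factor β between 1 (no cancellation) and 0 (ideal cancellation). This is a hedged sketch of the idea, not the paper's derived DS-CDMA expression:

```python
def sinr_with_ipi_cancellation(signal_power, ipi_power, noise_power, beta):
    """Illustrative received SINR when a fraction (1 - beta) of the
    inter-path interference (IPI) power is cancelled. beta = 1 means
    no cancellation; beta = 0 means ideal cancellation. Simplified
    model, not the paper's exact DS-CDMA derivation."""
    return signal_power / (beta * ipi_power + noise_power)

# Hypothetical linear-scale powers: ideal cancellation doubles SINR here
no_cancel = sinr_with_ipi_cancellation(1.0, 0.5, 0.5, beta=1.0)
ideal = sinr_with_ipi_cancellation(1.0, 0.5, 0.5, beta=0.0)
print(no_cancel, ideal)  # 1.0 2.0
```

The toy model shows why the ideal-cancellation case serves as the upper bound against which the Monte Carlo throughput results are compared.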
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
Solar fuels photoanode materials discovery by integrating high-throughput theory and experiment
Yan, Qimin; Yu, Jie; Suram, Santosh K.; ...
2017-03-06
The limited number of known low-band-gap photoelectrocatalytic materials poses a significant challenge for the generation of chemical fuels from sunlight. Here, using high-throughput ab initio theory with experiments in an integrated workflow, we find eight ternary vanadate oxide photoanodes in the target band-gap range (1.2-2.8 eV). Detailed analysis of these vanadate compounds reveals the key role of VO4 structural motifs and electronic band-edge character in efficient photoanodes, initiating a genome for such materials and paving the way for a broadly applicable high-throughput-discovery and materials-by-design feedback loop. Considerably expanding the number of known photoelectrocatalysts for water oxidation, our study establishes ternary metal vanadates as a prolific class of photoanode materials for generation of chemical fuels from sunlight and demonstrates our high-throughput theory-experiment pipeline as a prolific approach to materials discovery.
Microfluidics for cell-based high throughput screening platforms - A review.
Du, Guansheng; Fang, Qun; den Toonder, Jaap M J
2016-01-15
In the last decades, the basic techniques of microfluidics for the study of cells such as cell culture, cell separation, and cell lysis, have been well developed. Based on cell handling techniques, microfluidics has been widely applied in the field of PCR (Polymerase Chain Reaction), immunoassays, organ-on-chip, stem cell research, and analysis and identification of circulating tumor cells. As a major step in drug discovery, high-throughput screening allows rapid analysis of thousands of chemical, biochemical, genetic or pharmacological tests in parallel. In this review, we summarize the application of microfluidics in cell-based high throughput screening. The screening methods mentioned in this paper include approaches using the perfusion flow mode, the droplet mode, and the microarray mode. We also discuss the future development of microfluidic based high throughput screening platform for drug discovery. Copyright © 2015 Elsevier B.V. All rights reserved.
Development and Validation of an Automated High-Throughput System for Zebrafish In Vivo Screenings
Virto, Juan M.; Holgado, Olaia; Diez, Maria; Izpisua Belmonte, Juan Carlos; Callol-Massot, Carles
2012-01-01
The zebrafish is a vertebrate model compatible with the paradigms of drug discovery. The small size and transparency of zebrafish embryos make them amenable to the automation necessary for high-throughput screenings. We have developed an automated high-throughput platform for in vivo chemical screenings on zebrafish embryos that includes automated methods for embryo dispensation, compound delivery, incubation, imaging and analysis of the results. At present, two different assays, to detect cardiotoxic compounds and angiogenesis inhibitors, can be run automatically on the platform, showing the versatility of the system. A validation of these two assays with known positive and negative compounds, as well as a screening for the detection of unknown anti-angiogenic compounds, has been successfully carried out in the system developed. We present a fully automated platform that allows for high-throughput screenings in a vertebrate organism. PMID:22615792
BiQ Analyzer HT: locus-specific analysis of DNA methylation by high-throughput bisulfite sequencing
Lutsik, Pavlo; Feuerbach, Lars; Arand, Julia; Lengauer, Thomas; Walter, Jörn; Bock, Christoph
2011-01-01
Bisulfite sequencing is a widely used method for measuring DNA methylation in eukaryotic genomes. The assay provides single-base pair resolution and, given sufficient sequencing depth, its quantitative accuracy is excellent. High-throughput sequencing of bisulfite-converted DNA can be applied either genome wide or targeted to a defined set of genomic loci (e.g. using locus-specific PCR primers or DNA capture probes). Here, we describe BiQ Analyzer HT (http://biq-analyzer-ht.bioinf.mpi-inf.mpg.de/), a user-friendly software tool that supports locus-specific analysis and visualization of high-throughput bisulfite sequencing data. The software facilitates the shift from time-consuming clonal bisulfite sequencing to the more quantitative and cost-efficient use of high-throughput sequencing for studying locus-specific DNA methylation patterns. In addition, it is useful for locus-specific visualization of genome-wide bisulfite sequencing data. PMID:21565797
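The core computation behind locus-specific bisulfite analysis, counting unconverted (C, methylated) versus converted (T, unmethylated) bases at each CpG, can be sketched as follows. This is a toy illustration, not BiQ Analyzer HT's implementation, and it assumes error-free reads already aligned to position 0 of the reference:

```python
def methylation_levels(reference, reads):
    """At each CpG in the reference, count reads showing C (protected
    from bisulfite conversion = methylated) vs T (converted =
    unmethylated) and report the methylation fraction per position."""
    cpg_positions = [i for i in range(len(reference) - 1)
                     if reference[i:i+2] == "CG"]
    levels = {}
    for pos in cpg_positions:
        meth = sum(1 for r in reads if len(r) > pos and r[pos] == "C")
        unmeth = sum(1 for r in reads if len(r) > pos and r[pos] == "T")
        if meth + unmeth:
            levels[pos] = meth / (meth + unmeth)
    return levels

# Tiny illustrative locus with CpGs at positions 1 and 5.
ref = "ACGTTCGA"
reads = ["ATGTTCGA",   # CpG at 1 converted, CpG at 5 methylated
         "ACGTTTGA",   # CpG at 1 methylated, CpG at 5 converted
         "ACGTTCGA"]   # both methylated
```

Given sufficient sequencing depth, the per-position fractions are the quantitative methylation levels the abstract refers to.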
Yeow, Jonathan; Joshi, Sanket; Chapman, Robert; Boyer, Cyrille Andre Jean Marie
2018-04-25
Translating controlled/living radical polymerization (CLRP) from batch to the high throughput production of polymer libraries presents several challenges in terms of both polymer synthesis and characterization. Although recently there have been significant advances in the field of low volume, high throughput CLRP, techniques able to simultaneously monitor multiple polymerizations in an "online" manner have not yet been developed. Here, we report our discovery that 5,10,15,20-tetraphenyl-21H,23H-porphine zinc (ZnTPP) is a self-reporting photocatalyst that can mediate PET-RAFT polymerization as well as report on monomer conversion via changes in its fluorescence properties. This enables the use of a microplate reader to conduct high throughput "online" monitoring of PET-RAFT polymerizations performed directly in 384-well, low volume microtiter plates. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.; ...
2017-03-28
The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.
High Throughput Genotoxicity Profiling of the US EPA ToxCast Chemical Library
A key aim of the ToxCast project is to investigate modern molecular and genetic high content and high throughput screening (HTS) assays, along with various computational tools to supplement and perhaps replace traditional assays for evaluating chemical toxicity. Genotoxicity is a...
Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering
Rabitz, Herschel; Welsh, William J.; Kohn, Joachim; de Boer, Jan
2016-01-01
The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. PMID:26876875
Besaratinia, Ahmad; Li, Haiqing; Yoon, Jae-In; Zheng, Albert; Gao, Hanlin; Tommasi, Stella
2012-01-01
Many carcinogens leave a unique mutational fingerprint in the human genome. These mutational fingerprints manifest as specific types of mutations often clustering at certain genomic loci in tumor genomes from carcinogen-exposed individuals. To develop a high-throughput method for detecting the mutational fingerprint of carcinogens, we have devised a cost-, time- and labor-effective strategy, in which the widely used transgenic Big Blue® mouse mutation detection assay is made compatible with the Roche/454 Genome Sequencer FLX Titanium next-generation sequencing technology. As proof of principle, we have used this novel method to establish the mutational fingerprints of three prominent carcinogens with varying mutagenic potencies, including sunlight ultraviolet radiation, 4-aminobiphenyl and secondhand smoke that are known to be strong, moderate and weak mutagens, respectively. For verification purposes, we have compared the mutational fingerprints of these carcinogens obtained by our newly developed method with those obtained by parallel analyses using the conventional low-throughput approach, that is, standard mutation detection assay followed by direct DNA sequencing using a capillary DNA sequencer. We demonstrate that this high-throughput next-generation sequencing-based method is highly specific and sensitive to detect the mutational fingerprints of the tested carcinogens. The method is reproducible, and its accuracy is comparable with that of the currently available low-throughput method. In conclusion, this novel method has the potential to move the field of carcinogenesis forward by allowing high-throughput analysis of mutations induced by endogenous and/or exogenous genotoxic agents. PMID:22735701
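The notion of a mutational fingerprint can be made concrete with a small tally of substitution calls into the six pyrimidine-centred classes (C>A, C>G, C>T, T>A, T>C, T>G) conventionally used to compare mutation spectra. This is an illustrative convention from the signature literature, not the authors' Big Blue®/454 pipeline:

```python
from collections import Counter

def mutation_spectrum(mutations):
    """Collapse substitution calls (ref, alt) into the six
    pyrimidine-centred classes by complementing purine references."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    spectrum = Counter()
    for ref, alt in mutations:
        if ref in "GA":  # re-express purine refs by their complement
            ref, alt = comp[ref], comp[alt]
        spectrum[f"{ref}>{alt}"] += 1
    return spectrum

# UV-type damage is dominated by C>T transitions; toy calls only.
calls = [("C", "T"), ("G", "A"), ("C", "T"), ("T", "A")]
spec = mutation_spectrum(calls)
```

Comparing such spectra between exposed and control genomes is what lets a strong mutagen's fingerprint stand out.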
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
2016-01-01
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interactions, they require considerable manual intervention for cleaning and setup. As such, the throughput of such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample-volume requirements and reduced user intervention time compared to the manual instruments have improved the turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort required to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher throughput of data, which has been leveraged into grant support, attracted new faculty hires, and led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.
I describe research on high throughput exposure and toxicokinetics. These tools provide context for data generated by high throughput toxicity screening to allow risk-based prioritization of thousands of chemicals.
MIPHENO: Data normalization for high throughput metabolic analysis.
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
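One simple remedy for the batch effects that arise when large-scale plate experiments run over many days is to divide each well by its plate median, making values comparable across batches. The sketch below illustrates that idea only; it is not MIPHENO's actual normalization algorithm, and the plate/well names are hypothetical:

```python
import statistics

def median_normalize(plates):
    """Scale each plate's wells by the plate median so measurements
    from different batches share a common scale (toy illustration)."""
    normalized = {}
    for plate_id, wells in plates.items():
        med = statistics.median(wells.values())
        normalized[plate_id] = {w: v / med for w, v in wells.items()}
    return normalized

# Two plates with a 10x batch effect but identical relative patterns.
plates = {"day1": {"A1": 10.0, "A2": 20.0, "A3": 30.0},
          "day2": {"A1": 100.0, "A2": 200.0, "A3": 300.0}}
norm = median_normalize(plates)
```

After normalization the two batches agree exactly, since only the relative well-to-well pattern carries biological signal in this toy example.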
High-Throughput Pharmacokinetics for Environmental Chemicals (SOT)
High throughput screening (HTS) promises to allow prioritization of thousands of environmental chemicals with little or no in vivo information. For bioactivity identified by HTS, toxicokinetic (TK) models are essential to predict exposure thresholds below which no significant bio...
Gore, Brooklin
2018-02-01
This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.
Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas
2017-01-01
Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high throughput imaging of all commonly determined PSII parameters for small and large plants using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high throughput amenable protocols recording PSII operating efficiency (ΦPSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source that caused small differences in incident light intensity. Values of ΦPSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency.
The incorporation of imaging systems suitable for kinetic chlorophyll fluorescence analysis leads to a substantial extension of the feature spectrum to be assessed in the presented high throughput automated plant phenotyping platforms, thus enabling the simultaneous assessment of plant architectural and biomass-related traits and their relations to physiological features such as PSII operating efficiency. The implemented high throughput protocols are applicable to a broad spectrum of model and crop plants of different sizes (up to 1.80 m height) and architectures. The deeper understanding of the relation of plant architecture, biomass formation and photosynthetic efficiency has a great potential with respect to crop and yield improvement strategies.
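The PSII parameters discussed above have standard definitions in kinetic chlorophyll fluorescence analysis: maximum PSII efficiency Fv/Fm = (Fm − F0)/Fm from a dark-adapted measurement, and operating efficiency ΦPSII = (Fm′ − Fs)/Fm′ in the light. A minimal sketch; the plant names and fluorescence values below are illustrative only, not data from the platform:

```python
def fv_fm(f0, fm):
    """Maximum PSII efficiency from a dark-adapted measurement:
    (Fm - F0) / Fm."""
    return (fm - f0) / fm

def phi_psii(fs, fm_prime):
    """PSII operating efficiency in the light (Genty parameter):
    (Fm' - Fs) / Fm'."""
    return (fm_prime - fs) / fm_prime

# Per-plant (Fs, Fm') pairs as a phenotyping platform might report
# them, in arbitrary fluorescence units (illustrative values).
plants = {"arabidopsis_01": (350.0, 1500.0), "maize_07": (500.0, 1250.0)}
efficiency = {name: phi_psii(fs, fmp) for name, (fs, fmp) in plants.items()}
```

Applying these per-pixel or per-plant formulas to the imaging output is what turns raw fluorescence frames into the screenable ΦPSII trait.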
USDA-ARS?s Scientific Manuscript database
Field-based high-throughput phenotyping is an emerging approach to characterize difficult, time-sensitive plant traits in relevant growing conditions. Proximal sensing carts have been developed as an alternative platform to more costly high-clearance tractors for phenotyping dynamic traits in the fi...
High-throughput profiling and analysis of plant responses over time to abiotic stress
USDA-ARS?s Scientific Manuscript database
Energy sorghum (Sorghum bicolor (L.) Moench) is a rapidly growing, high-biomass, annual crop prized for abiotic stress tolerance. Measuring genotype-by-environment (G x E) interactions remains a progress bottleneck. High throughput phenotyping within controlled environments has been proposed as a po...
ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMOs) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon-specific endogenous reference genes, GMO constructs, screening targets, construct-specific and event-specific targets, and finally donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
RIPiT-Seq: A high-throughput approach for footprinting RNA:protein complexes
Singh, Guramrit; Ricci, Emiliano P.; Moore, Melissa J.
2013-01-01
Development of high-throughput approaches to map the RNA interaction sites of individual RNA binding proteins (RBPs) transcriptome-wide is rapidly transforming our understanding of post-transcriptional gene regulatory mechanisms. Here we describe a ribonucleoprotein (RNP) footprinting approach we recently developed for identifying occupancy sites of both individual RBPs and multi-subunit RNP complexes. RNA:protein immunoprecipitation in tandem (RIPiT) yields highly specific RNA footprints of cellular RNPs isolated via two sequential purifications; the resulting RNA footprints can then be identified by high-throughput sequencing (Seq). RIPiT-Seq is broadly applicable to all RBPs regardless of their RNA binding mode and thus provides a means to map the RNA binding sites of RBPs with poor inherent ultraviolet (UV) crosslinkability. Further, among current high-throughput approaches, RIPiT has the unique capacity to differentiate binding sites of RNPs with overlapping protein composition. It is therefore particularly suited for studying dynamic RNP assemblages whose composition evolves as gene expression proceeds. PMID:24096052
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
2015-01-05
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high-throughput method to simultaneously detect 48 targets in 48 samples on a Fluidigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for 48 pooled targets was added to enrich the amount of template before performing the dynamic chip assays. This dynamic chip-based method allowed the synchronous high-throughput detection of multiple targets in multiple samples. Thus, it represents an efficient, qualitative method for GMO multi-detection.
Proteogenomic Approaches for the Molecular Characterization of Natural Microbial Communities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banfield, Jillian F.
2014-04-25
Microbial biofilms involved in acid mine drainage formation have served as model systems for the study of microbial communities. Over the grant period we developed community metagenomic methods for the recovery of genomes of uncultivated bacteria, archaea, viruses/phage, and plasmids from natural systems. We leveraged highly curated metagenomic datasets to develop methods to monitor microbial function in situ. Beyond new insight into extremophilic microbial ecosystems, we have shown that our strain-resolved proteogenomic methods can be applied to other systems. Our studies have uncovered new patterns of inter-species recombination that likely lead to fine-scale environmental adaptation, defined the importance of inter-species vs. intra-species recombination in archaea, and evaluated the processes shaping fine-scale sequence variation. The project was the subject of study for six Ph.D. students, two of whom are now Associate Professors and the others post docs; for M.S. and undergraduate researchers; and for thirteen post docs, ten of whom are now Assistant or Associate Professors. The research generated 53 publications, five of which appeared in Science or Nature.
Sanchez-Lucas, Rosa; Mehta, Angela; Valledor, Luis; Cabello-Hurtado, Francisco; Romero-Rodríguez, M. Cristina; Simova-Stoilova, Lyudmila; Demir, Sekvan; Rodriguez-de-Francisco, Luis E; Maldonado-Alconada, Ana M; Jorrin-Prieto, Ana L; Jorrín-Novo, Jesus V
2016-03-01
The present review is an update of the previous one published in the Proteomics 2015 Reviews special issue [Jorrin-Novo, J. V. et al., Proteomics 2015, 15, 1089-1112], covering the July 2014-2015 period. It has been written on the basis of the publications that appeared in Proteomics during that period and the most relevant ones published in other high-impact journals. Methodological advances and the contribution of the field to the knowledge of plant biology processes, and its translation to the agroforestry and environmental sectors, are discussed. The review is organized in four blocks, starting with a general introduction (literature survey), followed by sections focusing on methodology (in vitro, in vivo, wet, and dry), proteomics integration with other approaches (systems biology and proteogenomics), and biological information and knowledge (cell communication, receptors, and signaling), ending with a brief mention of some other biological and translational topics to which proteomics has made a contribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A high-throughput label-free nanoparticle analyser.
Fraikin, Jean-Luc; Teesalu, Tambet; McKenney, Christopher M; Ruoslahti, Erkki; Cleland, Andrew N
2011-05-01
Synthetic nanoparticles and genetically modified viruses are used in a range of applications, but high-throughput analytical tools for the physical characterization of these objects are needed. Here we present a microfluidic analyser that detects individual nanoparticles and characterizes complex, unlabelled nanoparticle suspensions. We demonstrate the detection, concentration analysis and sizing of individual synthetic nanoparticles in a multicomponent mixture with sufficient throughput to analyse 500,000 particles per second. We also report the rapid size and titre analysis of unlabelled bacteriophage T7 in both salt solution and mouse blood plasma, using just ~1 × 10⁻⁶ l of analyte. Unexpectedly, in the native blood plasma we discover a large background of naturally occurring nanoparticles with a power-law size distribution. The high-throughput detection capability, scalable fabrication and simple electronics of this instrument make it well suited for diverse applications.
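The power-law size distribution reported for the naturally occurring plasma nanoparticles can be characterized with the standard maximum-likelihood exponent estimate for a continuous power law (the Clauset-style MLE, alpha = 1 + n / sum(ln(x/xmin))). The sketch below recovers a known exponent from synthetic particle sizes; it is not the instrument's analysis code, and the exponent and size cutoff are illustrative:

```python
import math
import random

def powerlaw_alpha(sizes, xmin):
    """Continuous power-law exponent by maximum likelihood over
    particles at least xmin in size: 1 + n / sum(ln(x/xmin))."""
    tail = [s for s in sizes if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)

# Draw synthetic particle diameters from a power law with alpha = 2.5
# via inverse-transform sampling, then recover the exponent.
rng = random.Random(0)
alpha_true, xmin = 2.5, 30.0  # nm; illustrative values
sizes = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_alpha(sizes, xmin)
```

With 20,000 particles the estimator's standard error is about 0.01, so the recovered exponent sits very close to the true value.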
High Performance Computing Modernization Program Kerberos Throughput Test Report
2017-10-26
functionality as Kerberos plugins. The pre-release production kit was used in these tests to compare against the current release kit. YubiKey support ... Throughput testing was done to determine the benefits of the pre- ... both the current release kit and the pre-release production kit, for a total of 378 individual tests, in order to note any improvements. Based on work
Metabolomics Approach for Toxicity Screening of Volatile Substances
In 2007 the National Research Council envisioned the need for inexpensive, high throughput, cell based toxicity testing methods relevant to human health. High Throughput Screening (HTS) in vitro screening approaches have addressed these problems by using robotics. However, the ch...
AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.
As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...
New High Throughput Methods to Estimate Chemical Exposure
EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing an...
Fully Bayesian Analysis of High-throughput Targeted Metabolomics Assays
High-throughput metabolomic assays that allow simultaneous targeted screening of hundreds of metabolites have recently become available in kit form. Such assays provide a window into understanding changes to biochemical pathways due to chemical exposure or disease, and are usefu...
Leulliot, Nicolas; Trésaugues, Lionel; Bremang, Michael; Sorel, Isabelle; Ulryck, Nathalie; Graille, Marc; Aboulfath, Ilham; Poupon, Anne; Liger, Dominique; Quevillon-Cheruel, Sophie; Janin, Joël; van Tilbeurgh, Herman
2005-06-01
Crystallization has long been regarded as one of the major bottlenecks in high-throughput structural determination by X-ray crystallography. Structural genomics projects have addressed this issue by using robots to set up automated crystal screens using nanodrop technology. This has moved the bottleneck from obtaining the first crystal hit to obtaining diffraction-quality crystals, as crystal optimization is a notoriously slow process that is difficult to automate. This article describes the high-throughput optimization strategies used in the Yeast Structural Genomics project, with selected successful examples.
Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime
NASA Astrophysics Data System (ADS)
Skilitsi, Anastasia Ioanna; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie
2017-09-01
Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting in high-throughput conditions. Droplet microfluidics is used as a promising platform for the very fast handling of low-volume samples. We illustrate the potential of this very sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at a throughput as high as several hundred samples per second.
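Fluorescence-lifetime discrimination of the kind described can be sketched with the textbook maximum-likelihood estimate for a mono-exponential decay, which is simply the sample mean of the photon arrival times. The lifetimes and photon counts below are illustrative; a real TCSPC analysis must also account for the instrument response function and the finite measurement window:

```python
import random

def fit_lifetime(arrival_times):
    """Mono-exponential lifetime MLE for TCSPC arrival times
    (relative to the excitation pulse): the sample mean."""
    return sum(arrival_times) / len(arrival_times)

def classify(arrival_times, tau_pos, tau_neg):
    """Assign a droplet to whichever control lifetime is closer."""
    tau = fit_lifetime(arrival_times)
    return "positive" if abs(tau - tau_pos) < abs(tau - tau_neg) else "negative"

# Simulate one droplet from each control population
# (lifetimes in ns; illustrative values only).
rng = random.Random(42)
pos = [rng.expovariate(1 / 0.8) for _ in range(5000)]   # e.g. cleaved substrate
neg = [rng.expovariate(1 / 2.4) for _ in range(5000)]   # e.g. intact substrate
```

With a few thousand photons per droplet, the lifetime estimates separate cleanly, which is what makes the positive/negative discrimination robust at high sample rates.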
High Throughput Determination of Critical Human Dosing ...
High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data into predicted human equivalent doses that can be linked with biologically relevant exposure scenarios. Thus, HTTK provides essential data for risk prioritization for thousands of chemicals that lack TK data. One critical HTTK parameter that can be measured in vitro is the unbound fraction of a chemical in plasma (Fub). However, for chemicals that bind strongly to plasma, Fub is below the limits of detection (LOD) for high throughput analytical chemistry, and therefore cannot be quantified. A novel method for quantifying Fub was implemented for 85 strategically selected chemicals: measurement of Fub was attempted at 10%, 30%, and 100% of physiological plasma concentrations using rapid equilibrium dialysis assays. Varying plasma concentrations instead of chemical concentrations makes high throughput analytical methodology more likely to be successful. Assays at 100% plasma concentration were unsuccessful for 34 chemicals. For 12 of these 34 chemicals, Fub could be quantified at 10% and/or 30% plasma concentrations; these results imply that the assay failure at 100% plasma concentration was caused by plasma protein binding for these chemicals. Assay failure for the remaining 22 chemicals may
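Measuring in diluted plasma requires converting the apparent Fub back to the undiluted value. A commonly used dilution correction in the HTTK literature (though not necessarily the exact procedure of this study) is fu = (1/D) / ((1/fu,app - 1) + 1/D), where D is the dilution factor (D = 10 for 10% plasma):

```python
def undiluted_fub(fub_apparent, dilution_factor):
    """Back-calculate the fraction unbound in undiluted plasma from a
    measurement in diluted plasma (standard dilution correction; the
    numbers here are illustrative, not from the study)."""
    d = dilution_factor
    return (1.0 / d) / ((1.0 / fub_apparent - 1.0) + 1.0 / d)

# A chemical whose apparent Fub is 0.10 when assayed in 10% plasma
print(round(undiluted_fub(0.10, 10), 4))  # ~0.011: a strong binder, easily below LOD at 100% plasma
```

The correction shows why dilution helps: a chemical too strongly bound to quantify at full plasma strength gives a roughly tenfold larger, measurable apparent Fub at 10% plasma.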
Genome-wide RNAi Screening to Identify Host Factors That Modulate Oncolytic Virus Therapy.
Allan, Kristina J; Mahoney, Douglas J; Baird, Stephen D; Lefebvre, Charles A; Stojdl, David F
2018-04-03
High-throughput genome-wide RNAi (RNA interference) screening technology has been widely used for discovering host factors that impact virus replication. Here we present the application of this technology to uncovering host targets that specifically modulate the replication of Maraba virus, an oncolytic rhabdovirus, and vaccinia virus with the goal of enhancing therapy. While the protocol has been tested for use with oncolytic Maraba virus and oncolytic vaccinia virus, this approach is applicable to other oncolytic viruses and can also be utilized for identifying host targets that modulate virus replication in mammalian cells in general. This protocol describes the development and validation of an assay for high-throughput RNAi screening in mammalian cells, the key considerations and preparation steps important for conducting a primary high-throughput RNAi screen, and a step-by-step guide for conducting a primary high-throughput RNAi screen; in addition, it broadly outlines the methods for conducting secondary screen validation and tertiary validation studies. The benefit of high-throughput RNAi screening is that it allows one to catalogue, in an extensive and unbiased fashion, host factors that modulate any aspect of virus replication for which one can develop an in vitro assay such as infectivity, burst size, and cytotoxicity. It has the power to uncover biotherapeutic targets unforeseen based on current knowledge.
Schieferstein, Jeremy M.; Pawate, Ashtamurthy S.; Wan, Frank; Sheraden, Paige N.; Broecker, Jana; Ernst, Oliver P.; Gennis, Robert B.
2017-01-01
Elucidating and clarifying the function of membrane proteins ultimately requires atomic resolution structures as determined most commonly by X-ray crystallography. Many high impact membrane protein structures have resulted from advanced techniques such as in meso crystallization that present technical difficulties for the set-up and scale-out of high-throughput crystallization experiments. In prior work, we designed a novel, low-throughput X-ray transparent microfluidic device that automated the mixing of protein and lipid by diffusion for in meso crystallization trials. Here, we report X-ray transparent microfluidic devices for high-throughput crystallization screening and optimization that overcome the limitations of scale and demonstrate their application to the crystallization of several membrane proteins. Two complementary chips are presented: (1) a high-throughput screening chip to test 192 crystallization conditions in parallel using as little as 8 nl of membrane protein per well and (2) a crystallization optimization chip to rapidly optimize preliminary crystallization hits through fine-gradient re-screening. We screened three membrane proteins for new in meso crystallization conditions, identifying several preliminary hits that we tested for X-ray diffraction quality. Further, we identified and optimized the crystallization condition for a photosynthetic reaction center mutant and solved its structure to a resolution of 3.5 Å. PMID:28469762
High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots.
Liu, Guangbo; Lanham, Clayton; Buchan, J Ross; Kaplan, Matthew E
2017-01-01
Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful, long-established method. However, a reliable, optimized high-throughput transformation protocol that minimizes the risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most high-throughput liquid-handling robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high-throughput molecular and/or genetic analysis of yeast.
High-Throughput Toxicity Testing: New Strategies for ...
In recent years, the food industry has made progress in improving safety testing methods focused on microbial contaminants in order to promote food safety. However, food industry toxicologists must also assess the safety of food-relevant chemicals including pesticides, direct additives, and food contact substances. With the rapidly growing use of new food additives, as well as innovation in food contact substance development, an interest in exploring the use of high-throughput chemical safety testing approaches has emerged. Currently, the field of toxicology is undergoing a paradigm shift in how chemical hazards can be evaluated. Since tens of thousands of chemicals are in use, many with little to no hazard information, and resources (namely time and money) for testing them are limited, it is necessary to prioritize which chemicals require further safety testing to better protect human health. Advances in biochemistry and computational toxicology have paved the way for animal-free (in vitro) high-throughput screening which can characterize chemical interactions with highly specific biological processes. Screening approaches are not novel; in fact, quantitative high-throughput screening (qHTS) methods that incorporate dose-response evaluation have been widely used in the pharmaceutical industry. For toxicological evaluation and prioritization, it is the throughput as well as the cost- and time-efficient nature of qHTS that makes it
Kariithi, Henry M.; Cousserans, François; Parker, Nicolas J.; İnce, İkbal Agah; Scully, Erin D.; Boeren, Sjef; Geib, Scott M.; Mekonnen, Solomon; Vlak, Just M.; Parker, Andrew G.; Vreysen, Marc J. B.; Bergoin, Max
2016-01-01
Glossina pallidipes salivary gland hypertrophy virus (GpSGHV; family Hytrosaviridae) can establish asymptomatic and symptomatic infection in its tsetse fly host. Here, we present a comprehensive annotation of the genome of an Ethiopian GpSGHV isolate (GpSGHV-Eth) compared with the reference Ugandan GpSGHV isolate (GpSGHV-Uga; GenBank accession number EF568108). GpSGHV-Eth has higher salivary gland hypertrophy syndrome prevalence than GpSGHV-Uga. We show that the GpSGHV-Eth genome has 190 291 nt, a low G+C content (27.9 %) and encodes 174 putative ORFs. Using proteogenomic and transcriptome mapping, 141 and 86 ORFs were mapped by transcripts and peptides, respectively. Furthermore, of the 174 ORFs, 132 had putative transcriptional signals [TATA-like box and poly(A) signals]. Sixty ORFs had both TATA-like box promoter and poly(A) signals, and mapped by both transcripts and peptides, implying that these ORFs encode functional proteins. Of the 60 ORFs, 10 ORFs are homologues to baculovirus and nudivirus core genes, including three per os infectivity factors and four RNA polymerase subunits (LEF4, 5, 8 and 9). Whereas GpSGHV-Eth and GpSGHV-Uga are 98.1 % similar at the nucleotide level, 37 ORFs in the GpSGHV-Eth genome had nucleotide insertions (n = 17) and deletions (n = 20) compared with their homologues in GpSGHV-Uga. Furthermore, compared with the GpSGHV-Uga genome, 11 and 24 GpSGHV ORFs were deleted and novel, respectively. Further, 13 GpSGHV-Eth ORFs were non-canonical; they had either CTG or TTG start codons instead of ATG. Taken together, these data suggest that GpSGHV-Eth and GpSGHV-Uga represent two different lineages of the same virus. Genetic differences combined with host and environmental factors possibly explain the differential GpSGHV pathogenesis observed in different G. pallidipes colonies. PMID:26801744
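Two of the reported genome statistics, G+C content and start-codon class, are straightforward to compute from sequence. A minimal sketch (the toy sequence is illustrative, not GpSGHV data):

```python
def gc_content(seq):
    """Percent G+C of a nucleotide sequence (case-insensitive)."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

def start_codon_class(orf_seq):
    """Classify an ORF's start codon as the abstract does: ATG is canonical;
    CTG and TTG are the non-canonical alternatives reported for GpSGHV-Eth."""
    start = orf_seq[:3].upper()
    if start == "ATG":
        return "canonical"
    if start in ("CTG", "TTG"):
        return "non-canonical"
    return "other"

print(gc_content("ATGAAATTTGCA"))        # 25.0
print(start_codon_class("TTGAAACCG"))    # non-canonical
```

Applied over all 174 predicted ORFs, the same scan reproduces the kind of tally behind the "13 non-canonical ORFs" figure.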
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamiya, Mari; Discovery Technology Laboratories, Sohyaku, Innovative Research Division, Mitsubishi Tanabe Pharma Corporation, Kawagishi, Toda-shi, Saitama; Sakurai, Masaaki
A high-throughput RapidFire mass spectrometry assay is described for elongation of very long-chain fatty acids family 6 (Elovl6). Elovl6 is a microsomal enzyme that regulates the elongation of C12-16 saturated and monounsaturated fatty acids, and it may be a new therapeutic target for fat metabolism disorders such as obesity, type 2 diabetes, and nonalcoholic steatohepatitis. To identify new Elovl6 inhibitors, we developed a high-throughput fluorescence screening assay in 1536-well format; however, a number of false positives caused by fluorescence interference were identified. To distinguish genuinely active compounds among the primary hits from the fluorescence assay, we developed a RapidFire mass spectrometry assay and a conventional radioisotope assay. These assays have the advantage of detecting the main products directly without using fluorescently labeled substrates. As a result, 276 compounds (30%) of the primary hits (921 compounds) from the fluorescence ultra-high-throughput screen were identified as active in both assays, showing that both methods are very effective at eliminating false positives. Compared with the radioisotope method, which uses an expensive ¹⁴C-labeled substrate, the RapidFire mass spectrometry method using unlabeled substrates is a high-accuracy, high-throughput method. In addition, some of the hit compounds selected from the screening inhibited cellular fatty acid elongation in HEK293 cells transiently expressing Elovl6, suggesting that these compounds may be promising lead candidates for therapeutic drugs. Ultra-high-throughput fluorescence screening followed by a RapidFire mass spectrometry assay was a suitable strategy for lead discovery against Elovl6. Highlights: • A novel assay for elongation of very-long-chain fatty acids 6 (Elovl6) is proposed. • The RapidFire mass spectrometry (RF-MS) assay is useful for selecting real screening hits. • The RF-MS assay is beneficial because of its high throughput and accuracy. • A combination of fluorescence and RF-MS assays is effective for finding Elovl6 inhibitors.
Little is known about the developmental toxicity of the expansive chemical landscape in existence today. Significant efforts are being made to apply novel methods to predict developmental activity of chemicals utilizing high-throughput screening (HTS) and high-content screening (...
High-throughput assays that can quantify chemical-induced changes at the cellular and molecular level have been recommended for use in chemical safety assessment. High-throughput, high content imaging assays for the key cellular events of neurodevelopment have been proposed to ra...
Evaluation of sequencing approaches for high-throughput toxicogenomics (SOT)
Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. We present the evaluation of three toxicogenomics platfo...
High Throughput Assays and Exposure Science (ISES annual meeting)
High throughput screening (HTS) data characterizing chemical-induced biological activity has been generated for thousands of environmentally-relevant chemicals by the US inter-agency Tox21 and the US EPA ToxCast programs. For a limited set of chemicals, bioactive concentrations r...
High Throughput Exposure Estimation Using NHANES Data (SOT)
In the ExpoCast project, high throughput (HT) exposure models enable rapid screening of large numbers of chemicals for exposure potential. Evaluation of these models requires empirical exposure data and due to the paucity of human metabolism/exposure data such evaluations includ...
Atlanta I-85 HOV-to-HOT conversion : analysis of vehicle and person throughput.
DOT National Transportation Integrated Search
2013-10-01
This report summarizes the vehicle and person throughput analysis for the High Occupancy Vehicle to High Occupancy Toll lane conversion in Atlanta, GA, undertaken by the Georgia Institute of Technology research team. The team tracked changes in o...
Embryonic vascular disruption is an important adverse outcome pathway (AOP) given the knowledge that chemical disruption of early cardiovascular system development leads to broad prenatal defects. High throughput screening (HTS) assays provide potential building blocks for AOP d...
Accounting For Uncertainty in The Application Of High Throughput Datasets
The use of high throughput screening (HTS) datasets will need to adequately account for uncertainties in the data generation process and propagate these uncertainties through to ultimate use. Uncertainty arises at multiple levels in the construction of predictors using in vitro ...
Jordan, Scott
2018-01-24
Scott Jordan on "Advances in high-throughput speed, low-latency communication for embedded instrumentation" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.
Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...
We incorporate inter-individual variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which hav...
HIGH-THROUGHPUT IDENTIFICATION OF CATALYTIC REDOX-ACTIVE CYSTEINE RESIDUES
Cysteine (Cys) residues often play critical roles in proteins; however, identification of their specific functions has been limited to case-by-case experimental approaches. We developed a procedure for high-throughput identification of catalytic redox-active Cys in proteins by se...
Development of a thyroperoxidase inhibition assay for high-throughput screening
High-throughput screening (HTPS) assays to detect inhibitors of thyroperoxidase (TPO), the enzymatic catalyst for thyroid hormone (TH) synthesis, are not currently available. Herein we describe the development of a HTPS TPO inhibition assay. Rat thyroid microsomes and a fluores...
High-throughput screening, predictive modeling and computational embryology - Abstract
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
Evaluating and Refining High Throughput Tools for Toxicokinetics
This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...
Picking Cell Lines for High-Throughput Transcriptomic Toxicity Screening (SOT)
High throughput, whole genome transcriptomic profiling is a promising approach to comprehensively evaluate chemicals for potential biological effects. To be useful for in vitro toxicity screening, gene expression must be quantified in a set of representative cell types that captu...
Streamlined approaches that use in vitro experimental data to predict chemical toxicokinetics (TK) are increasingly being used to perform risk-based prioritization based upon dosimetric adjustment of high-throughput screening (HTS) data across thousands of chemicals. However, ass...
A rapid enzymatic assay for high-throughput screening of adenosine-producing strains
Dong, Huina; Zu, Xin; Zheng, Ping; Zhang, Dawei
2015-01-01
Adenosine is a major local regulator of tissue function and is industrially useful as a precursor for the production of medicinal nucleoside substances. High-throughput screening of adenosine overproducers is important for industrial microorganism breeding. An enzymatic assay for adenosine was developed by combining adenosine deaminase (ADA) with the indophenol method. ADA catalyzes the cleavage of adenosine to inosine and NH3; the latter can be accurately determined by the indophenol method. The assay system was optimized to deliver good performance and could tolerate the addition of inorganic salts and many nutritional components to the assay mixtures. Adenosine could be accurately determined by this assay using 96-well microplates. Spike-and-recovery tests showed that the assay can accurately and reproducibly determine increases in adenosine in fermentation broth without any pretreatment to remove proteins or potentially interfering low-molecular-weight molecules. The assay was also applied to high-throughput screening for high adenosine-producing strains. The high selectivity and accuracy of the ADA assay enables rapid, high-throughput analysis of adenosine in large numbers of samples. PMID:25580842
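The quantification step described above amounts to a linear standard curve plus a spike-and-recovery check. A minimal sketch with hypothetical calibration values (not the paper's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration: indophenol absorbance vs adenosine concentration (mM)
conc = [0.0, 0.5, 1.0, 2.0, 4.0]
absorbance = [0.02, 0.27, 0.52, 1.02, 2.02]  # lies on A = 0.5*c + 0.02
slope, intercept = linear_fit(conc, absorbance)

def to_concentration(a):
    """Invert the standard curve: absorbance -> concentration."""
    return (a - intercept) / slope

# Spike-recovery check: broth spiked with 1.0 mM adenosine reads A = 0.52
recovery = 100.0 * to_concentration(0.52) / 1.0
print(round(recovery, 1))  # 100.0
```

In a 96-well format the same fit is applied plate-wide, which is what makes the assay suitable for screening large strain libraries.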
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate a considerable amount of data that often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering, and converting the data produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI); therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next-generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction, and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Its flexibility in input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp). PMID:29432475
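HTDP itself is a GUI Java program; the kind of criteria-driven filtering it performs on character-delimited tables can be sketched in a few lines of Python (column names, values, and thresholds below are hypothetical):

```python
import csv
import io

# Hypothetical variant table (tab-delimited, VCF-like columns)
table = (
    "chrom\tpos\tgene\tdepth\n"
    "chr17\t29422\tNF1\t88\n"
    "chr2\t1001\tACTB\t7\n"
    "chr17\t29560\tNF1\t54\n"
)
# Itemized filter criteria, analogous to HTDP's external criteria files
criteria = {"gene": "NF1", "min_depth": 30}

reader = csv.DictReader(io.StringIO(table), delimiter="\t")
kept = [row for row in reader
        if row["gene"] == criteria["gene"]
        and int(row["depth"]) >= criteria["min_depth"]]

for row in kept:
    print(row["chrom"], row["pos"], row["depth"])
```

The value of a dedicated tool is doing exactly this, interactively and format-agnostically, across many large files at once without the user writing any code.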
A comparison of high-throughput techniques for assaying circadian rhythms in plants.
Tindall, Andrew J; Waller, Jade; Greenwood, Mark; Gould, Peter D; Hartwell, James; Hall, Anthony
2015-01-01
Over the last two decades, the development of high-throughput techniques has enabled us to probe the plant circadian clock, a key coordinator of vital biological processes, in ways previously impossible. With the circadian clock increasingly implicated in key fitness and signalling pathways, this has opened up new avenues for understanding plant development and signalling. Our tool-kit has been constantly improving through continual development and novel techniques that increase throughput, reduce costs and allow higher resolution on the cellular and subcellular levels. With circadian assays becoming more accessible and relevant than ever to researchers, in this paper we offer a review of the techniques currently available before considering the horizons in circadian investigation at ever higher throughputs and resolutions.
Turning tumor-promoting copper into an anti-cancer weapon via high-throughput chemistry.
Wang, F; Jiao, P; Qi, M; Frezza, M; Dou, Q P; Yan, B
2010-01-01
Copper is an essential element for multiple biological processes. Its concentration is elevated to very high levels in cancer tissues, where it promotes cancer development through processes such as angiogenesis. Organic chelators of copper can passively reduce cellular copper and thereby serve as inhibitors of angiogenesis. However, they can also actively attack cellular targets such as the proteasome, which plays a critical role in cancer development and survival. The discovery of such molecules initially relied on step-by-step synthesis followed by biological assays. Today, high-throughput chemistry and high-throughput screening have significantly expedited the discovery of copper-binding molecules that turn "cancer-promoting" copper into anti-cancer agents.
Camilo, Cesar M; Lima, Gustavo M A; Maluf, Fernando V; Guido, Rafael V C; Polikarpov, Igor
2016-01-01
With the burgeoning of genomic and transcriptomic sequencing data, biochemical and molecular biology groups worldwide are implementing high-throughput cloning and mutagenesis facilities in order to obtain large numbers of soluble proteins for structural and functional characterization. Since manual primer design can be a time-consuming and error-prone step, particularly when working with hundreds of targets, automation of the primer design process becomes highly desirable. HTP-OligoDesigner was created to provide the scientific community with a simple and intuitive online primer design tool for both laboratory-scale and high-throughput projects of sequence-independent gene cloning and site-directed mutagenesis, as well as a Tm calculator for quick queries.
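The abstract does not specify which Tm model HTP-OligoDesigner uses; as an illustration of a quick Tm query, the classic Wallace rule (2 °C per A/T pair plus 4 °C per G/C pair, a rough estimate valid only for short oligos) can be coded as:

```python
def wallace_tm(primer):
    """Quick melting-temperature estimate for short oligonucleotides:
    Tm = 2*(A+T) + 4*(G+C) degrees C (Wallace rule)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGC"))  # 36
```

Production tools typically use nearest-neighbor thermodynamics with salt corrections instead, but the rule above captures what a "quick query" calculator does.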
A high performance hardware implementation image encryption with AES algorithm
NASA Astrophysics Data System (ADS)
Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab
2011-06-01
This paper describes the implementation of a high-speed, high-throughput encryption algorithm for images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES implementation on an Altera FPGA achieves a throughput of 6 Gbps at 471 MHz; encrypting a 32×32 test image takes 1.15 ms.
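The reported throughput is easy to sanity-check. Assuming one 128-bit block completes every 10 clock cycles (one AES-128 round per cycle, an assumption consistent with the reported figures rather than a detail stated in the abstract), throughput is block size times clock rate divided by cycles per block:

```python
def aes_throughput_gbps(block_bits, clock_hz, cycles_per_block):
    """Throughput of a block cipher core in Gbps."""
    return block_bits * clock_hz / cycles_per_block / 1e9

# One AES-128 round per cycle -> 10 cycles per 128-bit block at 471 MHz
print(round(aes_throughput_gbps(128, 471e6, 10), 2))  # 6.03
```

The result, about 6.03 Gbps, matches the paper's 6 Gbps figure; a fully unrolled pipeline producing one block per cycle would be ten times faster at the same clock.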
Quesada-Cabrera, Raul; Weng, Xiaole; Hyett, Geoff; Clark, Robin J H; Wang, Xue Z; Darr, Jawwad A
2013-09-09
High-throughput continuous hydrothermal flow synthesis was used to manufacture 66 unique nanostructured oxide samples in the Ce-Zr-Y-O system. This synthesis approach resulted in a significant increase in throughput compared to that of conventional batch or continuous hydrothermal synthesis methods. The as-prepared library samples were placed into a well plate for both automated high-throughput powder X-ray diffraction and Raman spectroscopy data collection, which allowed comprehensive structural characterization and phase mapping. The data suggested that a continuous cubic-like phase field connects all three Ce-Zr-O, Ce-Y-O, and Y-Zr-O binary systems together with a smooth and steady transition between the structures of neighboring compositions. The continuous hydrothermal process led to as-prepared crystallite sizes in the range of 2-7 nm (as determined by using the Scherrer equation).
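The quoted crystallite sizes come from the Scherrer equation, τ = Kλ/(β cos θ), which relates XRD peak broadening to domain size. A sketch assuming Cu Kα radiation and the common shape factor K = 0.9 (the paper's exact instrument parameters are not given here):

```python
import math

def scherrer_size_nm(fwhm_rad, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from XRD peak broadening: tau = K*lambda / (beta*cos(theta)).
    Assumes Cu K-alpha radiation and K = 0.9; instrumental broadening is ignored."""
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# A broad peak (FWHM ~1.6 deg = 0.028 rad) at 2-theta = 30 deg
print(round(scherrer_size_nm(0.028, 30.0), 1))  # ~5.1 nm, within the reported 2-7 nm range
```

Broader peaks give smaller sizes, so the very broad reflections typical of flow-synthesized nanoparticles map directly onto the few-nanometer values reported.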
State of the Art High-Throughput Approaches to Genotoxicity: Flow Micronucleus, Ames II, GreenScreen and Comet (Presented by Dr. Marilyn J. Aardema, Chief Scientific Advisor, Toxicology, Dr. Leon Stankowski, et. al. (6/28/2012)
Fun with High Throughput Toxicokinetics (CalEPA webinar)
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21. These chemicals are tested in part because there are limited or no data on hazard, exposure, or toxicokinetics (TK). TK models aid in predicting tissue concentrations ...
Incorporating Human Dosimetry and Exposure into High-Throughput In Vitro Toxicity Screening
Many chemicals in commerce today have undergone limited or no safety testing. To reduce the number of untested chemicals and prioritize limited testing resources, several governmental programs are using high-throughput in vitro screens for assessing chemical effects across multip...
Environmental Impact on Vascular Development Predicted by High Throughput Screening
Understanding health risks to embryonic development from exposure to environmental chemicals is a significant challenge given the diverse chemical landscape and paucity of data for most of these compounds. High throughput screening (HTS) in EPA’s ToxCastTM project provides vast d...
High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Packaging Materials
United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High -Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...
AOPs and Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making
As high throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will b...
HTTK: R Package for High-Throughput Toxicokinetics
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concent...
tcpl: The ToxCast Pipeline for High-Throughput Screening Data
Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPA ToxCast program require an efficient, transparent, and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...
High-throughput screening, predictive modeling and computational embryology
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...
In Vitro Toxicity Screening Technique for Volatile Substances Using Flow-Through System
In 2007 the National Research Council envisioned the need for inexpensive, high throughput, cell based toxicity testing methods relevant to human health. High Throughput Screening (HTS) in vitro screening approaches have addressed these problems by using robotics. However the cha...
High-throughput in vitro toxicity screening can provide an efficient way to identify potential biological targets for chemicals. However, relying on nominal assay concentrations may misrepresent potential in vivo effects of these chemicals due to differences in bioavailability, c...
High-Throughput Toxicokinetics (HTTK) R package (CompTox CoP presentation)
Toxicokinetics (TK) provides a bridge between HTS and HTE by predicting tissue concentrations due to exposure, but traditional TK methods are resource intensive. Relatively high throughput TK (HTTK) methods have been used by the pharmaceutical industry to determine range of effic...
NASA Astrophysics Data System (ADS)
Lagus, Todd P.; Edd, Jon F.
2013-03-01
Most cell biology experiments are performed in bulk cell suspensions where cell secretions become diluted and mixed in a contiguous sample. Confinement of single cells to small, picoliter-sized droplets within a continuous phase of oil provides chemical isolation of each cell, creating individual microreactors where rare cell qualities are highlighted and otherwise undetectable signals can be concentrated to measurable levels. Recent work in microfluidics has yielded methods for the encapsulation of cells in aqueous droplets and hydrogels at kilohertz rates, creating the potential for millions of parallel single-cell experiments. However, commercial applications of high-throughput microdroplet generation and downstream sensing and actuation methods are still emerging for cells. Using fluorescence-activated cell sorting (FACS) as a benchmark for commercially available high-throughput screening, this focused review discusses the fluid physics of droplet formation, methods for cell encapsulation in liquids and hydrogels, sensors and actuators and notable biological applications of high-throughput single-cell droplet microfluidics.
High-Throughput Cloning and Expression Library Creation for Functional Proteomics
Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua
2013-01-01
The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible and reliable cloning systems. These collections of open reading frame (ORF) clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator™ DNA Cloning System) and compare them side-by-side. We also report an example of high-throughput cloning study and its application in functional proteomics. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP12). Details can be found at http://www.proteomicstutorials.org. PMID:23457047
High-Throughput Lectin Microarray-Based Analysis of Live Cell Surface Glycosylation
Li, Yu; Tao, Sheng-ce; Zhu, Heng; Schneck, Jonathan P.
2011-01-01
Lectins, plant-derived glycan-binding proteins, have long been used to detect glycans on cell surfaces. However, the techniques used to characterize serum or cells have largely been limited to mass spectrometry, blots, flow cytometry, and immunohistochemistry. While these lectin-based approaches are well established and they can discriminate a limited number of sugar isomers by concurrently using a limited number of lectins, they are not amenable for adaptation to a high-throughput platform. Fortunately, given the commercial availability of lectins with a variety of glycan specificities, lectins can be printed on a glass substrate in a microarray format to profile accessible cell-surface glycans. This method is an inviting alternative for analysis of a broad range of glycans in a high-throughput fashion and has been demonstrated to be a feasible method of identifying binding-accessible cell surface glycosylation on living cells. The current unit presents a lectin-based microarray approach for analyzing cell surface glycosylation in a high-throughput fashion. PMID:21400689
Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.
2016-01-01
Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633
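The core calculation behind internal competition kinetics is compact enough to sketch: when many substrates are consumed in the same reaction by parallel first-order processes, the log of each substrate's fraction remaining is proportional to its rate constant. A minimal sketch (function name is hypothetical; it assumes read counts have already been normalized to absolute substrate abundance, e.g. via spike-in controls, which real pipelines must handle explicitly):

```python
import math

def relative_rate_constants(counts_t0, counts_t, ref):
    """Estimate k_i / k_ref for substrates competing in one reaction.

    For parallel first-order consumption, ln(fraction remaining) is
    proportional to the rate constant, so k_i / k_ref = ln(f_i) / ln(f_ref).
    counts_t0 / counts_t: dicts of sequencing read counts per substrate,
    assumed proportional to absolute abundance at times 0 and t.
    """
    # Fraction of each substrate remaining at time t
    frac = {s: counts_t[s] / counts_t0[s] for s in counts_t0}
    log_ref = math.log(frac[ref])
    return {s: math.log(f) / log_ref for s, f in frac.items()}
```

With synthetic counts where one substrate decays twice as fast as the reference, the recovered relative rate constant is ~2, illustrating why a single sequenced reaction can rank thousands of substrates at once.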
Fujimori, Shigeo; Hirai, Naoya; Ohashi, Hiroyuki; Masuoka, Kazuyo; Nishikimi, Akihiko; Fukui, Yoshinori; Washio, Takanori; Oshikubo, Tomohiro; Yamashita, Tatsuhiro; Miyamoto-Sato, Etsuko
2012-01-01
Next-generation sequencing (NGS) has been applied to various kinds of omics studies, resulting in many biological and medical discoveries. However, high-throughput protein-protein interactome datasets derived from detection by sequencing are scarce, because protein-protein interaction analysis requires many cell manipulations to examine the interactions. The low reliability of the high-throughput data is also a problem. Here, we describe a cell-free display technology combined with NGS that can improve both the coverage and reliability of interactome datasets. The completely cell-free method gives a high-throughput and a large detection space, testing the interactions without using clones. The quantitative information provided by NGS reduces the number of false positives. The method is suitable for the in vitro detection of proteins that interact not only with the bait protein, but also with DNA, RNA and chemical compounds. Thus, it could become a universal approach for exploring the large space of protein sequences and interactome networks. PMID:23056904
NCBI GEO: archive for high-throughput functional genomic data.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Muertter, Rolf N; Edgar, Ron
2009-01-01
The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest public repository for high-throughput gene expression data. Additionally, GEO hosts other categories of high-throughput functional genomic data, including those that examine genome copy number variations, chromatin structure, methylation status and transcription factor binding. These data are generated by the research community using high-throughput technologies like microarrays and, more recently, next-generation sequencing. The database has a flexible infrastructure that can capture fully annotated raw and processed data, enabling compliance with major community-derived scientific reporting standards such as 'Minimum Information About a Microarray Experiment' (MIAME). In addition to serving as a centralized data storage hub, GEO offers many tools and features that allow users to effectively explore, analyze and download expression data from both gene-centric and experiment-centric perspectives. This article summarizes the GEO repository structure, content and operating procedures, as well as recently introduced data mining features. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/.
Khan, Arifa S; Vacante, Dominick A; Cassart, Jean-Pol; Ng, Siemon H S; Lambert, Christophe; Charlebois, Robert L; King, Kathryn E
Several nucleic-acid based technologies have recently emerged with capabilities for broad virus detection. One of these, high throughput sequencing, has the potential for novel virus detection because this method does not depend upon prior viral sequence knowledge. However, the use of high throughput sequencing for testing biologicals poses greater challenges as compared to other newly introduced tests due to its technical complexities and big data bioinformatics. Thus, the Advanced Virus Detection Technologies Users Group was formed as a joint effort by regulatory and industry scientists to facilitate discussions and provide a forum for sharing data and experiences using advanced new virus detection technologies, with a focus on high throughput sequencing technologies. The group was initiated as a task force that was coordinated by the Parenteral Drug Association and subsequently became the Advanced Virus Detection Technologies Interest Group to continue efforts for using new technologies for detection of adventitious viruses with broader participation, including international government agencies, academia, and technology service providers. © PDA, Inc. 2016.
The application of the high throughput sequencing technology in the transposable elements.
Liu, Zhen; Xu, Jian-hong
2015-09-01
High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.
Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy
Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.
2017-01-01
Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168
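The inversion underlying any Fourier-transform spectrometer is the same regardless of interferometer design: the interferogram recorded as a function of optical path delay is the Fourier transform of the spectrum, so a discrete transform of the mean-subtracted signal recovers the spectral lines. A toy pure-Python illustration of that relationship on simulated data (not the authors' instrument code; units are arbitrary):

```python
import math

def interferogram(spectrum, n_samples, d_delta):
    """Simulate a sampled interferogram for a discrete line spectrum.
    spectrum: list of (wavenumber, intensity); d_delta: path-delay step."""
    out = []
    for k in range(n_samples):
        delta = k * d_delta
        out.append(sum(a * (1 + math.cos(2 * math.pi * nu * delta))
                       for nu, a in spectrum))
    return out

def recover_spectrum(ifg, d_delta):
    """Recover spectral power via a discrete Fourier transform of the
    mean-subtracted interferogram (the textbook FTS inversion)."""
    n = len(ifg)
    mean = sum(ifg) / n
    ac = [v - mean for v in ifg]  # oscillating (AC) part only
    power = []
    for j in range(n // 2):
        nu = j / (n * d_delta)  # wavenumber corresponding to bin j
        re = sum(v * math.cos(2 * math.pi * j * k / n) for k, v in enumerate(ac))
        im = sum(v * math.sin(2 * math.pi * j * k / n) for k, v in enumerate(ac))
        power.append((nu, math.hypot(re, im) * 2 / n))
    return power
```

A single simulated emission line reappears at the correct wavenumber bin with its original amplitude; compressed-sensing variants exploit the fact that only a subset of these Fourier samples is needed for sparse spectra.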
An improved high-throughput lipid extraction method for the analysis of human brain lipids.
Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett
2013-03-01
We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenizing. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.
Graph-based signal integration for high-throughput phenotyping
2012-01-01
Background: Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. Results: MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. Conclusions: We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping. PMID:23320851
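The accuracy, precision, recall, and F1 figures quoted in such phenotyping evaluations combine in the standard way from the confusion-matrix counts; for reference, a minimal sketch of how they are computed from binary labels:

```python
def classification_metrics(y_true, y_pred):
    """Standard binary classification metrics from parallel 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        # F1 is the harmonic mean of precision and recall
        "f1": 2 * precision * recall / (precision + recall),
    }
```

Plugging in the paper's precision (61.2%) and recall (46.3%) reproduces its reported F1 of ~52.8%, so the harmonic mean is why the graph-based method wins on F1 despite MetaMap's much higher recall.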
High-throughput cloning and expression library creation for functional proteomics.
Festa, Fernanda; Steel, Jason; Bian, Xiaofang; Labaer, Joshua
2013-05-01
The study of protein function usually requires the use of a cloned version of the gene for protein expression and functional assays. This strategy is particularly important when the information available regarding function is limited. The functional characterization of the thousands of newly identified proteins revealed by genomics requires faster methods than traditional single-gene experiments, creating the need for fast, flexible, and reliable cloning systems. These collections of ORF clones can be coupled with high-throughput proteomics platforms, such as protein microarrays and cell-based assays, to answer biological questions. In this tutorial, we provide the background for DNA cloning, discuss the major high-throughput cloning systems (Gateway® Technology, Flexi® Vector Systems, and Creator(TM) DNA Cloning System) and compare them side-by-side. We also report an example of high-throughput cloning study and its application in functional proteomics. This tutorial is part of the International Proteomics Tutorial Programme (IPTP12). © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhu, Shiyou; Li, Wei; Liu, Jingze; Chen, Chen-Hao; Liao, Qi; Xu, Ping; Xu, Han; Xiao, Tengfei; Cao, Zhongzheng; Peng, Jingyu; Yuan, Pengfei; Brown, Myles; Liu, Xiaole Shirley; Wei, Wensheng
2017-01-01
CRISPR/Cas9 screens have been widely adopted to analyse coding gene functions, but high throughput screening of non-coding elements using this method is more challenging, because indels caused by a single cut in non-coding regions are unlikely to produce a functional knockout. A high-throughput method to produce deletions of non-coding DNA is needed. Herein, we report a high throughput genomic deletion strategy to screen for functional long non-coding RNAs (lncRNAs) that is based on a lentiviral paired-guide RNA (pgRNA) library. Applying our screening method, we identified 51 lncRNAs that can positively or negatively regulate human cancer cell growth. We individually validated 9 lncRNAs using CRISPR/Cas9-mediated genomic deletion and functional rescue, CRISPR activation or inhibition, and gene expression profiling. Our high-throughput pgRNA genome deletion method should enable rapid identification of functional mammalian non-coding elements. PMID:27798563
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
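At its core, automated colony counting reduces to labeling connected foreground regions in a thresholded image, optionally filtering out regions below a minimum size. A dependency-free sketch of that step (a stand-in for the operation such macros automate, not the Cell Colony Edge macro itself):

```python
from collections import deque

def count_colonies(grid, min_size=1):
    """Count connected foreground regions (4-connectivity) in a binary image,
    the core operation behind automated colony counting."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # BFS flood fill marks every pixel of this colony
                size = 0
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count
```

Real pipelines add the upstream steps this sketch omits: thresholding, edge detection, and watershed splitting of touching colonies, which is where tools like ImageJ and CellProfiler earn their keep.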
High-throughput determination of structural phase diagram and constituent phases using GRENDEL
NASA Astrophysics Data System (ADS)
Kusne, A. G.; Keller, D.; Anderson, A.; Zaban, A.; Takeuchi, I.
2015-11-01
Advances in high-throughput materials fabrication and characterization techniques have resulted in faster rates of data collection and rapidly growing volumes of experimental data. To convert this mass of information into actionable knowledge of material process-structure-property relationships requires high-throughput data analysis techniques. This work explores the use of the Graph-based endmember extraction and labeling (GRENDEL) algorithm as a high-throughput method for analyzing structural data from combinatorial libraries, specifically, to determine phase diagrams and constituent phases from both x-ray diffraction and Raman spectral data. The GRENDEL algorithm utilizes a set of physical constraints to optimize results and provides a framework by which additional physics-based constraints can be easily incorporated. GRENDEL also permits the integration of database data, as shown by the use of critically evaluated data from the Inorganic Crystal Structure Database in the x-ray diffraction data analysis. The Sunburst radial tree map is also demonstrated as a tool to visualize material structure-property relationships found through graph-based analysis.
High-throughput screening of filamentous fungi using nanoliter-range droplet-based microfluidics
NASA Astrophysics Data System (ADS)
Beneyton, Thomas; Wijaya, I. Putu Mahendra; Postros, Prexilia; Najah, Majdi; Leblond, Pascal; Couvent, Angélique; Mayot, Estelle; Griffiths, Andrew D.; Drevelle, Antoine
2016-06-01
Filamentous fungi are an extremely important source of industrial enzymes because of their capacity to secrete large quantities of proteins. Currently, functional screening of fungi is associated with low throughput and high costs, which severely limits the discovery of novel enzymatic activities and better production strains. Here, we describe a nanoliter-range droplet-based microfluidic system specially adapted for the high-throughput screening (HTS) of large filamentous fungi libraries for secreted enzyme activities. The platform allowed (i) compartmentalization of single spores in ~10 nl droplets, (ii) germination and mycelium growth and (iii) high-throughput sorting of fungi based on enzymatic activity. A 10⁴-clone UV-mutated library of Aspergillus niger was screened based on α-amylase activity in just 90 minutes. Active clones were enriched 196-fold after a single round of microfluidic HTS. The platform is a powerful tool for the development of new production strains with low cost, space and time footprint and should bring enormous benefit for improving the viability of biotechnological processes.
Lee, Hangyeore; Mun, Dong-Gi; Bae, Jingi; Kim, Hokeun; Oh, Se Yeon; Park, Young Soo; Lee, Jae-Hyuk; Lee, Sang-Won
2015-08-21
We report a new and simple design of a fully automated dual-online ultra-high pressure liquid chromatography (sDO-UHPLC) system. The system employs only two nano-volume switching valves (a two-position four-port valve and a two-position ten-port valve) that direct solvent flows from two binary nano-pumps for parallel operation of two analytical columns and two solid phase extraction (SPE) columns. Despite the simple design, the sDO-UHPLC offers many advantageous features, including a high duty cycle, back-flushing sample injection for fast and narrow-zone sample injection, online desalting, high separation resolution and high intra/inter-column reproducibility. This system was applied to analyze proteome samples not only in high throughput deep proteome profiling experiments but also in high throughput MRM experiments.
Optimization and high-throughput screening of antimicrobial peptides.
Blondelle, Sylvie E; Lohner, Karl
2010-01-01
While a well-established process for lead compound discovery in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries combined with easy and less expensive access to new technologies have greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole cell-based assays. In particular, the urgent need for new antimicrobial compounds that would overcome the rapid rise of drug-resistant microorganisms, where multiple target assays or cell-based assays are often required, has forced scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent a new hope in discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed a rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences as well as the latest structural knowledge and optimization processes aimed at improving the peptides' selectivity.
Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.
Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen
2018-07-20
Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and shorten development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.
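Intermediate UV/Vis concentration determination for conjugates typically rests on two-wavelength Beer-Lambert deconvolution: absorbance at two wavelengths, together with the known extinction coefficients of antibody and drug at both wavelengths, gives both concentrations and hence the drug-to-antibody ratio. A sketch of that calculation (the extinction coefficients and wavelengths in the test are illustrative placeholders, not measured values from this work):

```python
def dar_from_uv(a1, a2, eps_mab, eps_drug, path_cm=1.0):
    """Estimate drug-to-antibody ratio (DAR) from absorbance at two wavelengths.

    Beer-Lambert: A(wl) = path * (eps_mab(wl)*c_mab + eps_drug(wl)*c_drug),
    a 2x2 linear system solved here by Cramer's rule.
    eps_mab, eps_drug: (wl1, wl2) molar extinction coefficient pairs.
    """
    e11, e12 = eps_mab    # mAb at wavelength 1 and 2
    e21, e22 = eps_drug   # drug at wavelength 1 and 2
    det = e11 * e22 - e12 * e21
    c_mab = (a1 * e22 - a2 * e21) / (det * path_cm)
    c_drug = (e11 * a2 - e12 * a1) / (det * path_cm)
    return c_drug / c_mab
```

Automating this readout on a liquid-handling station, with feedback into the next conjugation step, is what lets such platforms keep concentrations on target without manual intervention.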
NASA Astrophysics Data System (ADS)
Yan, Zongkai; Zhang, Xiaokun; Li, Guang; Cui, Yuxing; Jiang, Zhaolian; Liu, Wen; Peng, Zhi; Xiang, Yong
2018-01-01
The conventional wet-process methods for designing and preparing thin films remain a challenge because they are time-consuming and inefficient, which hinders the development of novel materials. Herein, we present a high-throughput combinatorial technique for continuous thin film preparation based on chemical bath deposition (CBD). The method is ideally suited to preparing high-throughput combinatorial material libraries with low decomposition temperatures and high water- or oxygen-sensitivity at relatively high temperature. To validate this system, a Cu(In,Ga)Se₂ (CIGS) thin film library doped with 0-19.04 at.% of antimony (Sb) was taken as an example to systematically evaluate the effect of varying Sb doping concentration on the grain growth, structure, morphology and electrical properties of CIGS thin films. Combined with Energy Dispersive Spectrometry (EDS), X-ray Photoelectron Spectroscopy (XPS), automated X-ray Diffraction (XRD) for rapid screening and Localized Electrochemical Impedance Spectroscopy (LEIS), it was confirmed that this combinatorial high-throughput system can systematically identify the composition with the optimal grain orientation growth, microstructure and electrical properties by accurately monitoring the doping content and material composition. Based on the characterization results, an Sb₂Se₃ quasi-liquid-phase-promoted CIGS film-growth model is put forward. Beyond the CIGS thin films reported here, combinatorial CBD could also be applied to the high-throughput screening of other sulfide thin film material systems.
Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang
2017-04-01
Among all of the high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. This detection throughput cannot meet the demands of high-throughput applications such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, which showed that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection for ME-qPCR could achieve levels as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The more stable amplification results, compared to qPCR, showed that ME-qPCR was suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume, generating four kinds of amplicons. Each of these four amplicons can serve as a target for the second-step PCR.
For the second-step amplification, three parallel reactions were run for the final evaluation, from which the final amplification and melting curves were obtained.
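The Ct values and single-copy detection limit discussed above connect through the usual qPCR standard-curve relationship: Ct is linear in log10 of the starting copy number, and the slope of that line gives the amplification efficiency. A short sketch (the slope and intercept values in the test are hypothetical, chosen only to illustrate the arithmetic):

```python
def copies_from_ct(ct, slope, intercept):
    """Absolute quantification from a qPCR standard curve.

    The curve is Ct = slope * log10(copies) + intercept,
    so copies = 10 ** ((ct - intercept) / slope).
    """
    return 10 ** ((ct - intercept) / slope)

def amplification_efficiency(slope):
    """Efficiency from the standard-curve slope; a slope of about -3.32
    corresponds to 100% efficiency (perfect doubling each cycle)."""
    return 10 ** (-1.0 / slope) - 1.0
```

Lower Ct values at the same input, as reported for ME-qPCR versus plain qPCR, shift this curve downward and push the quantifiable range toward lower copy numbers.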
High Throughput Transcriptomics @ USEPA (Toxicology ...
The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.
Mobile element biology – new possibilities with high-throughput sequencing
Xing, Jinchuan; Witherspoon, David J.; Jorde, Lynn B.
2014-01-01
Mobile elements compose more than half of the human genome, but until recently their large-scale detection was time-consuming and challenging. With the development of new high-throughput sequencing technologies, the complete spectrum of mobile element variation in humans can now be identified and analyzed. Thousands of new mobile element insertions have been discovered, yielding new insights into mobile element biology, evolution, and genomic variation. We review several high-throughput methods, with an emphasis on techniques that specifically target mobile element insertions in humans, and we highlight recent applications of these methods in evolutionary studies and in the analysis of somatic alterations in human cancers. PMID:23312846
Advances in high throughput DNA sequence data compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz
2016-06-01
Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.
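A reference-free baseline that most specialized sequence compressors improve on is plain 2-bit packing of the four bases, which alone cuts ACGT text to a quarter of its one-byte-per-base size. A minimal round-trip sketch (fixed alphabet, no quality scores or ambiguity codes, which real read compressors must also handle):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, 4 bases per byte (2 bits each).
    Returns (original_length, packed_bytes)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4].ljust(4, "A")  # pad final group; length fixes it on decode
        b = 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        out.append(b)
    return len(seq), bytes(out)

def unpack(n, data):
    """Inverse of pack: expand 2-bit codes back to bases, truncating padding."""
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 3])
    return "".join(bases[:n])
```

Referential methods go further by storing only differences against a reference genome, and entropy coders exploit the skewed statistics of real reads; both start from the observation this sketch makes concrete, that naive text encoding wastes at least six of every eight bits.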
Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome
2014-04-25
In this paper we review the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
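At the scale described (a million compounds spread over many MPI ranks), the work-partitioning step is worth sketching. Below is an illustrative static block decomposition of the kind such a task-parallel driver might use; it mimics only the index arithmetic, not the actual Autodock4 MPI source at the URL above, and `partition` is an invented name.

```python
# Illustrative static block decomposition of a ligand library across MPI ranks.
# Each rank computes its own contiguous slice of task indices from (n_tasks,
# n_ranks, rank) alone, so no communication is needed to distribute work.

def partition(n_tasks: int, n_ranks: int, rank: int) -> range:
    """Return the contiguous block of task indices assigned to one rank."""
    base, extra = divmod(n_tasks, n_ranks)
    # the first `extra` ranks each take one additional task
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return range(start, start + size)

# One million compounds over 1,000 ranks:
n_compounds, n_ranks = 1_000_000, 1_000
blocks = [partition(n_compounds, n_ranks, r) for r in range(n_ranks)]
assert sum(len(b) for b in blocks) == n_compounds   # every compound assigned once
assert blocks[0].stop == blocks[1].start            # no gaps or overlap
```

In a real MPI driver, `rank` would come from the communicator (e.g. `comm.Get_rank()` in mpi4py) and each rank would then dock only its own block of ligands.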
LOCATE: a mouse protein subcellular localization database
Fink, J. Lynn; Aturaliya, Rajith N.; Davis, Melissa J.; Zhang, Fasheng; Hanson, Kelly; Teasdale, Melvena S.; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Teasdale, Rohan D.
2006-01-01
We present here LOCATE, a curated, web-accessible database that houses data describing the membrane organization and subcellular localization of proteins from the FANTOM3 Isoform Protein Sequence set. Membrane organization is predicted by the high-throughput, computational pipeline MemO. The subcellular locations of selected proteins from this set were determined by a high-throughput, immunofluorescence-based assay and by manually reviewing >1700 peer-reviewed publications. LOCATE represents the first effort to catalogue the experimentally verified subcellular location and membrane organization of mammalian proteins using a high-throughput approach and provides localization data for ∼40% of the mouse proteome. It is available at . PMID:16381849
USDA-ARS's Scientific Manuscript database
Extraction of DNA from tissue samples can be expensive both in time and monetary resources and can often require handling and disposal of hazardous chemicals. We have developed a high throughput protocol for extracting DNA from honey bees that is of a high enough quality and quantity to enable hundr...
NASA Astrophysics Data System (ADS)
Green, Martin L.; Takeuchi, Ichiro; Hattrick-Simpers, Jason R.
2013-06-01
High throughput (combinatorial) materials science methodology is a relatively new research paradigm that offers the promise of rapid and efficient materials screening, optimization, and discovery. The paradigm started in the pharmaceutical industry but was rapidly adopted to accelerate materials research in a wide variety of areas. High throughput experiments are characterized by synthesis of a "library" sample that contains the materials variation of interest (typically composition), and rapid and localized measurement schemes that result in massive data sets. Because the data are collected at the same time on the same "library" sample, they can be highly uniform with respect to fixed processing parameters. This article critically reviews the literature pertaining to applications of combinatorial materials science for electronic, magnetic, optical, and energy-related materials. It is expected that high throughput methodologies will facilitate commercialization of novel materials for these critically important applications. Despite the overwhelming evidence presented in this paper that high throughput studies can effectively inform commercial practice, in our perception, it remains an underutilized research and development tool. Part of this perception may be due to the inaccessibility of proprietary industrial research and development practices, but clearly the initial cost and availability of high throughput laboratory equipment plays a role. Combinatorial materials science has traditionally been focused on materials discovery, screening, and optimization to combat the extremely high cost and long development times for new materials and their introduction into commerce. 
Going forward, combinatorial materials science will also be driven by other needs such as materials substitution and experimental verification of materials properties predicted by modeling and simulation, which have recently received much attention with the advent of the Materials Genome Initiative. Thus, the challenge for combinatorial methodology will be the effective coupling of synthesis, characterization and theory, and the ability to rapidly manage large amounts of data in a variety of formats.
Toxicokinetics (TK) provides a bridge between toxicity and exposure assessment by predicting tissue concentrations due to exposure; however, traditional TK methods are resource intensive. Relatively high throughput TK (HTTK) methods have been used by the pharmaceutical industry to...
Under the ExpoCast program, United States Environmental Protection Agency (EPA) researchers have developed a high-throughput (HT) framework for estimating aggregate exposures to chemicals from multiple pathways to support rapid prioritization of chemicals. Here, we present method...
Environmental surveillance and monitoring. The next frontiers for high-throughput toxicology
High throughput toxicity testing (HTT) technologies along with the world-wide web are revolutionizing both generation and access to data regarding the bioactivities that chemicals can elicit when they interact with specific proteins, genes, or other targets in the body of an orga...
High-Throughput Models for Exposure-Based Chemical Prioritization in the ExpoCast Project
The United States Environmental Protection Agency (U.S. EPA) must characterize potential risks to human health and the environment associated with manufacture and use of thousands of chemicals. High-throughput screening (HTS) for biological activity allows the ToxCast research pr...
High-Throughput Exposure Potential Prioritization for ToxCast Chemicals
The U.S. EPA must consider lists of hundreds to thousands of chemicals when prioritizing research resources in order to identify risk to human populations and the environment. High-throughput assays to identify biological activity in vitro have allowed the ToxCast™ program to i...
Use of High-Throughput Testing and Approaches for Evaluating Chemical Risk-Relevance to Humans
ToxCast is profiling the bioactivity of thousands of chemicals based on high-throughput screening (HTS) and computational models that integrate knowledge of biological systems and in vivo toxicities. Many of these assays probe signaling pathways and cellular processes critical to...
High-Throughput Simulation of Environmental Chemical Fate for Exposure Prioritization
The U.S. EPA must consider lists of hundreds to thousands of chemicals when allocating resources to identify risk in human populations and the environment. High-throughput screening assays to characterize biological activity in vitro have allowed the ToxCast™ program to identify...
Notredame, Cedric
2018-05-02
Cedric Notredame from the Centre for Genomic Regulation gives a presentation on New Challenges of the Computation of Multiple Sequence Alignments in the High-Throughput Era at the JGI/Argonne HPC Workshop on January 26, 2010.
USDA-ARS's Scientific Manuscript database
Contigs with sequence similarities to several nucleorhabdoviruses were identified by high-throughput sequencing analysis from a black currant (Ribes nigrum L.) cultivar. The complete genomic sequence of this new nucleorhabdovirus is 14,432 nucleotides. Its genomic organization is typical of nucleorh...
Estimating Toxicity Pathway Activating Doses for High Throughput Chemical Risk Assessments
Estimating a Toxicity Pathway Activating Dose (TPAD) from in vitro assays as an analog to a reference dose (RfD) derived from in vivo toxicity tests would facilitate high throughput risk assessments of thousands of data-poor environmental chemicals. Estimating a TPAD requires def...
Momentum is growing worldwide to use in vitro high-throughput screening (HTS) to evaluate human health effects of chemicals. However, the integration of dosimetry into HTS assays and incorporation of population variability will be essential before its application in a risk assess...
The US EPA’s ToxCast program is designed to assess chemical perturbations of molecular and cellular endpoints using a variety of high-throughput screening (HTS) assays. However, existing HTS assays have limited or no xenobiotic metabolism which could lead to a mischaracterization...
Defining the taxonomic domain of applicability for mammalian-based high-throughput screening assays
Cell-based high throughput screening (HTS) technologies are becoming mainstream in chemical safety evaluations. The US Environmental Protection Agency (EPA) Toxicity Forecaster (ToxCast™) and the multi-agency Tox21 Programs have been at the forefront in advancing this science, m...
Athavale, Ajay
2018-01-04
Ajay Athavale (Monsanto) presents "High Throughput Plasmid Sequencing with Illumina and CLC Bio" at the 7th Annual Sequencing, Finishing, Analysis in the Future (SFAF) Meeting held in June, 2012 in Santa Fe, NM.
The Impact of Data Fragmentation on High-Throughput Clinical Phenotyping
ERIC Educational Resources Information Center
Wei, Weiqi
2012-01-01
Subject selection is essential and has become the rate-limiting step for harvesting knowledge to advance healthcare through clinical research. Present manual approaches inhibit researchers from conducting deep and broad studies and drawing confident conclusions. High-throughput clinical phenotyping (HTCP), a recently proposed approach, leverages…
High-Throughput Cancer Cell Sphere Formation for 3D Cell Culture.
Chen, Yu-Chih; Yoon, Euisik
2017-01-01
Three-dimensional (3D) cell culture is critical in studying cancer pathology and drug response. Though 3D cancer sphere culture can be performed in low-adherent dishes or well plates, the unregulated cell aggregation may skew the results. In contrast, microfluidic 3D culture can allow precise control of cell microenvironments and provide higher throughput by orders of magnitude. In this chapter, we will look into engineering innovations in a microfluidic platform for high-throughput cancer cell sphere formation and review the implementation methods in detail.
Droplet microfluidic technology for single-cell high-throughput screening.
Brouzes, Eric; Medkova, Martina; Savenelli, Neal; Marran, Dave; Twardowski, Mariusz; Hutchison, J Brian; Rothberg, Jonathan M; Link, Darren R; Perrimon, Norbert; Samuels, Michael L
2009-08-25
We present a droplet-based microfluidic technology that enables high-throughput screening of single mammalian cells. This integrated platform allows for the encapsulation of single cells and reagents in independent aqueous microdroplets (1 pL to 10 nL volumes) dispersed in an immiscible carrier oil and enables the digital manipulation of these reactors at very high throughput. Here, we validate a full droplet screening workflow by conducting a droplet-based cytotoxicity screen. To perform this screen, we first developed a droplet viability assay that permits the quantitative scoring of cell viability and growth within intact droplets. Next, we demonstrated the high viability of encapsulated human monocytic U937 cells over a period of 4 days. Finally, we developed an optically-coded droplet library enabling the identification of the droplets' composition during the assay read-out. Using the integrated droplet technology, we screened a drug library for its cytotoxic effect against U937 cells. Taken together, our droplet microfluidic platform is modular, robust, uses no moving parts, and has a wide range of potential applications including high-throughput single-cell analyses, combinatorial screening, and facilitating small sample analyses.
Microfluidic guillotine for single-cell wound repair studies
NASA Astrophysics Data System (ADS)
Blauch, Lucas R.; Gai, Ya; Khor, Jian Wei; Sood, Pranidhi; Marshall, Wallace F.; Tang, Sindy K. Y.
2017-07-01
Wound repair is a key feature distinguishing living from nonliving matter. Single cells are increasingly recognized to be capable of healing wounds. The lack of reproducible, high-throughput wounding methods has hindered single-cell wound repair studies. This work describes a microfluidic guillotine for bisecting single Stentor coeruleus cells in a continuous-flow manner. Stentor is used as a model due to its robust repair capacity and the ability to perform gene knockdown in a high-throughput manner. Local cutting dynamics reveals two regimes under which cells are bisected, one at low viscous stress where cells are cut with small membrane ruptures and high viability and one at high viscous stress where cells are cut with extended membrane ruptures and decreased viability. A cutting throughput up to 64 cells per minute—more than 200 times faster than current methods—is achieved. The method allows the generation of more than 100 cells in a synchronized stage of their repair process. This capacity, combined with high-throughput gene knockdown in Stentor, enables time-course mechanistic studies impossible with current wounding methods.
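The throughput claim above can be checked with simple arithmetic: a rate of 64 cells per minute that is more than 200 times faster than current methods implies those methods manage fewer than about 0.32 cells per minute, i.e. several minutes per cell.

```python
# Sanity check on the speedup arithmetic reported in the abstract.
guillotine_rate = 64                      # cells cut per minute
speedup = 200                             # stated ">200x faster"
implied_prior_rate = guillotine_rate / speedup   # cells/min for prior methods
assert implied_prior_rate < 0.33          # under ~0.32 cells per minute
assert 1 / implied_prior_rate > 3         # i.e. more than 3 minutes per cell
```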
CrossCheck: an open-source web tool for high-throughput screen data analysis.
Najafov, Jamil; Najafov, Ayaz
2017-07-19
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
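The core operation such a cross-referencing platform performs is a set intersection between the user's gene-symbol list and each stored dataset. A minimal sketch of that idea follows; the dataset names and gene symbols are made up for illustration, and this is not CrossCheck's actual schema or API.

```python
# Minimal sketch of screen-hit cross-referencing: intersect a user-entered
# gene-symbol list with each published hit list and report the overlaps.

def cross_check(user_genes, datasets):
    """Return {dataset_name: sorted overlapping symbols} for nonempty overlaps."""
    query = {g.upper() for g in user_genes}   # treat symbols case-insensitively
    hits = {}
    for name, symbols in datasets.items():
        overlap = query & {s.upper() for s in symbols}
        if overlap:
            hits[name] = sorted(overlap)
    return hits

# Hypothetical published hit lists:
datasets = {
    "rnai_screen_A": ["TP53", "MTOR", "AKT1"],
    "crispr_screen_B": ["BRCA1", "MTOR"],
    "kinase_substrates": ["GSK3B"],
}
result = cross_check(["mtor", "TP53"], datasets)
assert result == {"rnai_screen_A": ["MTOR", "TP53"], "crispr_screen_B": ["MTOR"]}
```

Scaled to thousands of stored datasets, the same intersection (plus enrichment statistics) is what turns a raw hit list into literature-anchored hypotheses.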
Ion channel drug discovery and research: the automated Nano-Patch-Clamp technology.
Brueggemann, A; George, M; Klau, M; Beckler, M; Steindl, J; Behrends, J C; Fertig, N
2004-01-01
Unlike the genomics revolution, which was largely enabled by a single technological advance (high throughput sequencing), rapid advancement in proteomics will require a broader effort to increase the throughput of a number of key tools for functional analysis of different types of proteins. In the case of ion channels, a class of (membrane) proteins of great physiological importance and potential as drug targets, the lack of adequate assay technologies is felt particularly strongly. The available, indirect, high throughput screening methods for ion channels clearly generate insufficient information. The best technology to study ion channel function and screen for compound interaction is the patch clamp technique, but patch clamping suffers from low throughput, which is not acceptable for drug screening. A first step towards a solution is presented here. The nano patch clamp technology, which is based on a planar, microstructured glass chip, enables automatic whole cell patch clamp measurements. The Port-a-Patch is an automated electrophysiology workstation, which uses planar patch clamp chips. This approach enables high quality and high content ion channel and compound evaluation on a one-cell-at-a-time basis. The presented automation of the patch process and its scalability to an array format are the prerequisites for any higher-throughput electrophysiology instruments.
Exploiting Helminth-Host Interactomes through Big Data.
Sotillo, Javier; Toledo, Rafael; Mulvenna, Jason; Loukas, Alex
2017-11-01
Helminths facilitate their parasitic existence through the production and secretion of different molecules, including proteins. Some helminth proteins can manipulate the host's immune system, a phenomenon that is now being exploited with a view to developing therapeutics for inflammatory diseases. In recent years, hundreds of helminth genomes have been sequenced, but as a community we are still taking baby steps when it comes to identifying proteins that govern host-helminth interactions. The information generated from genomic, immunomic, and proteomic studies, as well as from cutting-edge approaches such as proteogenomics, is leading to a substantial volume of big data that can be utilised to shed light on fundamental biology and provide solutions for the development of bioactive-molecule-based therapeutics. Copyright © 2017 Elsevier Ltd. All rights reserved.
High-Reflectivity Coatings for a Vacuum Ultraviolet Spectropolarimeter
NASA Astrophysics Data System (ADS)
Narukage, Noriyuki; Kubo, Masahito; Ishikawa, Ryohko; Ishikawa, Shin-nosuke; Katsukawa, Yukio; Kobiki, Toshihiko; Giono, Gabriel; Kano, Ryouhei; Bando, Takamasa; Tsuneta, Saku; Auchère, Frédéric; Kobayashi, Ken; Winebarger, Amy; McCandless, Jim; Chen, Jianrong; Choi, Joanne
2017-03-01
Precise polarization measurements in the vacuum ultraviolet (VUV) region are expected to be a new tool for inferring the magnetic fields in the upper atmosphere of the Sun. High-reflectivity coatings are key elements to achieving high-throughput optics for precise polarization measurements. We fabricated three types of high-reflectivity coatings for a solar spectropolarimeter in the hydrogen Lyman-α (Lyα; 121.567 nm) region and evaluated their performance. The first high-reflectivity mirror coating offers a reflectivity of more than 80% in Lyα optics. The second is a reflective narrow-band filter coating that has a peak reflectivity of 57% in Lyα, whereas its reflectivity in the visible light range is lower than 1/10 of the peak reflectivity (~5% on average). This coating can be used to easily realize a visible light rejection system, which is indispensable for a solar telescope, while maintaining high throughput in the Lyα line. The third is a high-efficiency reflective polarizing coating that almost exclusively reflects an s-polarized beam at its Brewster angle of 68° with a reflectivity of 55%. This coating achieves both high polarizing power and high throughput. These coatings contributed to the high-throughput solar VUV spectropolarimeter called the Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP), which was launched on 3 September 2015.
A simple and sensitive high-throughput GFP screening in woody and herbaceous plants.
Hily, Jean-Michel; Liu, Zongrang
2009-03-01
Green fluorescent protein (GFP) has been used widely as a powerful bioluminescent reporter, but its visualization by existing methods in tissues or whole plants and its utilization for high-throughput screening remains challenging in many species. Here, we report a fluorescence image analyzer-based method for GFP detection and its utility for high-throughput screening of transformed plants. Of three detection methods tested, the Typhoon fluorescence scanner was able to detect GFP fluorescence in all Arabidopsis thaliana tissues and apple leaves, while regular fluorescence microscopy detected it only in Arabidopsis flowers and siliques but barely in the leaves of either Arabidopsis or apple. The hand-held UV illumination method failed in all tissues of both species. Additionally, the Typhoon imager was able to detect GFP fluorescence in both green and non-green tissues of Arabidopsis seedlings as well as in imbibed seeds, qualifying it as a high-throughput screening tool, which was further demonstrated by screening the seedlings of primary transformed T(0) seeds. Of the 30,000 germinating Arabidopsis seedlings screened, at least 69 GFP-positive lines were identified, accounting for an approximately 0.23% transformation efficiency. About 14,000 seedlings grown in 16 Petri plates could be screened within an hour, making the screening process significantly more efficient and robust than any other existing high-throughput screening method for transgenic plants.
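The numbers reported above are easy to verify: 69 GFP-positive lines out of roughly 30,000 screened seedlings gives the stated ~0.23% transformation efficiency, and at ~14,000 seedlings per hour the whole screen fits in a little over two hours.

```python
# Verify the screening-efficiency arithmetic from the abstract.
gfp_positive = 69
seedlings_screened = 30_000
efficiency = gfp_positive / seedlings_screened * 100   # percent
assert round(efficiency, 2) == 0.23                    # matches reported ~0.23%

# Reported scan rate: ~14,000 seedlings (16 Petri plates) per hour.
hours_for_full_screen = seedlings_screened / 14_000
assert 2 < hours_for_full_screen < 2.5                 # a little over two hours
```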
Bláha, Benjamin A F; Morris, Stephen A; Ogonah, Olotu W; Maucourant, Sophie; Crescente, Vincenzo; Rosenberg, William; Mukhopadhyay, Tarit K
2018-01-01
The time and cost benefits of miniaturized fermentation platforms can only be gained by employing complementary techniques facilitating high throughput at small sample volumes. Microbial cell disruption is a major bottleneck in experimental throughput and is often restricted to large processing volumes. Moreover, for rigid yeast species, such as Pichia pastoris, no effective high-throughput disruption methods exist. The development of an automated, miniaturized, high-throughput, noncontact, scalable platform based on adaptive focused acoustics (AFA) to disrupt P. pastoris and recover intracellular heterologous protein is described. Augmented modes of AFA were established by investigating vessel designs and a novel enzymatic pretreatment step. Three different modes of AFA were studied and compared to the performance of high-pressure homogenization. For each of these modes of cell disruption, response models were developed to account for five different performance criteria. Using multiple responses not only demonstrated that different operating parameters are required for different response optima, with highest product purity requiring suboptimal values for other criteria, but also allowed for AFA-based methods to mimic large-scale homogenization processes. These results demonstrate that AFA-mediated cell disruption can be used for a wide range of applications including buffer development, strain selection, fermentation process development, and whole bioprocess integration. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:130-140, 2018.
Alterman, Julia F; Coles, Andrew H; Hall, Lauren M; Aronin, Neil; Khvorova, Anastasia; Didiot, Marie-Cécile
2017-08-20
Primary neurons represent an ideal cellular system for the identification of therapeutic oligonucleotides for the treatment of neurodegenerative diseases. However, due to the sensitive nature of primary cells, the transfection of small interfering RNAs (siRNA) using classical methods is laborious and often shows low efficiency. Recent progress in oligonucleotide chemistry has enabled the development of stabilized and hydrophobically modified small interfering RNAs (hsiRNAs). This new class of oligonucleotide therapeutics shows extremely efficient self-delivery properties and supports potent and durable effects in vitro and in vivo. We have developed a high-throughput in vitro assay to identify and test hsiRNAs in primary neuronal cultures. To simply, rapidly, and accurately quantify the mRNA silencing of hundreds of hsiRNAs, we use the QuantiGene 2.0 quantitative gene expression assay. This high-throughput, 96-well plate-based assay can quantify mRNA levels directly from sample lysate. Here, we describe a method to prepare short-term cultures of mouse primary cortical neurons in a 96-well plate format for high-throughput testing of oligonucleotide therapeutics. This method supports the testing of hsiRNA libraries and the identification of potential therapeutics within just two weeks. We detail the methodologies of our high-throughput assay workflow from primary neuron preparation to data analysis. This method can help identify oligonucleotide therapeutics for the treatment of various neurological diseases.
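Silencing in plate-based expression assays like this one is commonly scored as percent target mRNA remaining, after normalizing the target signal to a housekeeping gene and then to untreated control wells. The sketch below shows that generic normalization scheme; it is not necessarily the authors' exact pipeline, and all readout values are hypothetical.

```python
# Generic normalization for branched-DNA-style expression readouts:
# target signal / housekeeping signal, expressed relative to untreated control.
# A common scheme, sketched for illustration - not the authors' exact method.

def percent_remaining(target, housekeeping, ctrl_target, ctrl_housekeeping):
    """Percent target mRNA remaining in a treated well vs. untreated control."""
    treated_ratio = target / housekeeping
    control_ratio = ctrl_target / ctrl_housekeeping
    return 100.0 * treated_ratio / control_ratio

# Hypothetical well readouts for one hsiRNA:
remaining = percent_remaining(target=250, housekeeping=1000,
                              ctrl_target=1000, ctrl_housekeeping=1000)
assert remaining == 25.0    # i.e., 75% silencing for this hsiRNA
```

Applied across a 96-well plate, this single number per well is what lets hundreds of hsiRNAs be ranked in one pass.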
Yoshii, Yukie; Furukawa, Takako; Waki, Atsuo; Okuyama, Hiroaki; Inoue, Masahiro; Itoh, Manabu; Zhang, Ming-Rong; Wakizaka, Hidekatsu; Sogawa, Chizuru; Kiyono, Yasushi; Yoshii, Hiroshi; Fujibayashi, Yasuhisa; Saga, Tsuneo
2015-05-01
Anti-cancer drug development typically utilizes high-throughput screening with two-dimensional (2D) cell culture. However, 2D culture induces cellular characteristics different from tumors in vivo, resulting in inefficient drug development. Here, we report an innovative high-throughput screening system using nanoimprinting 3D culture to simulate in vivo conditions, thereby facilitating efficient drug development. We demonstrated that cell line-based nanoimprinting 3D screening can more efficiently select drugs that effectively inhibit cancer growth in vivo as compared to 2D culture. Metabolic responses after treatment were assessed using positron emission tomography (PET) probes, and revealed similar characteristics between the 3D spheroids and in vivo tumors. Further, we developed an advanced method to adapt cancer cells from patient tumor tissues for high-throughput drug screening with nanoimprinting 3D culture, which we termed the Cancer tissue-Originated Uniformed Spheroid Assay (COUSA). This system identified drugs that were effective in xenografts of the original patient tumors. Nanoimprinting 3D spheroids showed low permeability and formation of hypoxic regions inside, similar to in vivo tumors. Collectively, nanoimprinting 3D culture provides an easy-to-handle, high-throughput drug screening system, which allows for efficient drug development by mimicking the tumor environment. The COUSA system could be a useful platform for drug development with patient cancer cells. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Multidisciplinary Approach to High Throughput Nuclear Magnetic Resonance Spectroscopy
Pourmodheji, Hossein; Ghafar-Zadeh, Ebrahim; Magierowski, Sebastian
2016-01-01
Nuclear Magnetic Resonance (NMR) is a non-contact, powerful structure-elucidation technique for biochemical analysis. NMR spectroscopy is used extensively in a variety of life science applications including drug discovery. However, existing NMR technology is limited in that it cannot run a large number of experiments simultaneously in one unit. Recent advances in micro-fabrication technologies have attracted the attention of researchers to overcome these limitations and significantly accelerate the drug discovery process by developing the next generation of high-throughput NMR spectrometers using Complementary Metal Oxide Semiconductor (CMOS) technology. In this paper, we examine this paradigm shift and explore new design strategies for the development of the next generation of high-throughput NMR spectrometers using CMOS technology. A CMOS NMR system consists of an array of high-sensitivity micro-coils integrated with interfacing radio-frequency circuits on the same chip. Herein, we first discuss the key challenges and recent advances in the field of CMOS NMR technology, and then a new design strategy is put forward for the design and implementation of highly sensitive and high-throughput CMOS NMR spectrometers. We thereafter discuss the functionality and applicability of the proposed techniques by demonstrating the results. For microelectronic researchers starting to work in the field of CMOS NMR technology, this paper serves as a tutorial with a comprehensive review of state-of-the-art technologies and their performance levels. Based on these levels, the CMOS NMR approach offers unique advantages for the high-resolution, time-sensitive and high-throughput biomolecular analysis required in a variety of life science applications including drug discovery. PMID:27294925
Break-up of droplets in a concentrated emulsion flowing through a narrow constriction
NASA Astrophysics Data System (ADS)
Kim, Minkyu; Rosenfeld, Liat; Tang, Sindy; Tang Lab Team
2014-11-01
Droplet microfluidics has enabled a wide range of high throughput screening applications. Compared with other technologies such as robotic screening technology, droplet microfluidics has 1000 times higher throughput, which makes the technology one of the most promising platforms for the ultrahigh throughput screening applications. Few studies have considered the throughput of the droplet interrogation process, however. In this research, we show that the probability of break-up increases with increasing flow rate, entrance angle to the constriction, and size of the drops. Since single drops do not break at the highest flow rate used in the system, break-ups occur primarily from the interactions between highly packed droplets close to each other. Moreover, the probabilistic nature of the break-up process arises from the stochastic variations in the packing configuration. Our results can be used to calculate the maximum throughput of the serial interrogation process. For 40 pL-drops, the highest throughput with less than 1% droplet break-up was measured to be approximately 7,000 drops per second. In addition, the results are useful for understanding the behavior of concentrated emulsions in applications such as mobility control in enhanced oil recovery.
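The reported maximum can also be expressed as a volumetric throughput: 7,000 drops per second at 40 pL per drop corresponds to roughly 1 mL of emulsion interrogated per hour. This is a plain unit conversion from the numbers in the abstract.

```python
# Convert the reported maximum interrogation rate to volume per hour:
# 7,000 drops/s x 40 pL/drop, pL -> L -> mL over 3600 s.
drops_per_second = 7_000
drop_volume_pl = 40
ml_per_hour = drops_per_second * drop_volume_pl * 1e-12 * 3600 * 1e3
assert 0.9 < ml_per_hour < 1.1    # ~1 mL of emulsion per hour
```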
In response to a proposed vision and strategy for toxicity testing in the 21st century, nascent high throughput toxicology (HTT) programs have tested thousands of chemicals in hundreds of pathway-based biological assays. Although, to date, use of HTT data for safety assessment of ...
Molecular characterization of a novel Luteovirus from peach identified by high-throughput sequencing
USDA-ARS's Scientific Manuscript database
Contigs with sequence homologies to Cherry-associated luteovirus were identified by high-throughput sequencing analysis of two peach accessions undergoing quarantine testing. The complete genomic sequences of the two isolates of this virus are 5,819 and 5,814 nucleotides. Their genome organization i...
Increasing efficiency and declining cost of generating whole transcriptome profiles has made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
Toxicokinetics (TK) provides a bridge between toxicity and exposure assessment by predicting tissue concentrations due to exposure. However traditional TK methods are resource intensive. Relatively high throughput TK (HTTK) methods have been used by the pharmaceutical industry to...
Results from rodent and non-rodent prenatal developmental toxicity tests for over 300 chemicals have been curated into the relational database ToxRefDB. These same chemicals have been run in concentration-response format through over 500 high-throughput screening assays assessin...
Discovery of viruses and virus-like pathogens in pistachio using high-throughput sequencing
USDA-ARS's Scientific Manuscript database
Pistachio (Pistacia vera L.) trees from the National Clonal Germplasm Repository (NCGR) and orchards in California were surveyed for viruses and virus-like agents by high-throughput sequencing (HTS). Analyses of 60 trees including clonal UCB-1 hybrid rootstock (P. atlantica × P. integerrima) identif...
SeqAPASS to evaluate conservation of high-throughput screening targets across non-mammalian species
Cell-based high-throughput screening (HTS) and computational technologies are being applied as tools for toxicity testing in the 21st century. The U.S. Environmental Protection Agency (EPA) embraced these technologies and created the ToxCast Program in 2007, which has served as a...
20180312 - Applying a High-Throughput PBTK Model for IVIVE (SOT)
The ability to link in vitro and in vivo toxicity enables the use of high-throughput in vitro assays as an alternative to resource intensive animal studies. Toxicokinetics (TK) should help describe this link, but prior work found weak correlation when using a TK model for in vitr...
Disruption of steroidogenesis by environmental chemicals can result in altered hormone levels causing adverse reproductive and developmental effects. A high-throughput assay using H295R human adrenocortical carcinoma cells was used to evaluate the effect of 2,060 chemical samples...
The U.S. EPA must consider thousands of chemicals when allocating resources to assess risk in human populations and the environment. High-throughput screening assays to characterize biological activity in vitro are being implemented in the ToxCast™ program to rapidly characteri...
Application of the Adverse Outcome Pathway (AOP) framework and high throughput toxicity testing in chemical-specific risk assessment requires reconciliation of chemical concentrations sufficient to trigger a molecular initiating event measured in vitro and at the relevant target ...
Evaluation of food-relevant chemicals in the ToxCast high-throughput screening program
There are thousands of chemicals that are directly added to or come in contact with food, many of which have undergone little to no toxicological evaluation. The ToxCast high-throughput screening (HTS) program has evaluated over 1,800 chemicals in concentration-response across ~8...
In vitro-based assays are used to identify potential endocrine-disrupting chemicals. Thyroperoxidase (TPO), an enzyme essential for thyroid hormone (TH) synthesis, is a target site for disruption of the thyroid axis for which a high-throughput screening (HTS) assay has recently ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compare...
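The OED translation mentioned above amounts to simple reverse dosimetry. A minimal sketch, assuming linear toxicokinetics so that steady-state plasma concentration (Css) scales proportionally with dose; the function name and all numbers are hypothetical illustrations, not values from the study:

```python
def oral_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Reverse dosimetry: convert an in vitro active concentration
    (e.g., an AC50) into the external dose expected to produce that
    concentration in plasma, assuming Css scales linearly with dose."""
    return ac50_uM / css_uM_per_mg_kg_day  # units: mg/kg/day

# Hypothetical chemical: AC50 = 3.0 uM; a 1 mg/kg/day dose gives Css = 1.5 uM
oed = oral_equivalent_dose(3.0, 1.5)
print(oed)  # 2.0
```

A bioactivity concentration of 3 µM then corresponds to an oral equivalent dose of 2 mg/kg/day, which can be compared directly against exposure estimates.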
An Evaluation of 25 Selected ToxCast Chemicals in Medium-Throughput Assays to Detect Genotoxicity
ToxCast is a multi-year effort to develop a cost-effective approach for the US EPA to prioritize chemicals for toxicity testing. Initial evaluation of more than 500 high-throughput (HT) microwell-based assays without metabolic activation showed that most lacked high speci...
High Throughput Assays for Exposure Science (NIEHS OHAT Staff Meeting presentation)
High throughput screening (HTS) data that characterize chemically induced biological activity have been generated for thousands of chemicals by the US interagency Tox21 and the US EPA ToxCast programs. In many cases there are no data available for comparing bioactivity from HTS w...
Increasing efficiency and the declining cost of generating whole-transcriptome profiles have made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
The U.S. Environmental Protection Agency’s ToxCast program has screened thousands of chemicals for biological activity, primarily using high-throughput in vitro bioassays. Adverse outcome pathways (AOPs) offer a means to link pathway-specific biological activities with potential ...
“httk”: EPA’s Tool for High Throughput Toxicokinetics (CompTox CoP)
Thousands of chemicals have been profiled by high-throughput screening programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics. Toxicokinetic models aid in predicting tissue concentr...
High-throughput screening (HTS) for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limi...
Perspectives on Validation of High-Throughput Assays Supporting 21st Century Toxicity Testing
In vitro high-throughput screening (HTS) assays are seeing increasing use in toxicity testing. HTS assays can simultaneously test many chemicals but have seen limited use in the regulatory arena, in part because of the need to undergo rigorous, time-consuming formal validation. ...
The toxicity-testing paradigm has evolved to include high-throughput (HT) methods for addressing the increasing need to screen hundreds to thousands of chemicals rapidly. Approaches that involve in vitro screening assays, in silico predictions of exposure concentrations, and phar...
Predictive Model of Rat Reproductive Toxicity from ToxCast High Throughput Screening
The EPA ToxCast research program uses high throughput screening for bioactivity profiling and predicting the toxicity of large numbers of chemicals. ToxCast Phase I tested 309 well-characterized chemicals in over 500 assays for a wide range of molecular targets and cellular respo...
High-throughput genotyping of hop (Humulus lupulus L.) utilising diversity arrays technology (DArT)
USDA-ARS's Scientific Manuscript database
Implementation of molecular methods in hop breeding is dependent on the availability of sizeable numbers of polymorphic markers and a comprehensive understanding of genetic variation. Diversity Arrays Technology (DArT) is a high-throughput cost-effective method for the discovery of large numbers of...
In vitro, high-throughput approaches have been widely recommended as an approach to screen chemicals for the potential to cause developmental neurotoxicity and prioritize them for additional testing. The choice of cellular models for such an approach will have important ramificat...
High-throughput exposure modeling to support prioritization of chemicals in personal care products
We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken by an individual or ...
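As a rough sketch of how a product intake fraction converts product use into chemical intake (all names and numbers below are hypothetical, not taken from the framework described):

```python
def daily_intake_mg(pif, chemical_mass_fraction, product_use_g_per_day):
    """Screening-level intake: PiF is the fraction of the chemical mass
    in the product, as used, that is taken in by an individual."""
    chemical_g_per_day = chemical_mass_fraction * product_use_g_per_day
    return pif * chemical_g_per_day * 1000.0  # grams -> milligrams

# Hypothetical lotion: 8 g applied per day, 0.5% preservative, PiF = 0.04
intake = daily_intake_mg(pif=0.04, chemical_mass_fraction=0.005,
                         product_use_g_per_day=8.0)
print(round(intake, 3))  # 1.6 (mg/day)
```

Multiplying such per-product intakes across a person's product portfolio gives the aggregate exposure estimate used for prioritization.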
AbstractHigh-throughput methods are useful for rapidly screening large numbers of chemicals for biological activity, including the perturbation of pathways that may lead to adverse cellular effects. In vitro assays for the key events of neurodevelopment, including apoptosis, may ...
One use of alternative methods is to target animal use at only those chemicals and tests that are absolutely necessary. We discuss prioritization of testing based on high-throughput screening assays (HTS), QSAR modeling, high-throughput toxicokinetics (HTTK), and exposure modelin...
We demonstrate a computational network model that integrates 18 in vitro, high-throughput screening assays measuring estrogen receptor (ER) binding, dimerization, chromatin binding, transcriptional activation and ER-dependent cell proliferation. The network model uses activity pa...
The CTD2 Center at Emory University used high-throughput protein-protein interaction (PPI) mapping for Hippo signaling pathway profiling to rapidly unveil promising PPIs as potential therapeutic targets and advance functional understanding of signaling circuitry in cells.
Over the past ten years, the US government has invested in high-throughput (HT) methods to screen chemicals for biological activity. Under the interagency Tox21 consortium and the US Environmental Protection Agency’s (EPA) ToxCast™ program, thousands of chemicals have...
USDA-ARS's Scientific Manuscript database
In the last few years, high-throughput genomics promised to bridge the gap between plant physiology and plant sciences. In addition, high-throughput genotyping technologies facilitate marker-based selection for better performing genotypes. In strawberry, Fragaria vesca was the first reference sequen...
Human ATAD5 is an excellent biomarker for identifying genotoxic compounds because ATAD5 protein levels increase post-transcriptionally following exposure to a variety of DNA damaging agents. Here we report a novel quantitative high-throughput ATAD5-luciferase assay that can moni...
Paiva, Anthony; Shou, Wilson Z
2016-08-01
The last several years have seen the rapid adoption of high-resolution MS (HRMS) for bioanalytical support of high-throughput in vitro ADME profiling. Many capable software tools have been developed and refined to process quantitative HRMS bioanalysis data for ADME samples with excellent performance. Additionally, new software applications specifically designed for quan/qual soft-spot identification workflows using HRMS have greatly enhanced the quality and efficiency of the structure elucidation process for high-throughput metabolite ID in early in vitro ADME profiling. Finally, novel approaches in data acquisition and compression, as well as tools for transferring, archiving and retrieving HRMS data, are being continuously refined to tackle the issue of the large data file sizes typical of HRMS analyses.
Automated crystallographic system for high-throughput protein structure determination.
Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F
2003-07-01
High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.
A high-throughput, multi-channel photon-counting detector with picosecond timing
NASA Astrophysics Data System (ADS)
Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.
2009-06-01
High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high-content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small-pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration, which uses the multi-channel HPTDC timing ASIC.
Direct assembling methodologies for high-throughput bioscreening
Rodríguez-Dévora, Jorge I.; Shi, Zhi-dong; Xu, Tao
2012-01-01
Over the last few decades, high-throughput (HT) bioscreening, a technique that allows rapid screening of biochemical compound libraries against biological targets, has been widely used in drug discovery, stem cell research, development of new biomaterials, and genomics research. To achieve these ambitions, scaffold-free (or direct) assembly of biological entities of interest has become critical. Appropriate assembling methodologies are required to build an efficient HT bioscreening platform. The development of contact and non-contact assembling systems as a practical solution has been driven by a variety of essential attributes of the bioscreening system, such as miniaturization, high throughput, and high precision. The present article reviews recent progress on these assembling technologies utilized for the construction of HT bioscreening platforms. PMID:22021162
Suram, Santosh K.; Newhouse, Paul F.; Zhou, Lan; ...
2016-09-23
Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light absorber materials, particularly in high order composition spaces. High throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with direct band gap near 2.7 eV. Here, the strategic combination of experimental and data analysis techniques includes automated Tauc analysis to estimate band gap energies from the high throughput spectroscopy data, providing an automated platform for identifying new optical materials.
Lambert, Nathaniel D.; Pankratz, V. Shane; Larrabee, Beth R.; Ogee-Nwankwo, Adaeze; Chen, Min-hsin; Icenogle, Joseph P.
2014-01-01
Rubella remains a social and economic burden due to the high incidence of congenital rubella syndrome (CRS) in some countries. For this reason, an accurate and efficient high-throughput measure of antibody response to vaccination is an important tool. In order to measure rubella-specific neutralizing antibodies in a large cohort of vaccinated individuals, a high-throughput immunocolorimetric system was developed. Statistical interpolation models were applied to the resulting titers to refine quantitative estimates of neutralizing antibody titers relative to the assayed neutralizing antibody dilutions. This assay, including the statistical methods developed, can be used to assess the neutralizing humoral immune response to rubella virus and may be adaptable for assessing the response to other viral vaccines and infectious agents. PMID:24391140
NASA Astrophysics Data System (ADS)
Jian, Wei; Estevez, Claudio; Chowdhury, Arshad; Jia, Zhensheng; Wang, Jianxin; Yu, Jianguo; Chang, Gee-Kung
2010-12-01
This paper presents an energy-efficient Medium Access Control (MAC) protocol for very-high-throughput millimeter-wave (mm-wave) wireless sensor communication networks (VHT-MSCNs) based on hybrid multiple access techniques of frequency-division multiple access (FDMA) and time-division multiple access (TDMA). An energy-efficient Superframe for wireless sensor communication networks employing directional mm-wave wireless access technologies is proposed for systems that require very high throughput, such as high-definition video signals, for sensing, processing, transmitting, and actuating functions. Energy consumption modeling for each network element and comparisons among various multi-access technologies in terms of power and MAC layer operations are investigated for evaluating the energy-efficiency improvement of the proposed MAC protocol.
Suram, Santosh K; Newhouse, Paul F; Zhou, Lan; Van Campen, Douglas G; Mehta, Apurva; Gregoire, John M
2016-11-14
Combinatorial materials science strategies have accelerated materials development in a variety of fields, and we extend these strategies to enable structure-property mapping for light absorber materials, particularly in high order composition spaces. High throughput optical spectroscopy and synchrotron X-ray diffraction are combined to identify the optical properties of Bi-V-Fe oxides, leading to the identification of Bi4V1.5Fe0.5O10.5 as a light absorber with direct band gap near 2.7 eV. The strategic combination of experimental and data analysis techniques includes automated Tauc analysis to estimate band gap energies from the high throughput spectroscopy data, providing an automated platform for identifying new optical materials.
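The automated Tauc analysis referenced here can be illustrated with a minimal sketch: for a direct-gap absorber, (αhν)² is linear in photon energy above the gap, and the x-intercept of a line fit to the absorption edge estimates the band gap. The data below are synthetic, generated for an ideal absorber with a 2.7 eV gap; this is not the authors' pipeline:

```python
def tauc_direct_gap(hv, alpha):
    """Estimate a direct band gap by the Tauc method: least-squares fit
    of (alpha*h*nu)**2 versus photon energy in the edge region; the
    x-intercept of the fitted line is the band gap estimate."""
    y = [(e * a) ** 2 for e, a in zip(hv, alpha)]
    n = len(hv)
    sx, sy = sum(hv), sum(y)
    sxx = sum(e * e for e in hv)
    sxy = sum(e * v for e, v in zip(hv, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -intercept / slope  # energy where (alpha*h*nu)^2 -> 0

# Synthetic edge for a hypothetical absorber with Eg = 2.7 eV:
# alpha ~ sqrt(h*nu - Eg) / (h*nu) above the gap (direct transition)
hv = [2.75 + 0.01 * i for i in range(40)]
alpha = [(e - 2.7) ** 0.5 / e for e in hv]
print(round(tauc_direct_gap(hv, alpha), 2))  # 2.7
```

An automated pipeline would additionally select the linear region per sample before fitting, which is where most of the practical difficulty lies.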
High-throughput and automated SAXS/USAXS experiment for industrial use at BL19B2 in SPring-8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osaka, Keiichi, E-mail: k-osaka@spring8.or.jp; Inoue, Daisuke; Sato, Masugu
A highly automated system combining a sample transfer robot with a focused SR beam has been established for small-angle and ultra-small-angle X-ray scattering (SAXS/USAXS) measurement at BL19B2 for industrial use of SPring-8. A high-throughput data collection system is realized by means of an X-ray beam of high photon flux density concentrated by a cylindrical mirror and a two-dimensional pixel detector, PILATUS-2M. For SAXS measurement, we can obtain high-quality data within 1 minute for one exposure using this system. The sample transfer robot has a capacity of 90 samples with a large variety of shapes. The fusion of high-throughput and robotic systems has enhanced the usability of SAXS/USAXS capability for industrial application.
USDA-ARS's Scientific Manuscript database
The effect of refrigeration on bacterial communities within raw and pasteurized buffalo milk was studied using high-throughput sequencing. High quality samples of raw buffalo milk were obtained from five dairy farms in the Guangxi province of China. A sample of each milk was pasteurized, and both r...
High throughput integrated thermal characterization with non-contact optical calorimetry
NASA Astrophysics Data System (ADS)
Hou, Sichao; Huo, Ruiqing; Su, Ming
2017-10-01
Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments and limited by low throughput, where only one sample is examined each time. This work reports an infrared-based optical calorimetry method, with its theoretical foundation, which provides an integrated solution to characterize thermal properties of materials with high throughput. By taking time-domain temperature information of spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase change systems (melting temperature and latent heat of fusion) and non-phase change systems (thermal conductivity and heat capacity). This method further allows these thermal properties of multiple samples to be determined rapidly, remotely, and simultaneously. In this proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of the advanced infrared camera, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
NASA Astrophysics Data System (ADS)
Yamada, Yusuke; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Igarashi, Noriyuki; Amano, Yasushi; Warizaya, Masaichi; Sakashita, Hitoshi; Kikuchi, Takashi; Mori, Takeharu; Toyoshima, Akio; Kishimoto, Shunji; Wakatsuki, Soichi
2010-06-01
Recent advances in high-throughput techniques for macromolecular crystallography have highlighted the importance of structure-based drug design (SBDD), and the demand for synchrotron use by pharmaceutical researchers has increased. Thus, in collaboration with Astellas Pharma Inc., we have constructed a new high-throughput macromolecular crystallography beamline, AR-NE3A, which is dedicated to SBDD. At AR-NE3A, a photon flux up to three times higher than those at the existing high-throughput beamlines at the Photon Factory, AR-NW12A and BL-5A, can be realized at the same sample positions. Installed in the experimental hutch are a high-precision diffractometer; a fast-readout, high-gain CCD detector; and a sample exchange robot capable of handling more than two hundred cryo-cooled samples stored in a Dewar. To facilitate the high-throughput data collection required for pharmaceutical research, fully automated data collection and processing systems have been developed. Thus, sample exchange, centering, data collection, and data processing are automatically carried out based on the user's pre-defined schedule. Although Astellas Pharma Inc. has priority access to AR-NE3A, the remaining beam time is allocated to general academic and other industrial users.
High-Throughput Intracellular Antimicrobial Susceptibility Testing of Legionella pneumophila.
Chiaraviglio, Lucius; Kirby, James E
2015-12-01
Legionella pneumophila is a Gram-negative opportunistic human pathogen that causes a severe pneumonia known as Legionnaires' disease. Notably, in the human host, the organism is believed to replicate solely within an intracellular compartment, predominantly within pulmonary macrophages. Consequently, successful therapy is predicated on antimicrobials penetrating into this intracellular growth niche. However, standard antimicrobial susceptibility testing methods test solely for extracellular growth inhibition. Here, we make use of a high-throughput assay to characterize intracellular growth inhibition activity of known antimicrobials. For select antimicrobials, high-resolution dose-response analysis was then performed to characterize and compare activity levels in both macrophage infection and axenic growth assays. Results support the superiority of several classes of nonpolar antimicrobials in abrogating intracellular growth. Importantly, our assay results show excellent correlations with prior clinical observations of antimicrobial efficacy. Furthermore, we also show the applicability of high-throughput automation to two- and three-dimensional synergy testing. High-resolution isocontour isobolograms provide in vitro support for specific combination antimicrobial therapy. Taken together, findings suggest that high-throughput screening technology may be successfully applied to identify and characterize antimicrobials that target bacterial pathogens that make use of an intracellular growth niche. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
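The per-query task that MUMmerGPU parallelizes (one thread per query, matched against a suffix tree of the reference) can be sketched serially. This naive scan illustrates the computation, not the suffix-tree or CUDA implementation; the sequences and minimum match length are made up:

```python
def exact_matches(reference, queries, min_len):
    """For each query of at least min_len bases, report every position
    where it occurs exactly in the reference. MUMmerGPU performs this
    per-query search in parallel on the GPU against a suffix tree."""
    hits = {}
    for q in queries:
        if len(q) < min_len:
            continue
        positions, start = [], reference.find(q)
        while start != -1:
            positions.append(start)
            start = reference.find(q, start + 1)
        hits[q] = positions
    return hits

ref = "ACGTACGTGACGT"
print(exact_matches(ref, ["ACGT", "GAC"], 3))
# {'ACGT': [0, 4, 9], 'GAC': [8]}
```

Because each query is independent, the loop over queries maps naturally onto thousands of GPU threads, which is the source of the reported speedup.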
Buntin, Nirunya; Hongpattarakere, Tipparat; Ritari, Jarmo; Douillard, François P; Paulin, Lars; Boeren, Sjef; Shetty, Sudarshan A; de Vos, Willem M
2017-01-15
The draft genomes of Lactobacillus plantarum strains isolated from Asian fermented foods, infant feces, and shrimp intestines were sequenced and compared to those of well-studied strains. Among 28 strains of L. plantarum, variations in the genomic features involved in ecological adaptation were elucidated. The genome sizes ranged from approximately 3.1 to 3.5 Mb, of which about 2,932 to 3,345 protein-coding sequences (CDS) were predicted. The food-derived isolates contained a higher number of carbohydrate metabolism-associated genes than those from infant feces. This observation correlated to their phenotypic carbohydrate metabolic profile, indicating their ability to metabolize the largest range of sugars. Surprisingly, two strains (P14 and P76) isolated from fermented fish utilized inulin. β-Fructosidase, the inulin-degrading enzyme, was detected in the supernatants and cell wall extracts of both strains. No activity was observed in the cytoplasmic fraction, indicating that this key enzyme was either membrane-bound or extracellularly secreted. From genomic mining analysis, a predicted inulin operon of fosRABCDXE, which encodes β-fructosidase and many fructose transporting proteins, was found within the genomes of strains P14 and P76. Moreover, pts1BCA genes, encoding sucrose-specific IIBCA components involved in sucrose transport, were also identified. The proteomic analysis revealed the mechanism and functional characteristic of the fosRABCDXE operon involved in the inulin utilization of L. plantarum. The expression levels of the fos operon and pst genes were upregulated at mid-log phase. FosE and the LPXTG-motif cell wall anchored β-fructosidase were induced to a high abundance when inulin was present as a carbon source. Inulin is a long-chain carbohydrate that may act as a prebiotic, which provides many health benefits to the host by selectively stimulating the growth and activity of beneficial bacteria in the colon. 
While certain lactobacilli can catabolize inulin, this has not yet been described for Lactobacillus plantarum, and an associated putative inulin operon has not been reported in this species. By using comparative and functional genomics, we showed that two L. plantarum strains utilized inulin and identified functional inulin operons in their genomes. The proteogenomic data revealed that inulin degradation and uptake routes, which related to the fosRABCDXE operon and pstBCA genes, were widely expressed among L. plantarum strains. The present work provides a novel understanding of gene regulation and mechanisms of inulin utilization in probiotic L. plantarum, generating opportunities for synbiotic product development. Copyright © 2016 American Society for Microbiology.
Buntin, Nirunya; Hongpattarakere, Tipparat; Ritari, Jarmo; Douillard, François P.; Paulin, Lars; Boeren, Sjef; Shetty, Sudarshan A.
2016-01-01
ABSTRACT The draft genomes of Lactobacillus plantarum strains isolated from Asian fermented foods, infant feces, and shrimp intestines were sequenced and compared to those of well-studied strains. Among 28 strains of L. plantarum, variations in the genomic features involved in ecological adaptation were elucidated. The genome sizes ranged from approximately 3.1 to 3.5 Mb, of which about 2,932 to 3,345 protein-coding sequences (CDS) were predicted. The food-derived isolates contained a higher number of carbohydrate metabolism-associated genes than those from infant feces. This observation correlated to their phenotypic carbohydrate metabolic profile, indicating their ability to metabolize the largest range of sugars. Surprisingly, two strains (P14 and P76) isolated from fermented fish utilized inulin. β-Fructosidase, the inulin-degrading enzyme, was detected in the supernatants and cell wall extracts of both strains. No activity was observed in the cytoplasmic fraction, indicating that this key enzyme was either membrane-bound or extracellularly secreted. From genomic mining analysis, a predicted inulin operon of fosRABCDXE, which encodes β-fructosidase and many fructose transporting proteins, was found within the genomes of strains P14 and P76. Moreover, pts1BCA genes, encoding sucrose-specific IIBCA components involved in sucrose transport, were also identified. The proteomic analysis revealed the mechanism and functional characteristic of the fosRABCDXE operon involved in the inulin utilization of L. plantarum. The expression levels of the fos operon and pst genes were upregulated at mid-log phase. FosE and the LPXTG-motif cell wall anchored β-fructosidase were induced to a high abundance when inulin was present as a carbon source. IMPORTANCE Inulin is a long-chain carbohydrate that may act as a prebiotic, which provides many health benefits to the host by selectively stimulating the growth and activity of beneficial bacteria in the colon. 
While certain lactobacilli can catabolize inulin, this has not yet been described for Lactobacillus plantarum, and an associated putative inulin operon has not been reported in this species. By using comparative and functional genomics, we showed that two L. plantarum strains utilized inulin and identified functional inulin operons in their genomes. The proteogenomic data revealed that inulin degradation and uptake routes, which related to the fosRABCDXE operon and pts1BCA genes, were widely expressed among L. plantarum strains. The present work provides a novel understanding of gene regulation and mechanisms of inulin utilization in probiotic L. plantarum, generating opportunities for synbiotic product development. PMID:27815279
Sasagawa, Yohei; Danno, Hiroki; Takada, Hitomi; Ebisawa, Masashi; Tanaka, Kaori; Hayashi, Tetsutaro; Kurisaki, Akira; Nikaido, Itoshi
2018-03-09
High-throughput single-cell RNA-seq methods assign limited unique molecular identifier (UMI) counts as gene expression values to single cells from shallow sequence reads and detect limited gene counts. We thus developed a high-throughput single-cell RNA-seq method, Quartz-Seq2, to overcome these issues. Our improvements in the reaction steps make it possible to effectively convert initial reads to UMI counts, at a rate of 30-50%, and detect more genes. To demonstrate the power of Quartz-Seq2, we analyzed approximately 10,000 transcriptomes from in vitro embryonic stem cells and an in vivo stromal vascular fraction with a limited number of reads.
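The read-to-UMI conversion that methods like Quartz-Seq2 depend on can be illustrated with a minimal sketch (hypothetical reads and barcodes, not the authors' pipeline): reads sharing the same cell barcode, gene, and unique molecular identifier are collapsed into a single molecule count.

```python
from collections import defaultdict

def umi_counts(reads):
    """Collapse reads sharing (cell barcode, gene, UMI) into one molecule.

    `reads` is an iterable of (cell, gene, umi) tuples; the return value maps
    (cell, gene) to the number of distinct molecules observed.
    """
    umis = defaultdict(set)
    for cell, gene, umi in reads:
        umis[(cell, gene)].add(umi)  # duplicate UMIs count once
    return {key: len(s) for key, s in umis.items()}

reads = [
    ("cell1", "geneA", "ACGT"),  # PCR duplicates of one molecule...
    ("cell1", "geneA", "ACGT"),
    ("cell1", "geneA", "TTGC"),  # ...and a second molecule of the same gene
    ("cell1", "geneB", "ACGT"),
]
print(umi_counts(reads))  # {('cell1', 'geneA'): 2, ('cell1', 'geneB'): 1}
```

The fraction of initial reads that survive this collapse into UMI counts is the conversion efficiency the abstract reports at 30-50%.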
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
2016-05-26
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
Optimizing multi-dimensional high throughput screening using zebrafish
Truong, Lisa; Bugel, Sean M.; Chlebowski, Anna; Usenko, Crystal Y.; Simonich, Michael T.; Massey Simonich, Staci L.; Tanguay, Robert L.
2016-01-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. PMID:27453428
Combinatorial and high-throughput approaches in polymer science
NASA Astrophysics Data System (ADS)
Zhang, Huiqi; Hoogenboom, Richard; Meier, Michael A. R.; Schubert, Ulrich S.
2005-01-01
Combinatorial and high-throughput approaches have become topics of great interest in the last decade due to their potential ability to significantly increase research productivity. Recent years have witnessed a rapid extension of these approaches in many areas of the discovery of new materials including pharmaceuticals, inorganic materials, catalysts and polymers. This paper mainly highlights our progress in polymer research by using an automated parallel synthesizer, microwave synthesizer and ink-jet printer. The equipment and methodologies in our experiments, the high-throughput experimentation of different polymerizations (such as atom transfer radical polymerization, cationic ring-opening polymerization and emulsion polymerization) and the automated matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) sample preparation are described.
Zador, Anthony M.; Dubnau, Joshua; Oyibo, Hassana K.; Zhan, Huiqing; Cao, Gang; Peikon, Ian D.
2012-01-01
Connectivity determines the function of neural circuits. Historically, circuit mapping has usually been viewed as a problem of microscopy, but no current method can achieve high-throughput mapping of entire circuits with single neuron precision. Here we describe a novel approach to determining connectivity. We propose BOINC (“barcoding of individual neuronal connections”), a method for converting the problem of connectivity into a form that can be read out by high-throughput DNA sequencing. The appeal of using sequencing is that its scale—sequencing billions of nucleotides per day is now routine—is a natural match to the complexity of neural circuits. An inexpensive high-throughput technique for establishing circuit connectivity at single neuron resolution could transform neuroscience research. PMID:23109909
High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME
NASA Astrophysics Data System (ADS)
Otis, Richard A.; Liu, Zi-Kui
2017-05-01
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Ramakumar, Adarsh; Subramanian, Uma; Prasanna, Pataje G S
2015-11-01
High-throughput individual diagnostic dose assessment is essential for medical management of radiation-exposed subjects after a mass casualty. Cytogenetic assays such as the Dicentric Chromosome Assay (DCA) are recognized as the gold standard by international regulatory authorities. DCA is a multi-step and multi-day bioassay. DCA, as described in the IAEA manual, can be used to assess dose up to 4-6 weeks post-exposure quite accurately but throughput is still a major issue and automation is very essential. The throughput is limited, both in terms of sample preparation as well as analysis of chromosome aberrations. Thus, there is a need to design and develop novel solutions that could utilize extensive laboratory automation for sample preparation, and bioinformatics approaches for chromosome-aberration analysis to overcome throughput issues. We have transitioned the bench-based cytogenetic DCA to a coherent process performing high-throughput automated biodosimetry for individual dose assessment ensuring quality control (QC) and quality assurance (QA) aspects in accordance with international harmonized protocols. A Laboratory Information Management System (LIMS) is designed, implemented and adapted to manage increased sample processing capacity, develop and maintain standard operating procedures (SOP) for robotic instruments, avoid data transcription errors during processing, and automate analysis of chromosome-aberrations using an image analysis platform. Our efforts described in this paper intend to bridge the current technological gaps and enhance the potential application of DCA for a dose-based stratification of subjects following a mass casualty. This paper describes one such potential integrated automated laboratory system and functional evolution of the classical DCA towards increasing critically needed throughput. Published by Elsevier B.V.
Development of Droplet Microfluidics Enabling High-Throughput Single-Cell Analysis.
Wen, Na; Zhao, Zhan; Fan, Beiyuan; Chen, Deyong; Men, Dong; Wang, Junbo; Chen, Jian
2016-07-05
This article reviews recent developments in droplet microfluidics enabling high-throughput single-cell analysis. Five key aspects in this field are included in this review: (1) prototype demonstration of single-cell encapsulation in microfluidic droplets; (2) technical improvements of single-cell encapsulation in microfluidic droplets; (3) microfluidic droplets enabling single-cell proteomic analysis; (4) microfluidic droplets enabling single-cell genomic analysis; and (5) integrated microfluidic droplet systems enabling single-cell screening. We examine the advantages and limitations of each technique and discuss future research opportunities by focusing on key performances of throughput, multifunctionality, and absolute quantification.
The Stochastic Human Exposure and Dose Simulation Model – High-Throughput (SHEDS-HT) is a U.S. Environmental Protection Agency research tool for predicting screening-level (low-tier) exposures to chemicals in consumer products. This course will present an overview of this m...
Chemical perturbation of vascular development is a putative toxicity pathway which may result in developmental toxicity. EPA’s high-throughput screening (HTS) ToxCast program contains assays which measure cellular signals and biological processes critical for blood vessel develop...
The past five years have witnessed a rapid shift in the exposure science and toxicology communities towards high-throughput (HT) analyses of chemicals as potential stressors of human and ecological health. Modeling efforts have largely led the charge in the exposure science field...
Forecasting Exposure in Order to Use High Throughput Hazard Data in a Risk-based Context (WC9)
The ToxCast program and Tox21 consortium have evaluated over 8000 chemicals using in vitro high-throughput screening (HTS) to identify potential hazards. Complementary exposure science needed to assess risk, and the U.S. Environmental Protection Agency (EPA)’s ExpoCast initiative...
An industrial engineering approach to laboratory automation for high throughput screening
Menke, Karl C.
2000-01-01
Across the pharmaceutical industry, there are a variety of approaches to laboratory automation for high throughput screening. At Sphinx Pharmaceuticals, the principles of industrial engineering have been applied to systematically identify and develop those automated solutions that provide the greatest value to the scientists engaged in lead generation. PMID:18924701
The U.S. Environmental Protection Agency’s ToxCast program has screened thousands of chemicals for biological activity, primarily using high-throughput in vitro bioassays. Adverse outcome pathways (AOPs) offer a means to link pathway-specific biological activities with pote...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of bioche...
The Environmental Protection Agency has implemented a high throughput screening program, ToxCast, to quickly evaluate large numbers of chemicals for their effects on hundreds of different biological targets. To understand how these measurements relate to adverse effects in an or...
USDA-ARS's Scientific Manuscript database
The rapid advancement in high-throughput SNP genotyping technologies along with next generation sequencing (NGS) platforms has decreased the cost, improved the quality of large-scale genome surveys, and allowed specialty crops with limited genomic resources such as carrot (Daucus carota) to access t...
The U.S. EPA’s Endocrine Disruptor Screening Program (EDSP) and Office of Research and Development (ORD) are currently developing high throughput assays to screen chemicals that may alter the thyroid hormone pathway. One potential target in this pathway is the sodium iodide...
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics (TK). While HTS generates in vitro bioactivity d...
Inhibition of Retinoblastoma Protein Inactivation
2017-11-01
Subject terms: cell cycle, retinoblastoma protein, E2F transcription factor, high-throughput screening, drug discovery, x-ray crystallography. Keywords: retinoblastoma (Rb) pathway, E2F transcription factor, cancer, cell-cycle inhibition, high-throughput screening, fragment-based screening, x-ray crystallography.
High Throughput Sequence Analysis for Disease Resistance in Maize
USDA-ARS's Scientific Manuscript database
Preliminary results of a computational analysis of high throughput sequencing data from Zea mays and the fungus Aspergillus are reported. The Illumina Genome Analyzer was used to sequence RNA samples from two strains of Z. mays (Va35 and Mp313) collected over a time course as well as several specie...
High Throughput Prioritization for Integrated Toxicity Testing Based on ToxCast Chemical Profiling
The rational prioritization of chemicals for integrated toxicity testing is a central goal of the U.S. EPA’s ToxCast™ program (http://epa.gov/ncct/toxcast/). ToxCast includes a wide-ranging battery of over 500 in vitro high-throughput screening assays which in Phase I was used to...
The focus of this meeting is the SAP's review and comment on the Agency's proposed high-throughput computational model of androgen receptor pathway activity as an alternative to the current Tier 1 androgen receptor assay (OCSPP 890.1150: Androgen Receptor Binding Rat Prostate Cyt...
Evaluation of High-throughput Genotoxicity Assays Used in Profiling the US EPA ToxCast Chemicals
Three high-throughput screening (HTS) genotoxicity assays-GreenScreen HC GADD45a-GFP (Gentronix Ltd.), CellCiphr p53 (Cellumen Inc.) and CellSensor p53RE-bla (Invitrogen Corp.)-were used to analyze the collection of 320 predominantly pesticide active compounds being tested in Pha...
The US EPA's ToxCast™ program seeks to combine advances in high-throughput screening technology with methodologies from statistics and computer science to develop high-throughput decision support tools for assessing chemical hazard and risk. To develop new methods of analysis of...
Collaborative Core Research Program for Chemical-Biological Warfare Defense
2015-01-04
Discovery through High Throughput Screening (HTS) and Fragment-Based Drug Design (FBDD). Current pharmaceutical approaches involve drug discovery through high throughput screening (HTS) and a structural analysis and docking program generally known as fragment-based drug design (FBDD). The main advantage of using these approaches is that...
The U.S. EPA’s Endocrine Disruptor Screening Program (EDSP) and Office of Research and Development (ORD) are currently developing high throughput assays to screen chemicals that may alter the thyroid hormone pathway. One potential target in this pathway is the sodium iodide sympo...
USDA-ARS's Scientific Manuscript database
Production and recycling of recombinant sweetener peptides in industrial biorefineries involves the evaluation of large numbers of genes and proteins. High-throughput integrated robotic molecular biology platforms that have the capacity to rapidly synthesize, clone, and express heterologous gene ope...
USEPA’s ToxCast program has generated high-throughput bioactivity screening (HTS) data on thousands of chemicals. The ToxCast program has described and annotated the HTS assay battery with respect to assay design and target information (e.g., gene target). Recent stakeholder and ...
PLASMA PROTEIN PROFILING AS A HIGH THROUGHPUT TOOL FOR CHEMICAL SCREENING USING A SMALL FISH MODEL
Hudson, R. Tod, Michael J. Hemmer, Kimberly A. Salinas, Sherry S. Wilkinson, James Watts, James T. Winstead, Peggy S. Harris, Amy Kirkpatrick and Calvin C. Walker. In press. Plasma Protein Profiling as a High Throughput Tool for Chemical Screening Using a Small Fish Model (Abstra...
USDA-ARS's Scientific Manuscript database
Bermuda grass samples were examined by transmission electron microscopy and 28-30 nm spherical virus particles were observed. Total RNA from these plants was subjected to high throughput sequencing (HTS). The nearly full genome sequence of a previously uncharacterized Panicovirus was identified from...
USDA-ARS's Scientific Manuscript database
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect s...
Handheld Fluorescence Microscopy based Flow Analyzer.
Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva
2016-03-01
Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence-based clinical microscopy and diagnostics is a manual, labour-intensive and time-consuming procedure. This article outlines a cost-effective, high-throughput alternative to conventional fluorescence imaging techniques. With system-level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence microscopy-based imaging flow analyzer. Using this system we have imaged more than 2900 FITC-labeled fluorescent beads per minute, demonstrating the high-throughput characteristics of our flow analyzer in comparison to conventional fluorescence microscopy. Motion blur at high flow rates limits the achievable throughput of image-based flow analyzers. Here we address this issue by computationally deblurring the images and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and the flow speed, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.
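The computational deblurring step described above can be sketched as frequency-domain Wiener deconvolution against an assumed motion-blur kernel. This is a 1-D toy with a hypothetical 5-pixel uniform kernel and regularisation constant; the authors' actual kernel estimation and parameters are not given in the abstract.

```python
import numpy as np

# Toy 1-D "image line": a bright object crossing a dark field.
n = 64
signal = np.zeros(n)
signal[20:40] = 1.0

# Assumed uniform motion-blur kernel of 5 pixels (hypothetical extent).
blur_len = 5
kernel = np.zeros(n)
kernel[:blur_len] = 1.0 / blur_len

# Simulate blur as circular convolution in the frequency domain.
H = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Wiener deconvolution: a regularised inverse filter.
eps = 1e-3  # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

# Restoration should recover the sharp edges the blur smeared out.
err_blurred = np.abs(blurred - signal).mean()
err_restored = np.abs(restored - signal).mean()
```

In practice the kernel length would be tied to the measured flow speed and exposure time, and `eps` tuned to the camera's noise level.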
High-Throughput, Motility-Based Sorter for Microswimmers such as C. elegans
Yuan, Jinzhou; Zhou, Jessie; Raizen, David M.; Bau, Haim H.
2015-01-01
Animal motility varies with genotype, disease, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method is implemented in a simple microfluidic device capable of sorting thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriches for known C. elegans motility mutants. Furthermore, using this device, we isolate low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates C. elegans sleep. By performing genetic complementation tests, we demonstrate that our motility-based sorting device efficiently isolates mutants for the same gene identified by tedious visual inspection of behavior on an agar surface. Therefore, our motility-based sorter is capable of performing high throughput gene discovery approaches to investigate fundamental biological processes. PMID:26008643
Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines
NASA Astrophysics Data System (ADS)
Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.
2017-01-01
Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU ’circular memory’ data buffers that enable ready introduction of arbitrary functions into the processing path for ’streams’ of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
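The design Bifrost describes, algorithms packaged as black-box blocks in a directed graph connected by ring buffers, can be sketched in miniature. This is a hypothetical toy illustrating the pattern, not Bifrost's actual API.

```python
from collections import deque

class Block:
    """A black-box stage: reads from an input ring, writes to an output ring."""
    def __init__(self, func, inring, outring):
        self.func, self.inring, self.outring = func, inring, outring

    def run(self):
        while self.inring:
            self.outring.append(self.func(self.inring.popleft()))

# Bounded "ring buffers" connecting a source through two chained blocks.
ring_a, ring_b, ring_c = deque(maxlen=16), deque(maxlen=16), deque(maxlen=16)
ring_a.extend(range(8))  # raw "samples" from a source

pipeline = [
    Block(lambda x: x * 2, ring_a, ring_b),  # e.g. a gain stage
    Block(lambda x: x + 1, ring_b, ring_c),  # e.g. an offset correction
]
for block in pipeline:  # a real framework runs blocks concurrently
    block.run()

print(list(ring_c))  # [1, 3, 5, 7, 9, 11, 13, 15]
```

Because each block touches only its rings, stages can be swapped or re-wired without changing the others, which is the modularity the abstract highlights.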
Taggart, David J.; Camerlengo, Terry L.; Harrison, Jason K.; Sherrer, Shanen M.; Kshetry, Ajay K.; Taylor, John-Stephen; Huang, Kun; Suo, Zucai
2013-01-01
Cellular genomes are constantly damaged by endogenous and exogenous agents that covalently and structurally modify DNA to produce DNA lesions. Although most lesions are mended by various DNA repair pathways in vivo, a significant number of damage sites persist during genomic replication. Our understanding of the mutagenic outcomes derived from these unrepaired DNA lesions has been hindered by the low throughput of existing sequencing methods. Therefore, we have developed a cost-effective high-throughput short oligonucleotide sequencing assay that uses next-generation DNA sequencing technology for the assessment of the mutagenic profiles of translesion DNA synthesis catalyzed by any error-prone DNA polymerase. The vast amount of sequencing data produced were aligned and quantified by using our novel software. As an example, the high-throughput short oligonucleotide sequencing assay was used to analyze the types and frequencies of mutations upstream, downstream and at a site-specifically placed cis–syn thymidine–thymidine dimer generated individually by three lesion-bypass human Y-family DNA polymerases. PMID:23470999
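The per-position mutation tallying that such an assay's alignment software performs can be sketched as follows (hypothetical reads and reference, not the authors' software): for each reference position, count the non-reference bases observed across aligned short reads.

```python
from collections import Counter

def mutation_profile(reference, reads):
    """Tally base substitutions at each reference position.

    `reads` are full-length strings already aligned to `reference`.
    Returns, per position, the frequency of each non-reference base.
    """
    profile = []
    for i, ref_base in enumerate(reference):
        calls = Counter(read[i] for read in reads)
        total = sum(calls.values())
        profile.append({base: count / total
                        for base, count in calls.items() if base != ref_base})
    return profile

reference = "ACGT"
reads = ["ACGT", "ACGT", "ATGT", "ACGA"]  # one C->T and one T->A substitution
profile = mutation_profile(reference, reads)
print(profile[1])  # {'T': 0.25}
```

Applied around a site-specifically placed lesion, the same tally separates mutations upstream of, at, and downstream of the lesion, as in the assay described.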
Zmijan, Robert; Jonnalagadda, Umesh S.; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn
2015-01-01
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint. PMID:29456838
NASA Astrophysics Data System (ADS)
Watanabe, A.; Furukawa, H.
2018-04-01
The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that realises both high resolution and high throughput using a two-dimensional area sensor. For the spectral resolution, we obtained the interferogram over a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to accumulate the signal across vertical pixels, realising a higher throughput. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution, high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.
Na, Hong; Laver, John D.; Jeon, Jouhyun; Singh, Fateh; Ancevicius, Kristin; Fan, Yujie; Cao, Wen Xi; Nie, Kun; Yang, Zhenglin; Luo, Hua; Wang, Miranda; Rissland, Olivia; Westwood, J. Timothy; Kim, Philip M.; Smibert, Craig A.; Lipshitz, Howard D.; Sidhu, Sachdev S.
2016-01-01
Post-transcriptional regulation of mRNAs plays an essential role in the control of gene expression. mRNAs are regulated in ribonucleoprotein (RNP) complexes by RNA-binding proteins (RBPs) along with associated protein and noncoding RNA (ncRNA) cofactors. A global understanding of post-transcriptional control in any cell type requires identification of the components of all of its RNP complexes. We have previously shown that these complexes can be purified by immunoprecipitation using anti-RBP synthetic antibodies produced by phage display. To develop the large number of synthetic antibodies required for a global analysis of RNP complex composition, we have established a pipeline that combines (i) a computationally aided strategy for design of antigens located outside of annotated domains, (ii) high-throughput antigen expression and purification in Escherichia coli, and (iii) high-throughput antibody selection and screening. Using this pipeline, we have produced 279 antibodies against 61 different protein components of Drosophila melanogaster RNPs. Together with those produced in our low-throughput efforts, we have a panel of 311 antibodies for 67 RNP complex proteins. Tests of a subset of our antibodies demonstrated that 89% immunoprecipitate their endogenous target from embryo lysate. This panel of antibodies will serve as a resource for global studies of RNP complexes in Drosophila. Furthermore, our high-throughput pipeline permits efficient production of synthetic antibodies against any large set of proteins. PMID:26847261
Rizvi, Imran; Moon, Sangjun; Hasan, Tayyaba; Demirci, Utkan
2013-01-01
In vitro 3D cancer models that provide a more accurate representation of disease in vivo are urgently needed to improve our understanding of cancer pathology and to develop better cancer therapies. However, development of 3D models that are based on manual ejection of cells from micropipettes suffers from inherent limitations such as poor control over cell density, limited repeatability, low throughput, and, in the case of coculture models, lack of reproducible control over spatial distance between cell types (e.g., cancer and stromal cells). In this study, we build on a recently introduced 3D model in which human ovarian cancer (OVCAR-5) cells overlaid on Matrigel™ spontaneously form multicellular acini. We introduce a high-throughput automated cell printing system to bioprint a 3D coculture model using cancer cells and normal fibroblasts micropatterned on Matrigel™. Two cell types were patterned within a spatially controlled microenvironment (e.g., cell density, cell-cell distance) in a high-throughput and reproducible manner; both cell types remained viable during printing and continued to proliferate following patterning. This approach enables the miniaturization of an established macro-scale 3D culture model and would allow systematic investigation into the multiple unknown regulatory feedback mechanisms between tumor and stromal cells and provide a tool for high-throughput drug screening. PMID:21298805
Da Silva, Laeticia; Collino, Sebastiano; Cominetti, Ornella; Martin, Francois-Pierre; Montoliu, Ivan; Moreno, Sergio Oller; Corthesy, John; Kaput, Jim; Kussmann, Martin; Monteiro, Jacqueline Pontes; Guiraud, Seu Ping
2016-09-01
There is increasing interest in the profiling and quantitation of methionine pathway metabolites for health management research. Currently, several analytical approaches are required to cover metabolites and co-factors. We report the development and the validation of a method for the simultaneous detection and quantitation of 13 metabolites in red blood cells. The method, validated in a cohort of healthy human volunteers, shows a high level of accuracy and reproducibility. This high-throughput protocol provides a robust coverage of central metabolites and co-factors in one single analysis and in a high-throughput fashion. In large-scale clinical settings, the use of such an approach will significantly advance the field of nutritional research in health and disease.
High-Throughput Quantitative Lipidomics Analysis of Nonesterified Fatty Acids in Human Plasma.
Christinat, Nicolas; Morin-Rivron, Delphine; Masoodi, Mojgan
2016-07-01
We present a high-throughput, nontargeted lipidomics approach using liquid chromatography coupled to high-resolution mass spectrometry for quantitative analysis of nonesterified fatty acids. We applied this method to screen a wide range of fatty acids from medium-chain to very long-chain (8 to 24 carbon atoms) in human plasma samples. The method enables us to chromatographically separate branched-chain species from their straight-chain isomers as well as separate biologically important ω-3 and ω-6 polyunsaturated fatty acids. We used 51 fatty acid species to demonstrate the quantitative capability of this method with quantification limits in the nanomolar range; however, this method is not limited only to these fatty acid species. High-throughput sample preparation was developed and carried out on a robotic platform that allows extraction of 96 samples simultaneously within 3 h. This high-throughput platform was used to assess the influence of different types of human plasma collection and preparation on the nonesterified fatty acid profile of healthy donors. Use of the anticoagulants EDTA and heparin has been compared with simple clotting, and only limited changes have been detected in most nonesterified fatty acid concentrations.
High throughput electrospinning of high-quality nanofibers via an aluminum disk spinneret
NASA Astrophysics Data System (ADS)
Zheng, Guokuo
In this work, a simple and efficient needleless high-throughput electrospinning process using an aluminum disk spinneret with 24 holes is described. Electrospun mats produced by this setup consisted of fine (nano-sized) fibers of the highest quality, while the productivity (yield) was many times that obtained from conventional single-needle electrospinning. The goal was to produce scaled-up amounts of nanofibers of the same or better quality than those produced with a single-needle laboratory setup, under varying concentration, voltage, and working distance. The fiber mats produced were either polymer or ceramic (such as molybdenum trioxide nanofibers). Through experimentation, the optimum process conditions were determined to be a voltage of 24 kV and a distance to collector of 15 cm. More diluted solutions resulted in smaller-diameter fibers. Comparing the morphologies of the MoO3 nanofibers produced by the traditional and the high-throughput setups, it was found that they were very similar. Moreover, the nanofiber production rate is nearly 10 times that of traditional needle electrospinning. Thus, the high-throughput process has the potential to become an industrial nanomanufacturing process, and the materials processed by it may be used in filtration devices, in tissue engineering, and as sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yonggang (E-mail: wangyg@ustc.edu.cn); Hui, Cong; Liu, Chong; Xu, Chao
2016-04-01
The contribution of this paper is a new entropy extraction mechanism based on sampling phase jitter in ring oscillators, which makes a high-throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in the FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with maximum entropy and a fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are exploited to realize precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method, in a purely digital fashion, can provide high-speed, high-quality random bit sequences for a variety of embedded applications.
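The throughput scaling above can be checked with simple arithmetic; a minimal sketch, assuming (as stated) 64 identical generation channels behind the 7.68 Gbps aggregate output:

```python
# Per-channel bit rate implied by the reported aggregate throughput
channels = 64
total_bps = 7.68e9            # aggregate output of the FPGA prototype
per_channel_bps = total_bps / channels
print(per_channel_bps)        # 120 Mbps per circuit unit
```

This is consistent with the parallel-channel design: output throughput scales linearly with the number of instantiated generation units.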
Bond, Thomas E H; Sorenson, Alanna E; Schaeffer, Patrick M
2017-12-01
Biotin protein ligase (BirA) has been identified as an emerging drug target in Mycobacterium tuberculosis due to its essential metabolic role. Indeed, it is the only enzyme capable of covalently attaching biotin onto the biotin carboxyl carrier protein subunit of acetyl-CoA carboxylase. Despite recent interest in this protein, there is still a gap in cost-effective high-throughput screening assays for rapid identification of mycobacterial BirA-targeting inhibitors. We present for the first time the cloning, expression, and purification of mycobacterial GFP-tagged BirA and its application to the development of a high-throughput assay building on the principle of differential scanning fluorimetry of GFP-tagged proteins. The data obtained in this study reveal how biotin and ATP significantly increase the thermal stability (ΔTm = +16.5 °C) of M. tuberculosis BirA and lead to the formation of a high-affinity holoenzyme complex (Kobs = 7.7 nM). The new findings and the mycobacterial BirA high-throughput assay presented in this work could provide an efficient platform for future anti-tubercular drug discovery campaigns. Copyright © 2017 Elsevier GmbH. All rights reserved.
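The ΔTm figure above is extracted from differential scanning fluorimetry melt curves. A minimal sketch of that extraction, fitting a Boltzmann sigmoid to synthetic apo and holo melts (the curve parameters below are invented for illustration; this is not the authors' analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_min, F_max, Tm, slope):
    # Sigmoidal unfolding transition typical of a DSF melt curve
    return F_min + (F_max - F_min) / (1.0 + np.exp((Tm - T) / slope))

T = np.linspace(25, 95, 141)
apo = boltzmann(T, 0.05, 1.0, 52.0, 1.8)    # hypothetical apo-enzyme melt
holo = boltzmann(T, 0.05, 1.0, 68.5, 1.8)   # hypothetical + biotin/ATP melt

fit_apo, _ = curve_fit(boltzmann, T, apo, p0=[0, 1, 50, 2])
fit_holo, _ = curve_fit(boltzmann, T, holo, p0=[0, 1, 70, 2])
delta_Tm = fit_holo[2] - fit_apo[2]          # thermal shift on ligand binding
print(round(delta_Tm, 1))                    # ≈ +16.5 °C for this synthetic data
```

A positive shift in the fitted midpoint (Tm) on adding ligand is the stabilization signal the assay reads out.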
De Diego, Nuria; Fürst, Tomáš; Humplík, Jan F; Ugena, Lydia; Podlešáková, Kateřina; Spíchal, Lukáš
2017-01-01
High-throughput plant phenotyping platforms provide new possibilities for automated, fast scoring of several plant growth and development traits, followed over time using non-invasive sensors. Using Arabidopsis as a model offers important advantages for high-throughput screening, with the opportunity to extrapolate the results obtained to other crops of commercial interest. In this study we describe the development of a highly reproducible high-throughput Arabidopsis in vitro bioassay established using our OloPhen platform, suitable for analysis of rosette growth in multi-well plates. This method was successfully validated by a multivariate analysis of Arabidopsis rosette growth at different salt concentrations and its interaction with varying nutritional composition of the growth medium. Several traits such as changes in rosette area, relative growth rate, survival rate and homogeneity of the population are scored using fully automated RGB imaging and subsequent image analysis. The assay can be used for fast screening of the biological activity of chemical libraries, phenotypes of transgenic or recombinant inbred lines, or to search for potential quantitative trait loci. It is especially valuable for selecting genotypes or growth conditions that improve plant stress tolerance.
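Relative growth rate, one of the traits scored above, is conventionally computed from projected rosette areas at two time points; a minimal sketch (the areas below are hypothetical, not data from the study):

```python
import math

def relative_growth_rate(area_t1, area_t2, dt_days):
    # RGR = (ln A2 - ln A1) / (t2 - t1), in units of day^-1
    return (math.log(area_t2) - math.log(area_t1)) / dt_days

# Hypothetical rosette areas (mm^2) from RGB image analysis, 3 days apart
rgr = relative_growth_rate(12.0, 24.0, 3.0)
print(round(rgr, 3))   # a doubling over 3 days gives ln(2)/3 ≈ 0.231 day^-1
```

Using the log ratio makes the measure independent of the starting size, which is what allows fair comparison across genotypes and treatments.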
Inertial-ordering-assisted droplet microfluidics for high-throughput single-cell RNA-sequencing.
Moon, Hui-Sung; Je, Kwanghwi; Min, Jae-Woong; Park, Donghyun; Han, Kyung-Yeon; Shin, Seung-Ho; Park, Woong-Yang; Yoo, Chang Eun; Kim, Shin-Hyun
2018-02-27
Single-cell RNA-seq reveals the cellular heterogeneity inherent in a population of cells, which is very important in many clinical and research applications. Recent advances in droplet microfluidics have achieved automatic isolation, lysis, and labeling of single cells in droplet compartments without complex instrumentation. However, barcoding errors caused by multiple beads in a droplet during cell encapsulation, and the insufficient throughput that results from keeping the bead concentration low to avoid such multiplets, remain important challenges for precise and efficient expression profiling of single cells. In this study, we developed a new droplet-based microfluidic platform that significantly improves throughput while reducing barcoding errors through deterministic encapsulation of inertially ordered beads. Highly concentrated beads carrying oligonucleotide barcodes were spontaneously ordered in a spiral channel by the inertial effect and were in turn encapsulated in droplets one by one, while cells were simultaneously encapsulated in the same droplets. The deterministic encapsulation of beads resulted in a high fraction of single-bead droplets and rare multiple-bead droplets even when the bead concentration was increased to 1000 μl⁻¹, which diminished barcoding errors and enabled accurate high-throughput barcoding. We successfully validated our device with single-cell RNA-seq. In addition, we found that multiple-bead droplets, generated using a normal Drop-Seq device with a high concentration of beads, underestimated transcript numbers and overestimated cell numbers. This accurate high-throughput platform can expand the capability and practicality of Drop-Seq in single-cell analysis.
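The trade-off described above follows from Poisson loading statistics: under random encapsulation the bead concentration sets the mean occupancy λ per droplet, so multi-bead droplets can only be suppressed by keeping λ, and hence throughput, low. A back-of-envelope sketch with an assumed λ (illustrative, not a value from the paper):

```python
import math

def poisson(k, lam):
    # Probability of k beads in a droplet under random (Poisson) loading
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 0.1  # assumed mean beads per droplet for dilute random loading
p_empty = poisson(0, lam)
p_single = poisson(1, lam)
p_multi = 1.0 - p_empty - p_single
# Random loading: only ~9% useful single-bead droplets, yet already ~0.5%
# multi-bead droplets. Inertial ordering breaks this coupling, which is what
# permits the much higher bead concentrations reported above.
print(round(p_single, 4), round(p_multi, 4))
```

Deterministic one-by-one encapsulation sidesteps the Poisson limit entirely, so singlet fraction no longer trades off against bead concentration.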
Chatterjee, Anirban; Mirer, Paul L; Zaldivar Santamaria, Elvira; Klapperich, Catherine; Sharon, Andre; Sauer-Budge, Alexis F
2010-06-01
The life science and healthcare communities have been redefining the importance of ribonucleic acid (RNA) through the study of small molecule RNA (in RNAi/siRNA technologies), micro RNA (in cancer research and stem cell research), and mRNA (gene expression analysis for biologic drug targets). Research in this field increasingly requires efficient and high-throughput isolation techniques for RNA. Currently, several commercial kits are available for isolating RNA from cells. Although the quality and quantity of RNA yielded from these kits are sufficiently good for many purposes, limitations exist in terms of extraction efficiency from small cell populations and the ability to automate the extraction process. Traditionally, automating a process decreases the cost and personnel time while simultaneously increasing the throughput and reproducibility. As the RNA field matures, new methods for automating its extraction, especially from low cell numbers and in high throughput, are needed to achieve these improvements. The technology presented in this article is a step toward this goal. The method is based on a solid-phase extraction technology using a porous polymer monolith (PPM). A novel cell lysis approach and a larger binding surface throughout the PPM extraction column ensure a high yield from small starting samples, increasing sensitivity and reducing indirect costs in cell culture and sample storage. The method ensures a fast and simple procedure for RNA isolation from eukaryotic cells, with a high yield both in terms of quality and quantity. The technique is amenable to automation and streamlined workflow integration, with possible miniaturization of the sample handling process making it suitable for high-throughput applications.
Ultra-high-throughput Production of III-V/Si Wafer for Electronic and Photonic Applications
Geum, Dae-Myeong; Park, Min-Su; Lim, Ju Young; Yang, Hyun-Duk; Song, Jin Dong; Kim, Chang Zoo; Yoon, Euijoon; Kim, SangHyeon; Choi, Won Jun
2016-01-01
Si-based integrated circuits have been intensively developed over the past several decades through ultimate device scaling. However, Si technology has reached the physical limitations of scaling. These limitations have fuelled the search for alternative active materials (for transistors) and the introduction of optical interconnects (called "Si photonics"). A series of attempts to circumvent the limits of Si technology are based on the use of III-V compound semiconductors due to their superior properties, such as high electron mobility and a direct bandgap. To use their physical properties on a Si platform, the formation of high-quality III-V films on Si (III-V/Si) is the basic technology; however, implementing this technology in a high-throughput process is not easy. Here, we report new concepts for ultra-high-throughput heterogeneous integration of high-quality III-V films on Si using wafer bonding and the epitaxial lift-off (ELO) technique. We describe the ultra-fast ELO and also the re-use of the III-V donor wafer after III-V/Si formation. These approaches provide ultra-high-throughput fabrication of III-V/Si substrates with a high-quality film, which leads to a dramatic cost reduction. As proof-of-concept devices, this paper demonstrates GaAs-based high electron mobility transistors (HEMTs), solar cells, and hetero-junction phototransistors on Si substrates. PMID:26864968
Wang, Xixian; Ren, Lihui; Su, Yetian; Ji, Yuetong; Liu, Yaoping; Li, Chunyu; Li, Xunrong; Zhang, Yi; Wang, Wei; Hu, Qiang; Han, Danxiang; Xu, Jian; Ma, Bo
2017-11-21
Raman-activated cell sorting (RACS) has attracted increasing interest, yet throughput remains one major factor limiting its broader application. Here we present an integrated Raman-activated droplet sorting (RADS) microfluidic system for functional screening of live cells in a label-free and high-throughput manner, employing the AXT-synthesizing industrial microalga Haematococcus pluvialis (H. pluvialis) as a model. Raman microspectroscopy analysis of individual cells is carried out prior to their microdroplet encapsulation, which is then directly coupled to DEP-based droplet sorting. To validate the system, H. pluvialis cells containing different levels of AXT were mixed and underwent RADS. AXT-hyperproducing cells were sorted with an accuracy of 98.3%, an eightfold enrichment ratio, and a throughput of ∼260 cells/min. Of the RADS-sorted cells, 92.7% remained alive and able to proliferate, equivalent to the unsorted cells. Thus, RADS achieves a much higher throughput than existing RACS systems, preserves the vitality of cells, and facilitates seamless coupling with downstream manipulations such as single-cell sequencing and cultivation.
High throughput imaging cytometer with acoustic focussing.
Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter
2015-10-31
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells, permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however, a longer device would remove this constraint.
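The throughput figures above imply a per-frame particle count far beyond what a 1D cytometer can image; a quick check derived from the stated rates:

```python
# Particles captured in each camera frame at the demonstrated rates
beads_per_second = 208_000   # demonstrated throughput at 80 fps
fps = 80
beads_per_frame = beads_per_second / fps
print(beads_per_frame)       # 2600 particles imaged in every frame
```

Assembling particles into a planar sheet is what lets a single camera frame carry thousands of particles at once, rather than one particle per detection event.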
NASA Astrophysics Data System (ADS)
Yu, Hao Yun; Liu, Chun-Hung; Shen, Yu Tian; Lee, Hsuan-Ping; Tsai, Kuen Yu
2014-03-01
Line edge roughness (LER) influencing the electrical performance of circuit components is a key challenge for electron-beam lithography (EBL) due to the continuous scaling of technology feature sizes. Controlling LER within an acceptable tolerance that satisfies International Technology Roadmap for Semiconductors requirements while achieving high throughput has become a challenging issue. Although a lower dosage and more-sensitive resist can be used to improve throughput, they would result in serious LER-related problems because of the increasing relative fluctuation in the incident positions of electrons. Directed self-assembly (DSA) is a promising technique to relax LER-related pattern fidelity (PF) requirements because of its self-healing ability, which may benefit throughput. To quantify the potential throughput improvement in EBL from introducing DSA for post-patterning healing, rigorous numerical methods are proposed to simultaneously maximize throughput by adjusting the writing parameters of EBL systems subject to relaxed LER-related PF requirements. A fast, continuous model for parameter sweeping and a hybrid model for more accurate patterning prediction are employed for the patterning simulation. The tradeoff between throughput and DSA self-healing ability is investigated. Preliminary results indicate that significant throughput improvements are achievable at certain process conditions.
Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang
2018-04-17
Digital cross-correlation is central to many applications, including but not limited to digital image processing, satellite navigation and remote sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we present a high-throughput digital cross-correlator capable of processing 1-bit digitized streams at rates of up to 2 GHz, simultaneously on 64 channels, i.e., approximately 4 trillion correlation and accumulation operations per second. In order to achieve higher throughput, we have focused on frequency-based partitioning of our design and tried to minimize and localize high-frequency operations. This correlator is designed for a passive millimeter wave imager intended for the detection of contraband items concealed on the human body. The goals are to increase the system bandwidth, achieve video-rate imaging, improve sensitivity and reduce the size. The design methodology is detailed in subsequent sections, elaborating the techniques enabling high throughput. The design is verified for a Xilinx Kintex UltraScale device in simulation, and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput as compared to our baseline design, while the correlator successfully meets the functional requirements.
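The stated ~4 trillion operations per second is consistent with correlating every pair of the 64 channels at the 2 GHz sample rate; a quick check of that arithmetic, together with the XNOR-plus-popcount kernel that 1-bit correlation typically reduces to (an illustrative Python sketch, not the FPGA implementation):

```python
# All-pairs correlation load for the stated channel count and sample rate
n_channels, sample_rate = 64, 2e9
pairs = n_channels * (n_channels - 1) // 2    # 2016 channel pairs
ops_per_second = pairs * sample_rate          # ≈ 4.03e12 correlate-accumulates/s

def corr_1bit(a: int, b: int, width: int) -> int:
    """Correlate two 1-bit sample words: XNOR marks agreeing bits, popcount sums them."""
    agree = ~(a ^ b) & ((1 << width) - 1)
    return bin(agree).count("1")
```

Reducing multiply-accumulate to XNOR-and-count is what makes thousands of parallel correlation lanes feasible in FPGA fabric.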
USDA-ARS?s Scientific Manuscript database
Recent developments in high-throughput sequencing technology have made low-cost sequencing an attractive approach for many genome analysis tasks. Increasing read lengths, improving quality and the production of increasingly larger numbers of usable sequences per instrument-run continue to make whole...
USDA-ARS?s Scientific Manuscript database
High-throughput phenotyping platforms (HTPPs) provide novel opportunities to more effectively dissect the genetic basis of drought-adaptive traits. This genome-wide association study (GWAS) compares the results obtained with two Unmanned Aerial Vehicles (UAVs) and a ground-based platform used to mea...
USDA-ARS?s Scientific Manuscript database
The ability to rapidly screen a large number of individuals is the key to any successful plant breeding program. One of the primary bottlenecks in high throughput screening is the preparation of DNA samples, particularly the quantification and normalization of samples for downstream processing. A ...
Little information is available regarding the potential for many commercial chemicals to induce developmental toxicity. The mESC Adherent Cell Differentiation and Cytotoxicity (ACDC) assay is a high-throughput screen used to close this data gap. Thus, ToxCast™ Phase I chemicals wer...
USDA-ARS?s Scientific Manuscript database
The Axiom® IStraw90 SNP (single nucleotide polymorphism) array was developed to enable high-throughput genotyping in allo-octoploid cultivated strawberry (Fragaria ×ananassa). However, high cost ($80-105 per sample) limits throughput for certain applications. On average the IStraw90 has yielded 50% ...
Confirmation of Test Chemicals Identified by a High-Throughput Screen (HTPS) as Sodium Iodide Symporter (NIS) Inhibitors in FRTL-5 Model S. Laws1, A. Buckalew1, J. Wang2, D. Hallinger1, A. Murr1, and T. Stoker1. 1Endocrin...
Adverse outcome pathways (AOP) link known population outcomes to a molecular initiating event (MIE) that can be quantified using high-throughput in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires consideration of exposure and absorption,...
Inhibition of Retinoblastoma Protein Inactivation
2016-09-01
Keywords: Retinoblastoma (Rb) pathway, retinoblastoma protein, E2F transcription factor, cancer, cell-cycle inhibition, activation, modulation, inhibition, high-throughput screening, fragment-based screening, drug discovery, x-ray crystallography. A method was developed to perform fragment-based screening by x-ray crystallography.
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environ...
USDA-ARS?s Scientific Manuscript database
A high-throughput transformation system previously developed in our laboratory was used for the regeneration of transgenic plum plants without the use of antibiotic selection. The system was first tested with two experimental constructs, pGA482GGi and pCAMBIAgfp94(35S), that contain selective marke...
Although progress has been made with HTS (high throughput screening) in profiling biological activity (e.g., EPA’s ToxCast™), challenges arise interpreting HTS results in the context of adversity & converting HTS assay concentrations to equivalent human doses for the broad domain...
Some pharmaceuticals and environmental chemicals bind the thyroid peroxidase (TPO) enzyme and disrupt thyroid hormone production. The potential for TPO inhibition is a function of both the binding affinity and concentration of the chemical within the thyroid gland. The former can...
Gene expression with ontologic enrichment and connectivity mapping tools is widely used to infer modes of action (MOA) for therapeutic drugs. Despite progress in high-throughput (HT) genomic systems, strategies suitable to identify industrial chemical MOA are needed. The L1000 is...
Microarray profiling of chemical-induced effects is being increasingly used in medium and high-throughput formats. In this study, we describe computational methods to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), ...
Here, we present results of an approach for risk-based prioritization using the Threshold of Toxicological Concern (TTC) combined with high-throughput exposure (HTE) modelling. We started with 7968 chemicals with calculated population median oral daily intakes characterized by an...
USDA-ARS?s Scientific Manuscript database
We assessed the genetic diversity and population structure among 148 cultivated lettuce (Lactuca sativa L.) accessions using the high-throughput GoldenGate assay and 384 EST (Expressed Sequence Tag)-derived SNP (single nucleotide polymorphism) markers. A custom OPA (Oligo Pool All), LSGermOPA was fo...
Efficient and accurate adverse outcome pathway (AOP) based high-throughput screening (HTS) methods use a systems biology based approach to computationally model in vitro cellular and molecular data for rapid chemical prioritization; however, not all HTS assays are grounded by rel...
Using Adverse Outcome Pathway Analysis to Guide Development of High-Throughput Screening Assays for Thyroid-Disruptors Katie B. Paul1,2, Joan M. Hedge2, Daniel M. Rotroff4, Kevin M. Crofton4, Michael W. Hornung3, Steven O. Simmons2 1Oak Ridge Institute for Science Education Post...
A novel assay for monoacylglycerol hydrolysis suitable for high-throughput screening.
Brengdahl, Johan; Fowler, Christopher J
2006-12-01
A simple assay for monoacylglycerol hydrolysis suitable for high-throughput screening is described. The assay uses [³H]2-oleoylglycerol as substrate, with the tritium label in the glycerol part of the molecule, and the use of phenyl sepharose gel to separate the hydrolyzed product ([³H]glycerol) from substrate. Using cytosolic fractions derived from rat cerebella as a source of hydrolytic activity, the assay gives the appropriate pH profile and sensitivity to inhibition with compounds known to inhibit hydrolysis of this substrate. The assay could also be adapted to a 96-well plate format, using C6 cells as the source of hydrolytic activity. Thus the assay is simple and appropriate for high-throughput screening of inhibitors of monoacylglycerol hydrolysis.
Method and apparatus for maximizing throughput of indirectly heated rotary kilns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coates, Ralph L; Smoot, Douglas L.; Hatfield, Kent E
An apparatus and method for achieving improved throughput capacity of indirectly heated rotary kilns used to produce pyrolysis products such as shale oils or coal oils that are susceptible to decomposition by high kiln wall temperatures is disclosed. High throughput is achieved by firing the kiln such that optimum wall temperatures are maintained beginning at the point where the materials enter the heating section of the kiln and extending to the point where the materials leave the heated section. Multiple high velocity burners are arranged such that combustion products directly impact on the area of the kiln wall covered internally by the solid material being heated. Firing rates for the burners are controlled to maintain optimum wall temperatures.
From cancer genomes to cancer models: bridging the gaps
Baudot, Anaïs; Real, Francisco X.; Izarzugaza, José M. G.; Valencia, Alfonso
2009-01-01
Cancer genome projects are now being expanded in an attempt to provide complete landscapes of the mutations that exist in tumours. Although the importance of cataloguing genome variations is well recognized, there are obvious difficulties in bridging the gaps between high-throughput resequencing information and the molecular mechanisms of cancer evolution. Here, we describe the current status of the high-throughput genomic technologies, and the current limitations of the associated computational analysis and experimental validation of cancer genetic variants. We emphasize how the current cancer-evolution models will be influenced by the high-throughput approaches, in particular through efforts devoted to monitoring tumour progression, and how, in turn, the integration of data and models will be translated into mechanistic knowledge and clinical applications. PMID:19305388
Gerrard, Gareth; Valgañón, Mikel; Foong, Hui En; Kasperaviciute, Dalia; Iskander, Deena; Game, Laurence; Müller, Michael; Aitman, Timothy J; Roberts, Irene; de la Fuente, Josu; Foroni, Letizia; Karadimitris, Anastasios
2013-08-01
Diamond-Blackfan anaemia (DBA) is caused by inactivating mutations in ribosomal protein (RP) genes, with mutations in 13 of the 80 RP genes accounting for 50-60% of cases. The remaining 40-50% of cases may harbour mutations in one of the remaining RP genes, but the very low frequencies make conventional genetic screening challenging. We therefore applied custom enrichment technology combined with high-throughput sequencing to screen all 80 RP genes. Using this approach, we identified and validated inactivating mutations in 15/17 (88%) DBA patients. Target enrichment combined with high-throughput sequencing is a robust and improved methodology for the genetic diagnosis of DBA. © 2013 John Wiley & Sons Ltd.
The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).
Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi
2018-01-01
Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
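The repeatability figure quoted above is an intra-class correlation over repeated swim trials; a minimal one-way ICC sketch over a subjects × trials matrix (the numbers below are synthetic, not data from the study):

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1) for an n_subjects x k_trials matrix."""
    n, k = x.shape
    grand = x.mean()
    # Mean squares between subjects and within subjects (across trials)
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical maximal swim speeds (cm/s) for 4 fish, 2 trials each
speeds = np.array([[32.0, 30.0], [25.0, 27.0], [40.0, 38.0], [28.0, 29.0]])
print(round(icc_oneway(speeds), 2))   # close to 1 when trials rank fish consistently
```

ICC near 1 means trial-to-trial variation is small relative to fish-to-fish variation; the 0.34-0.41 reported above indicates moderate repeatability.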
Optimizing multi-dimensional high throughput screening using zebrafish.
Truong, Lisa; Bugel, Sean M; Chlebowski, Anna; Usenko, Crystal Y; Simonich, Michael T; Simonich, Staci L Massey; Tanguay, Robert L
2016-10-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. Copyright © 2016 Elsevier Inc. All rights reserved.
Overcoming bias and systematic errors in next generation sequencing data.
Taub, Margaret A; Corrada Bravo, Hector; Irizarry, Rafael A
2010-12-10
Considerable time and effort has been spent in developing analysis and quality assessment methods to allow the use of microarrays in a clinical setting. As is the case for microarrays and other high-throughput technologies, data from new high-throughput sequencing technologies are subject to technological and biological biases and systematic errors that can impact downstream analyses. Only when these issues can be readily identified and reliably adjusted for will clinical applications of these new technologies be feasible. Although much work remains to be done in this area, we describe consistently observed biases that should be taken into account when analyzing high-throughput sequencing data. In this article, we review current knowledge about these biases, discuss their impact on analysis results, and propose solutions.
Loeffler 4.0: Diagnostic Metagenomics.
Höper, Dirk; Wylezich, Claudia; Beer, Martin
2017-01-01
A new world of possibilities for "virus discovery" was opened up when high-throughput sequencing became available in the last decade. While metagenomic analysis was scientifically established before the era of high-throughput sequencing, the availability of the first second-generation sequencers was the kick-off for diagnosticians to use sequencing for the detection of novel pathogens. Today, diagnostic metagenomics is becoming the standard procedure for the detection and genetic characterization of new viruses or novel virus variants. Here, we provide an overview of the technical considerations of high-throughput sequencing-based diagnostic metagenomics, together with selected examples of "virus discovery" for animal diseases or zoonoses and metagenomics for food safety or basic veterinary research. © 2017 Elsevier Inc. All rights reserved.
The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio)
Usui, Takuji; Noble, Daniel W.A.; O’Dea, Rose E.; Fangmeier, Melissa L.; Lagisz, Malgorzata; Hesselson, Daniel
2018-01-01
Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34–0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes. PMID:29372124
Method and apparatus for maximizing throughput of indirectly heated rotary kilns
Coates, Ralph L; Smoot, L. Douglas; Hatfield, Kent E
2012-10-30
An apparatus and method for achieving improved throughput capacity of indirectly heated rotary kilns used to produce pyrolysis products such as shale oils or coal oils that are susceptible to decomposition by high kiln wall temperatures is disclosed. High throughput is achieved by firing the kiln such that optimum wall temperatures are maintained beginning at the point where the materials enter the heating section of the kiln and extending to the point where the materials leave the heated section. Multiple high velocity burners are arranged such that combustion products directly impact on the area of the kiln wall covered internally by the solid material being heated. Firing rates for the burners are controlled to maintain optimum wall temperatures.
Ernstsen, Christina L; Login, Frédéric H; Jensen, Helene H; Nørregaard, Rikke; Møller-Jensen, Jakob; Nejsum, Lene N
2017-10-01
Quantification of intracellular bacterial colonies is useful in strategies directed against bacterial attachment, subsequent cellular invasion and intracellular proliferation. An automated, high-throughput microscopy-method was established to quantify the number and size of intracellular bacterial colonies in infected host cells (Detection and quantification of intracellular bacterial colonies by automated, high-throughput microscopy, Ernstsen et al., 2017 [1]). The infected cells were imaged with a 10× objective and number of intracellular bacterial colonies, their size distribution and the number of cell nuclei were automatically quantified using a spot detection-tool. The spot detection-output was exported to Excel, where data analysis was performed. In this article, micrographs and spot detection data are made available to facilitate implementation of the method.
Cernak, Tim; Gesmundo, Nathan J; Dykstra, Kevin; Yu, Yang; Wu, Zhicai; Shi, Zhi-Cai; Vachal, Petr; Sperbeck, Donald; He, Shuwen; Murphy, Beth Ann; Sonatore, Lisa; Williams, Steven; Madeira, Maria; Verras, Andreas; Reiter, Maud; Lee, Claire Heechoon; Cuff, James; Sherer, Edward C; Kuethe, Jeffrey; Goble, Stephen; Perrotto, Nicholas; Pinto, Shirly; Shen, Dong-Ming; Nargund, Ravi; Balkovec, James; DeVita, Robert J; Dreher, Spencer D
2017-05-11
Miniaturization and parallel processing play an important role in the evolution of many technologies. We demonstrate the application of miniaturized high-throughput experimentation methods to resolve synthetic chemistry challenges on the frontlines of a lead optimization effort to develop diacylglycerol acyltransferase (DGAT1) inhibitors. Reactions were performed on ∼1 mg scale using glass microvials, providing a miniaturized high-throughput experimentation capability that was used to study a challenging SNAr reaction. The availability of robust synthetic chemistry conditions discovered in these miniaturized investigations enabled the development of structure-activity relationships that ultimately led to the discovery of soluble, selective, and potent inhibitors of DGAT1.
The next generation CdTe technology- Substrate foil based solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferekides, Chris
The main objective of this project was the development of one of the most promising photovoltaic (PV) materials, CdTe, into a versatile, cost-effective, and high-throughput technology, by demonstrating substrate devices on foil substrates using high-throughput fabrication conditions. The typical CdTe cell is of the superstrate configuration, where the solar cell is fabricated on a glass superstrate by the sequential deposition of a TCO, an n-type heterojunction partner, a p-CdTe absorber, and a back contact. Large glass modules are heavy and present significant challenges during manufacturing (uniform heating, etc.). If a substrate CdTe cell could be developed (the main goal of this project), a roll-to-roll high-throughput technology could be developed.
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Yusuke; Hiraki, Masahiko; Sasajima, Kumiko
2010-06-23
Recent advances in high-throughput techniques for macromolecular crystallography have highlighted the importance of structure-based drug design (SBDD), and the demand for synchrotron use by pharmaceutical researchers has increased. Thus, in collaboration with Astellas Pharma Inc., we have constructed a new high-throughput macromolecular crystallography beamline, AR-NE3A, which is dedicated to SBDD. At AR-NE3A, a photon flux up to three times higher than that at the existing high-throughput beamlines at the Photon Factory, AR-NW12A and BL-5A, can be realized at the same sample positions. Installed in the experimental hutch are a high-precision diffractometer, a fast-readout, high-gain CCD detector, and a sample-exchange robot capable of handling more than two hundred cryo-cooled samples stored in a Dewar. To facilitate the high-throughput data collection required for pharmaceutical research, fully automated data collection and processing systems have been developed. Thus, sample exchange, centering, data collection, and data processing are carried out automatically according to the user's pre-defined schedule. Although Astellas Pharma Inc. has priority access to AR-NE3A, the remaining beam time is allocated to general academic and other industrial users.
proBAMconvert: A Conversion Tool for proBAM/proBed.
Olexiouk, Volodimir; Menschaert, Gerben
2017-07-07
The introduction of the new standard formats proBAM and proBed improves the integration of genomic and proteomic information, thus aiding proteogenomics applications. These formats enable peptide spectrum matches (PSMs) to be stored, inspected, and analyzed within the context of the genome. However, an easy-to-use and transparent tool to convert mass spectrometry identification files to these new formats is indispensable. proBAMconvert enables the conversion of common identification file formats (mzIdentML, mzTab, and pepXML) to proBAM/proBed through an intuitive interface. Furthermore, proBAMconvert can output information at both the PSM and peptide levels and offers a command-line interface alongside the graphical user interface. Detailed documentation and a completely worked-out tutorial are available at http://probam.biobix.be .
Workshop Background and Summary of Webinars (IVIVE workshop)
Toxicokinetics (TK) provides a bridge between hazard and exposure by predicting tissue concentrations due to exposure. Higher-throughput toxicokinetics (HTTK) appears to provide essential data to establish context for in vitro bioactivity data obtained through high throughput ...
Shahini, Mehdi; Yeow, John T W
2011-08-12
We report on the enhancement of electrical cell lysis using carbon nanotubes (CNTs). Electrical cell lysis systems are widely utilized in microchips as they are well suited to integration into lab-on-a-chip devices. However, cell lysis based on electrical mechanisms has high voltage requirements. Here, we demonstrate that by incorporating CNTs into microfluidic electrolysis systems, the required voltage for lysis is reduced by half and the lysis throughput at low voltages is improved by ten times, compared to non-CNT microchips. In our experiment, E. coli cells are lysed while passing through an electric field in a microchannel. Based on the lightning rod effect, the electric field strengthened at the tip of the CNTs enhances cell lysis at lower voltage and higher throughput. This approach enables easy integration of cell lysis with other on-chip high-throughput sample-preparation processes.
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829
High-throughput Titration of Luciferase-expressing Recombinant Viruses
Garcia, Vanessa; Krishnan, Ramya; Davis, Colin; Batenchuk, Cory; Le Boeuf, Fabrice; Abdelbary, Hesham; Diallo, Jean-Simon
2014-01-01
Standard plaque assays to determine infectious viral titers can be time consuming, are not amenable to a high volume of samples, and cannot be done with viruses that do not form plaques. As an alternative to plaque assays, we have developed a high-throughput titration method that allows for the simultaneous titration of a high volume of samples in a single day. This approach involves infection of the samples with a Firefly luciferase tagged virus, transfer of the infected samples onto an appropriate permissive cell line, subsequent addition of luciferin, reading of plates in order to obtain luminescence readings, and finally the conversion from luminescence to viral titers. The assessment of cytotoxicity using a metabolic viability dye can be easily incorporated in the workflow in parallel and provide valuable information in the context of a drug screen. This technique provides a reliable, high-throughput method to determine viral titers as an alternative to a standard plaque assay. PMID:25285536
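The final step described above, converting luminescence readings into viral titers, is typically done against a standard curve built from samples of known titer. The sketch below is an illustration only, assuming a log-log linear relationship between relative light units (RLU) and titer; the function names and model are assumptions, not the protocol's published analysis:

```python
import numpy as np

def fit_standard_curve(known_titers, luminescence):
    """Fit a log-log linear standard curve: log10(titer) = a*log10(RLU) + b.

    known_titers: infectious titers (e.g., PFU/mL) from a conventional assay.
    luminescence: matched RLU readings for the same standards.
    """
    a, b = np.polyfit(np.log10(luminescence), np.log10(known_titers), 1)
    return a, b

def luminescence_to_titer(rlu, a, b):
    """Convert RLU readings from unknown samples to estimated titers."""
    return 10 ** (a * np.log10(np.asarray(rlu, dtype=float)) + b)
```

In practice, readings outside the linear dynamic range of the plate reader would need to be excluded before fitting the curve.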
3D pulsed laser-triggered high-speed microfluidic fluorescence-activated cell sorter
Chen, Yue; Wu, Ting-Hsiang; Kung, Yu-Chun; Teitell, Michael A.; Chiou, Pei-Yu
2014-01-01
We report a 3D microfluidic pulsed laser-triggered fluorescence-activated cell sorter capable of sorting at a throughput of 23,000 cells sec−1 with 90% purity in high-purity mode and at a throughput of 45,000 cells sec−1 with 45% purity in enrichment mode in one stage and in a single channel. This performance is realized by exciting laser-induced cavitation bubbles in a 3D PDMS microfluidic channel to generate high-speed liquid jets that deflect detected fluorescent cells and particles focused by 3D sheath flows. The ultrafast switching mechanism (20 μsec complete on-off cycle), small liquid jet perturbation volume, and three-dimensional sheath flow focusing for accurate timing control of fast (1.5 m sec−1) passing cells and particles are three critical factors enabling high-purity sorting at high-throughput in this sorter. PMID:23844418
Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin
2015-01-01
Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard national institute of standards and technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283
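A conditioning step of the kind described distills unbiased seed bits from raw physical noise before they feed a downstream RNG. As a generic illustration only, using the classic von Neumann extractor rather than the specific conditioning algorithm used in PUFKEY:

```python
def von_neumann_extract(bits):
    """Debias a raw bit stream: for each non-overlapping pair of bits,
    (0,1) emits 0, (1,0) emits 1, and equal pairs are discarded.
    If input bits are independent, the output bits are unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

The trade-off is throughput: an input stream with bias p yields on average p(1-p) output bits per pair, which is one reason hardware designs pair a slow true-random seed source with a faster downstream generator.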
NASA Technical Reports Server (NTRS)
Lee, Jonathan A.
2005-01-01
High-throughput measurement techniques are reviewed for solid phase transformation in materials produced by combinatorial methods, which are highly efficient concepts for fabricating a large variety of material libraries with different compositional gradients on a single wafer. Combinatorial methods hold high potential for reducing the time and costs associated with the development of new materials, compared to time-consuming and labor-intensive conventional methods that test large batches of material, one composition at a time. These high-throughput techniques can be automated to rapidly capture and analyze data, using the entire material library on a single wafer, thereby accelerating the pace of materials discovery and knowledge generation for solid phase transformations. The review covers experimental techniques that are applicable to inorganic materials such as shape memory alloys, graded materials, metal hydrides, ferric materials, semiconductors, and industrial alloys.
Kizaki, Seiichiro; Chandran, Anandhakumar; Sugiyama, Hiroshi
2016-03-02
Tet (ten-eleven translocation) family proteins have the ability to oxidize 5-methylcytosine (mC) to 5-hydroxymethylcytosine (hmC), 5-formylcytosine (fC), and 5-carboxycytosine (caC). However, the oxidation reaction of Tet is not completely understood. Evaluation of genomic-level epigenetic changes by Tet proteins requires unbiased identification of their highly selective oxidation sites. In this study, we used high-throughput sequencing to investigate the sequence specificity of mC oxidation by Tet1. A 6.6×10^4-member mC-containing random DNA-sequence library was constructed. The library was subjected to Tet-reactive pulldown followed by high-throughput sequencing. Analysis of the obtained sequence data identified the Tet1-reactive sequences. We identified mCpG as a highly reactive sequence for the Tet1 protein. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ramanathan, Ragu; Ghosal, Anima; Ramanathan, Lakshmi; Comstock, Kate; Shen, Helen; Ramanathan, Dil
2018-05-01
We evaluated HPLC-high-resolution mass spectrometry (HPLC-HRMS) full-scan analysis with polarity switching for increasing the throughput of a human in vitro cocktail drug-drug interaction assay. Microsomal incubates were analyzed using a high-resolution, high-mass-accuracy Q-Exactive mass spectrometer to collect integrated qualitative and quantitative (qual/quant) data. Within the assay, the positive-to-negative polarity-switching HPLC-HRMS method allowed quantification of eight and two probe compounds in the positive and negative ionization modes, respectively, while monitoring for LOR and its metabolites. LOR inhibited CYP2C19 and showed higher activity for CYP2D6, CYP2E1 and CYP3A4. Overall, LC-HRMS-based nontargeted full-scan quantitation improved the throughput of the in vitro cocktail drug-drug interaction assay.
A high-throughput microRNA expression profiling system.
Guo, Yanwen; Mastriano, Stephen; Lu, Jun
2014-01-01
As small noncoding RNAs, microRNAs (miRNAs) regulate diverse biological functions, including physiological and pathological processes. The expression and deregulation of miRNA levels contain rich information with diagnostic and prognostic relevance and can reflect pharmacological responses. The increasing interest in miRNA-related research demands global miRNA expression profiling on large numbers of samples. We describe here a robust protocol that supports high-throughput sample labeling and detection on hundreds of samples simultaneously. This method employs 96-well-based miRNA capturing from total RNA samples and on-site biochemical reactions, coupled with bead-based detection in 96-well format for hundreds of miRNAs per sample. With low-cost, high-throughput, high detection specificity, and flexibility to profile both small and large numbers of samples, this protocol can be adapted in a wide range of laboratory settings.
tcpl: the ToxCast pipeline for high-throughput screening data.
Filer, Dayne L; Kothiya, Parth; Setzer, R Woodrow; Judson, Richard S; Martin, Matthew T
2017-02-15
Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent and reproducible data pipeline. The tcpl R package and its associated MySQL database provide a generalized platform for efficiently storing, normalizing and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Noyes, Aaron; Huffman, Ben; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Sunasara, Khurram; Mukhopadhyay, Tarit
2015-08-01
The biotech industry is under increasing pressure to decrease both time to market and development costs. Simultaneously, regulators are expecting increased process understanding. High throughput process development (HTPD) employs small volumes, parallel processing, and high throughput analytics to reduce development costs and speed the development of novel therapeutics. As such, HTPD is increasingly viewed as integral to improving developmental productivity and deepening process understanding. Particle conditioning steps such as precipitation and flocculation may be used to aid the recovery and purification of biological products. In the first of these two articles, we describe an ultra scale-down (USD) system for high throughput particle conditioning (HTPC) composed of off-the-shelf components. The apparatus comprises a temperature-controlled microplate with magnetically driven stirrers and is integrated with a Tecan liquid handling robot. With this system, 96 individual reaction conditions can be evaluated in parallel, including downstream centrifugal clarification. A comprehensive suite of high throughput analytics enables measurement of product titer, product quality, impurity clearance, clarification efficiency, and particle characterization. HTPC at the 1 mL scale was evaluated with fermentation broth containing a vaccine polysaccharide. The response profile was compared with the pilot-scale performance of a non-geometrically similar, 3 L reactor. An engineering characterization of the reactors and the scale-up context examines theoretical considerations for comparing this USD system with larger scale stirred reactors. In the second paper, we will explore application of this system to industrially relevant vaccines and test different scale-up heuristics. © 2015 Wiley Periodicals, Inc.
Hubble, Lee J; Cooper, James S; Sosa-Pintos, Andrea; Kiiveri, Harri; Chow, Edith; Webster, Melissa S; Wieczorek, Lech; Raguse, Burkhard
2015-02-09
Chemiresistor sensor arrays are a promising technology to replace current laboratory-based analysis instrumentation, with the advantage of facile integration into portable, low-cost devices for in-field use. To increase the performance of chemiresistor sensor arrays a high-throughput fabrication and screening methodology was developed to assess different organothiol-functionalized gold nanoparticle chemiresistors. This high-throughput fabrication and testing methodology was implemented to screen a library consisting of 132 different organothiol compounds as capping agents for functionalized gold nanoparticle chemiresistor sensors. The methodology utilized an automated liquid handling workstation for the in situ functionalization of gold nanoparticle films and subsequent automated analyte testing of sensor arrays using a flow-injection analysis system. To test the methodology we focused on the discrimination and quantitation of benzene, toluene, ethylbenzene, p-xylene, and naphthalene (BTEXN) mixtures in water at low microgram per liter concentration levels. The high-throughput methodology identified a sensor array configuration consisting of a subset of organothiol-functionalized chemiresistors which in combination with random forests analysis was able to predict individual analyte concentrations with overall root-mean-square errors ranging between 8-17 μg/L for mixtures of BTEXN in water at the 100 μg/L concentration. The ability to use a simple sensor array system to quantitate BTEXN mixtures in water at the low μg/L concentration range has direct and significant implications to future environmental monitoring and reporting strategies. In addition, these results demonstrate the advantages of high-throughput screening to improve the performance of gold nanoparticle based chemiresistors for both new and existing applications.
High-throughput spectrometer designs in a compact form-factor: principles and applications
NASA Astrophysics Data System (ADS)
Norton, S. M.
2013-05-01
Many compact, portable Raman spectrometers have entered the market in the past few years with applications in narcotics and hazardous material identification, as well as verification applications in pharmaceuticals and security screening. Often, the required compact form-factor has forced designers to sacrifice throughput and sensitivity for portability and low-cost. We will show that a volume phase holographic (VPH)-based spectrometer design can achieve superior throughput and thus sensitivity over conventional Czerny-Turner reflective designs. We will look in depth at the factors influencing throughput and sensitivity and illustrate specific VPH-based spectrometer examples that highlight these design principles.
Nile Red Detection of Bacterial Hydrocarbons and Ketones in a High-Throughput Format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinzon, NM; Aukema, KG; Gralnick, JA
A method for use in high-throughput screening of bacteria for the production of long-chain hydrocarbons and ketones by monitoring fluorescent light emission in the presence of Nile red is described. Nile red has previously been used to screen for polyhydroxybutyrate (PHB) and fatty acid esters, but this is the first report of screening for recombinant bacteria making hydrocarbons or ketones. The microtiter plate assay was evaluated using wild-type and recombinant strains of Shewanella oneidensis and Escherichia coli expressing the enzyme OleA, previously shown to initiate hydrocarbon biosynthesis. The strains expressing exogenous Stenotrophomonas maltophilia oleA, with increased levels of ketone production as determined by gas chromatography-mass spectrometry, were distinguished by Nile red fluorescence. Confocal microscopy images of S. oneidensis oleA-expressing strains stained with Nile red were consistent with a membrane localization of the ketones. This differed from Nile red staining of bacterial PHB or algal lipid droplets, which showed intracellular inclusion bodies. These results demonstrated the applicability of Nile red in a high-throughput technique for the detection of bacterial hydrocarbons and ketones. IMPORTANCE In recent years, there has been renewed interest in advanced biofuel sources such as bacterial hydrocarbon production. Previous studies used solvent extraction of bacterial cultures followed by gas chromatography-mass spectrometry (GC-MS) to detect and quantify ketones and hydrocarbons (Beller HR, Goh EB, Keasling JD, Appl. Environ. Microbiol. 76:1212-1223, 2010; Sukovich DJ, Seffernick JL, Richman JE, Gralnick JA, Wackett LP, Appl. Environ. Microbiol. 76:3850-3862, 2010). While these analyses are powerful and accurate, their labor-intensive nature makes them intractable for high-throughput screening; therefore, methods for rapid identification of bacterial strains that overproduce hydrocarbons are needed. The use of high-throughput evaluation of bacterial and algal hydrophobic molecule production via Nile red fluorescence from lipids and esters was extended in this study to include hydrocarbons and ketones. This work demonstrated accurate, high-throughput detection of high-level bacterial long-chain ketone and hydrocarbon production by screening for increased fluorescence of the hydrophobic dye Nile red.
Higher Throughput Toxicokinetics to Allow Extrapolation (EPA-Japan Bilateral EDSP meeting)
As part of "Ongoing EDSP Directions & Activities" I will present CSS research on high throughput toxicokinetics, including in vitro data and models to allow rapid determination of the real world doses that may cause endocrine disruption.
USDA-ARS?s Scientific Manuscript database
This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...
From drug to protein: using yeast genetics for high-throughput target discovery.
Armour, Christopher D; Lum, Pek Yee
2005-02-01
The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.
Quantitative high-throughput population dynamics in continuous-culture by automated microscopy.
Merritt, Jason; Kuehn, Seppe
2016-09-12
We present a high-throughput method to measure abundance dynamics in microbial communities sustained in continuous culture. Our method uses custom epi-fluorescence microscopes to automatically image single cells drawn from a continuously cultured population while precisely controlling culture conditions. For clonal populations of Escherichia coli, our instrument reveals history-dependent resilience and growth-rate-dependent aggregation.
The use of high-throughput in vitro assays has been proposed to play a significant role in the future of toxicity testing. In this study, rat hepatic metabolic clearance and plasma protein binding were measured for 59 ToxCast phase I chemicals. Computational in vitro-to-in vivo e...
2016-06-01
Multivariate Analysis of High Through-Put Adhesively Bonded Single Lap Joints: Experimental and Workflow Protocols
Jensen, Robert E; DeSchepper, Daniel C; Flanagan, David P
US Army Research Laboratory, ARL-TR-7696, June 2016
Detecting adulterants in milk powder using high-throughput Raman chemical imaging
USDA-ARS?s Scientific Manuscript database
This study used a line-scan high-throughput Raman imaging system to authenticate milk powder. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source. The system was used to acquire hyperspectral Raman images in a wavenumber range of 103–2881 cm-1 from the skim milk...
USDA-ARS?s Scientific Manuscript database
Milk is a vulnerable target for economically motivated adulteration. In this study, a line-scan high-throughput Raman imaging system was used to authenticate milk powder. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source. The system was used to acquire hypersp...
D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr
2008-01-01
High throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes characterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
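The constraints listed above (GC content, homopolymer length, Hamming distance, blacklisted subsequences) can be made concrete with a naïve rejection-sampling sketch. This is an illustration with assumed thresholds, not the authors' optimized framework, whose point is precisely to scale far beyond this approach:

```python
import itertools
import random

def gc_fraction(seq):
    """Fraction of G/C bases in a barcode."""
    return sum(base in "GC" for base in seq) / len(seq)

def max_homopolymer(seq):
    """Length of the longest run of identical bases."""
    return max(len(list(run)) for _, run in itertools.groupby(seq))

def hamming(a, b):
    """Hamming distance between two equal-length barcodes."""
    return sum(x != y for x, y in zip(a, b))

def generate_barcodes(n, length=8, gc_range=(0.4, 0.6),
                      max_run=2, min_dist=3, blacklist=(), seed=0):
    """Rejection-sample random barcodes, greedily keeping each candidate
    that satisfies all constraints and lies at Hamming distance >= min_dist
    from every barcode accepted so far."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        cand = "".join(rng.choice("ACGT") for _ in range(length))
        if not (gc_range[0] <= gc_fraction(cand) <= gc_range[1]):
            continue
        if max_homopolymer(cand) > max_run:
            continue
        if any(bad in cand for bad in blacklist):
            continue
        if any(hamming(cand, prev) < min_dist for prev in accepted):
            continue
        accepted.append(cand)
    return accepted
```

The greedy minimum-distance check is quadratic in library size, which is one reason a naïve generator becomes impractical at the million-barcode scale targeted in the study.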
Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.
2016-01-01
Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
A high throughput array microscope for the mechanical characterization of biomaterials
NASA Astrophysics Data System (ADS)
Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard
2015-02-01
In the last decade, the emergence of high throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods to show that the dysregulation of biophysical properties and the biochemical mechanisms controlling those properties contribute significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated high-throughput array microscope system that utilizes passive microbead diffusion to characterize mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve channels simultaneously, where each channel in the system can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue and whose expression has been implicated in cancer progression.
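The passive-microbead readout rests on a standard relation: the bead's diffusion coefficient, extracted from the slope of its mean squared displacement, yields an apparent viscosity via the Stokes-Einstein equation. A minimal sketch on synthetic data (the bead radius and MSD values are illustrative, not from the paper):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coeff_from_msd(msd_um2, lags_s):
    # 2D free diffusion: MSD(t) = 4*D*t; least-squares slope through origin.
    slope = sum(m * t for m, t in zip(msd_um2, lags_s)) / sum(t * t for t in lags_s)
    return slope / 4.0  # D in um^2/s

def apparent_viscosity(D_um2_s, radius_um, T=298.15):
    # Stokes-Einstein: eta = kB*T / (6*pi*a*D), returned in Pa*s.
    D = D_um2_s * 1e-12   # m^2/s
    a = radius_um * 1e-6  # m
    return KB * T / (6 * math.pi * a * D)

# Synthetic MSD curve for a 0.5 um radius bead in a water-like medium
D_true = 0.49  # um^2/s, approximately water at 25 C for this bead size
lags = [0.02, 0.04, 0.06, 0.08, 0.10]       # lag times, s
msd = [4 * D_true * t for t in lags]        # um^2

D_est = diffusion_coeff_from_msd(msd, lags)
eta = apparent_viscosity(D_est, 0.5)        # ~9e-4 Pa*s, i.e. ~water
```

In an array instrument like the one described, this per-bead calculation would simply be repeated across all tracked beads in all twelve channels.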
Lee, Myung Gwon; Shin, Joong Ho; Bae, Chae Yun; Choi, Sungyoung; Park, Je-Kyun
2013-07-02
We report a contraction-expansion array (CEA) microchannel device that performs label-free, high-throughput separation of cancer cells from whole blood at low Reynolds number (Re). The CEA microfluidic device exploits a hydrodynamic field effect combining two kinds of inertial effects, (1) inertial lift force and (2) Dean flow, to achieve label-free, size-based separation with high throughput. To avoid the cell damage potentially caused by high shear stress in conventional inertial separation techniques, the CEA microfluidic device isolates the cells at a low operational Re while maintaining high-throughput separation, using nondiluted whole blood samples (hematocrit ~45%). We characterized inertial particle migration and investigated the migration of blood cells and various cancer cells (MCF-7, SK-BR-3, and HCC70) in the CEA microchannel. The separation of cancer cells from whole blood was demonstrated with a cancer cell recovery rate of 99.1%, a blood cell rejection ratio of 88.9%, and a throughput of 1.1 × 10^8 cells/min. In addition, the blood cell rejection ratio was further improved to 97.3% by a two-step filtration process with two devices connected in series.
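The reported figures of merit (recovery rate, rejection ratio) combine as follows; the counts below are illustrative, chosen only to mirror the paper's percentages:

```python
def separation_metrics(target_in, target_collected, bg_in, bg_rejected):
    recovery = target_collected / target_in   # fraction of cancer cells recovered
    rejection = bg_rejected / bg_in           # fraction of blood cells removed
    carryover = bg_in - bg_rejected           # blood cells left in the product
    purity = target_collected / (target_collected + carryover)
    return recovery, rejection, purity

# Illustrative counts: ~10^3 spiked cancer cells among ~10^6 blood cells
recovery, rejection, purity = separation_metrics(1000, 991, 1_000_000, 889_000)
```

Even a high rejection ratio leaves low product purity when background cells outnumber targets a thousandfold, which is why chaining two devices in series (raising rejection to 97.3%) pays off.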
Wu, Zhenlong; Chen, Yu; Wang, Moran; Chung, Aram J
2016-02-07
Fluid inertia, which has conventionally been neglected in microfluidics, has been gaining much attention for particle and cell manipulation because inertia-based methods inherently provide simple, passive, precise and high-throughput characteristics. Particularly, the inertial approach has been applied to blood separation for various biomedical research studies, mainly using spiral microchannels. For higher throughput, parallelization is essential; however, it is difficult to realize using spiral channels because of their large two-dimensional layouts. In this work, we present a novel inertial platform for continuous sheathless particle and blood cell separation in straight microchannels containing microstructures. Microstructures within straight channels exert secondary flows to manipulate particle positions similar to Dean flow in curved channels but with higher controllability. Through a balance between inertial lift force and microstructure-induced secondary flow, we deterministically position microspheres and cells based on their sizes to be separated downstream. Using our inertial platform, we successfully sorted microparticles and fractionated blood cells with high separation efficiencies, high purities and high throughputs. The inertial separation platform developed here can be operated to process diluted blood with a throughput of 10.8 mL min^-1 via radially arrayed single channels with one inlet and two rings of outlets.
Novel Acoustic Loading of a Mass Spectrometer: Toward Next-Generation High-Throughput MS Screening.
Sinclair, Ian; Stearns, Rick; Pringle, Steven; Wingfield, Jonathan; Datwani, Sammy; Hall, Eric; Ghislain, Luke; Majlof, Lars; Bachman, Martin
2016-02-01
High-throughput, direct measurement of substrate-to-product conversion by label-free detection, without the need for engineered substrates or secondary assays, could be considered the "holy grail" of drug discovery screening. Mass spectrometry (MS) has the potential to be part of this ultimate screening solution, but is constrained by the limitations of existing MS sample introduction modes that cannot meet the throughput requirements of high-throughput screening (HTS). Here we report data from a prototype system (Echo-MS) that uses acoustic droplet ejection (ADE) to transfer femtoliter-scale droplets in a rapid, precise, and accurate fashion directly into the MS. The acoustic source can load samples into the MS from a microtiter plate at a rate of up to three samples per second. The resulting MS signal displays a very sharp attack profile and ions are detected within 50 ms of activation of the acoustic transducer. Additionally, we show that the system is capable of generating multiply charged ion species from simple peptides and large proteins. The combination of high speed and low sample volume has significant potential within not only drug discovery, but also other areas of the industry. © 2015 Society for Laboratory Automation and Screening.
THE RABIT: A RAPID AUTOMATED BIODOSIMETRY TOOL FOR RADIOLOGICAL TRIAGE
Garty, Guy; Chen, Youhua; Salerno, Alessio; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Amundson, Sally A.; Brenner, David J.
2010-01-01
In response to the recognized need for high-throughput biodosimetry methods for use after large-scale radiological events, a logical approach is complete automation of standard biodosimetric assays that are currently performed manually. We describe progress to date on the RABIT (Rapid Automated BIodosimetry Tool), designed to score micronuclei or γ-H2AX fluorescence in lymphocytes derived from a single drop of blood from a fingerstick. The RABIT system is designed to be completely automated, from the input of the capillary blood sample into the machine to the output of a dose estimate. Improvements in throughput are achieved through use of a single drop of blood, optimization of the biological protocols for in-situ analysis in multi-well plates, implementation of robotic plate and liquid handling, and new developments in high-speed imaging. Automating well-established bioassays represents a promising approach to high-throughput radiation biodosimetry, both because high throughputs can be achieved and because the time to deployment is potentially much shorter than for a new biological assay. Here we describe the development of each of the individual modules of the RABIT system, and show preliminary data from key modules. System integration is ongoing, to be followed by calibration and validation. PMID:20065685
Heiger-Bernays, Wendy J; Wegner, Susanna; Dix, David J
2018-01-16
The presence of industrial chemicals, consumer product chemicals, and pharmaceuticals is well documented in waters in the U.S. and globally. Most of these chemicals lack health-protective guidelines and many have been shown to have endocrine bioactivity. There is currently no systematic or national prioritization for monitoring waters for chemicals with endocrine disrupting activity. We propose ambient water bioactivity concentrations (AWBCs) generated from high throughput data as a health-based screen for endocrine bioactivity of chemicals in water. The U.S. EPA ToxCast program has screened over 1800 chemicals for estrogen receptor (ER) and androgen receptor (AR) pathway bioactivity. AWBCs are calculated for 110 ER and 212 AR bioactive chemicals using high throughput ToxCast data from in vitro screening assays and predictive pathway models, high-throughput toxicokinetic data, and data-driven assumptions about consumption of water. Chemical-specific AWBCs are compared with measured water concentrations in data sets from the greater Denver area, Minnesota lakes, and Oregon waters, demonstrating a framework for identifying endocrine bioactive chemicals. This approach can be used to screen potential cumulative endocrine activity in drinking water and to inform prioritization of future monitoring, chemical testing and pollution prevention efforts.
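The abstract does not spell out the AWBC arithmetic, but a reverse-dosimetry sketch in the spirit of ToxCast-style workflows looks like the following; the AC50, Css, body weight, and water-intake values are illustrative assumptions, not the authors' numbers:

```python
def oral_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    # Reverse dosimetry: the daily dose (mg/kg/day) that would produce,
    # at steady state, the plasma concentration found bioactive in vitro.
    return ac50_uM / css_uM_per_mg_kg_day

def awbc_ug_per_L(oed_mg_kg_day, body_weight_kg=80.0, water_L_per_day=2.5):
    # Water concentration at which daily drinking-water consumption
    # alone would deliver the oral equivalent dose.
    return oed_mg_kg_day * body_weight_kg / water_L_per_day * 1000.0  # ug/L

# Hypothetical chemical: AC50 = 1 uM, Css = 2 uM per (mg/kg/day)
oed = oral_equivalent_dose(1.0, 2.0)   # 0.5 mg/kg/day
awbc = awbc_ug_per_L(oed)              # ug/L screening threshold
```

Measured water concentrations above the chemical-specific AWBC would then flag a water body for prioritized monitoring, as in the Denver, Minnesota, and Oregon comparisons.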
Lens-free shadow image based high-throughput continuous cell monitoring technique.
Jin, Geonsoo; Yoo, In-Hwa; Pack, Seung Pil; Yang, Ji-Woon; Ha, Un-Hwan; Paek, Se-Hwan; Seo, Sungkyu
2012-01-01
A high-throughput continuous cell monitoring technique which does not require any labeling reagents or destruction of the specimen is demonstrated. More than 6000 human alveolar epithelial A549 cells are monitored for up to 72 h simultaneously and continuously with a single digital image within a cost- and space-effective lens-free shadow imaging platform. In an experiment performed within a custom-built incubator integrated with the lens-free shadow imaging platform, the cell nucleus division process could be successfully characterized by calculating the signal-to-noise ratios (SNRs) and the shadow diameters (SDs) of the cell shadow patterns. The versatile nature of this platform also enabled a single cell viability test followed by live cell counting. This study is the first to show that the lens-free shadow imaging technique can provide continuous cell monitoring without any staining/labeling reagents or destruction of the specimen. This high-throughput continuous cell monitoring technique based on lens-free shadow imaging may be widely utilized as a compact, low-cost, and high-throughput cell monitoring tool in the fields of drug and food screening or cell proliferation and viability testing. Copyright © 2012 Elsevier B.V. All rights reserved.
Jia, Kun; Bijeon, Jean Louis; Adam, Pierre Michel; Ionescu, Rodica Elena
2013-02-21
A commercial TEM grid was used as a mask for the creation of extremely well-organized gold micro-/nano-structures on a glass substrate via a high temperature annealing process at 500 °C. The structured substrate was (bio)functionalized and used for the high throughput LSPR immunosensing of different concentrations of a model protein named bovine serum albumin.
Ufarté, Lisa; Bozonnet, Sophie; Laville, Elisabeth; Cecchini, Davide A; Pizzut-Serin, Sandra; Jacquiod, Samuel; Demanèche, Sandrine; Simonet, Pascal; Franqueville, Laure; Veronese, Gabrielle Potocki
2016-01-01
Activity-based metagenomics is one of the most efficient approaches to boost the discovery of novel biocatalysts from the huge reservoir of uncultivated bacteria. In this chapter, we describe a highly generic procedure of metagenomic library construction and high-throughput screening for carbohydrate-active enzymes. Applicable to any bacterial ecosystem, it enables the swift identification of functional enzymes that are highly efficient, alone or acting in synergy, to break down polysaccharides and oligosaccharides.
Low-dose fixed-target serial synchrotron crystallography.
Owen, Robin L; Axford, Danny; Sherrell, Darren A; Kuo, Anling; Ernst, Oliver P; Schulz, Eike C; Miller, R J Dwayne; Mueller-Werkmeister, Henrike M
2017-04-01
The development of serial crystallography has been driven by the sample requirements imposed by X-ray free-electron lasers. Serial techniques are now being exploited at synchrotrons. Using a fixed-target approach to high-throughput serial sampling, it is demonstrated that high-quality data can be collected from myoglobin crystals, allowing room-temperature, low-dose structure determination. The combination of fixed-target arrays and a fast, accurate translation system allows high-throughput serial data collection at high hit rates and with low sample consumption.
Quality control methodology for high-throughput protein-protein interaction screening.
Vazquez, Alexei; Rual, Jean-François; Venkatesan, Kavitha
2011-01-01
Protein-protein interactions are key to many aspects of the cell, including its cytoskeletal structure, the signaling processes in which it is involved, and its metabolism. Failure to form protein complexes or signaling cascades may sometimes translate into pathologic conditions such as cancer or neurodegenerative diseases. The set of all protein interactions between the proteins encoded by an organism constitutes its protein interaction network, representing a scaffold for biological function. Knowing the protein interaction network of an organism, combined with other sources of biological information, can unravel fundamental biological circuits and may help better understand the molecular basis of human diseases. The protein interaction network of an organism can be mapped by combining data obtained from both low-throughput screens, i.e., "one gene at a time" experiments, and high-throughput screens, i.e., screens designed to interrogate large sets of proteins at once. In either case, quality controls are required to deal with the inherent imperfect nature of experimental assays. In this chapter, we discuss experimental and statistical methodologies to quantify error rates in high-throughput protein-protein interaction screens.
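One common quantification can be sketched as follows: benchmark the assay against a positive reference set (PRS, known interactions) and a random reference set (RRS, presumed non-interactors), then propagate the estimated sensitivity and false-positive rate through Bayes' rule to get the expected precision of screen hits. All counts and the prior below are illustrative assumptions:

```python
def assay_error_rates(prs_hits, prs_total, rrs_hits, rrs_total):
    # Retest a positive reference set and a random reference set
    # in the same assay to calibrate its error rates.
    sensitivity = prs_hits / prs_total
    false_positive_rate = rrs_hits / rrs_total
    return sensitivity, false_positive_rate

def screen_precision(sensitivity, fpr, prior_true=0.001):
    # Bayes' rule: the fraction of reported hits expected to be genuine,
    # given the small prior that a random protein pair interacts.
    tp = sensitivity * prior_true
    fp = fpr * (1.0 - prior_true)
    return tp / (tp + fp)

sens, fpr = assay_error_rates(prs_hits=21, prs_total=92, rrs_hits=1, rrs_total=92)
prec = screen_precision(sens, fpr)
```

Because true interactions are rare among all possible pairs, even a small false-positive rate can dominate the hit list, which is why such calibration is essential for interpreting high-throughput screens.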
Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei
2007-01-01
Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
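The observed queue dynamics follow from elementary capacity arithmetic: a station whose staff cannot jointly match the arrival rate accumulates an unbounded queue, exactly as at the understaffed medical evaluation station early in the exercise. A back-of-the-envelope sketch (the service time is an assumed, illustrative value):

```python
import math

def min_staff(arrival_per_hr, service_min):
    # Smallest staff count c keeping utilization rho = lambda/(c*mu) < 1;
    # with fewer staff the queue grows without bound.
    mu_per_hr = 60.0 / service_min       # patients/hour one staffer can process
    c = math.ceil(arrival_per_hr / mu_per_hr)
    if c * mu_per_hr <= arrival_per_hr:  # exact multiple -> rho == 1, still unstable
        c += 1
    return c

# Observed POD throughput: 500 patients/hour; assume a 2-minute evaluation
staff_needed = min_staff(500, 2.0)       # staff required at this one station
```

Full POD models layer stochastic arrivals and station-to-station flow on top of this stability condition, but the threshold itself already explains why a brief period of understaffing produced a large queue that took time to drain.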
Huang, Dejian; Ou, Boxin; Hampsch-Woodill, Maureen; Flanagan, Judith A; Prior, Ronald L
2002-07-31
The oxygen radical absorbance capacity (ORAC) assay has been widely accepted as a standard tool to measure the antioxidant activity in the nutraceutical, pharmaceutical, and food industries. However, the ORAC assay has been criticized for a lack of accessibility due to the unavailability of the COBAS FARA II analyzer, an instrument discontinued by the manufacturer. In addition, the manual sample preparation is time-consuming and labor-intensive. The objective of this study was to develop a high-throughput instrument platform that can fully automate the ORAC assay procedure. The new instrument platform consists of a robotic eight-channel liquid handling system and a microplate fluorescence reader. By using the high-throughput platform, the efficiency of the assay is improved with at least a 10-fold increase in sample throughput over the current procedure. The mean of intra- and interday CVs was
Microreactor Cells for High-Throughput X-ray Absorption Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beesley, Angela; Tsapatsaris, Nikolaos; Weiher, Norbert
2007-01-19
High-throughput experimentation has been applied to X-ray absorption spectroscopy as a novel route for increasing research productivity in the catalysis community. Suitable instrumentation has been developed for the rapid determination of the local structure in the metal component of precursors for supported catalysts. An automated analytical workflow was implemented that is much faster than traditional individual spectrum analysis. It allows the generation of structural data in quasi-real time. We describe initial results obtained from the automated high-throughput (HT) data reduction and analysis of a sample library implemented through the 96-well-plate industrial standard. The results show that a fully automated HT-XAS technology based on existing industry standards is feasible and useful for the rapid elucidation of the geometric and electronic structure of materials.
Edwards, Bonnie; Lesnick, John; Wang, Jing; Tang, Nga; Peters, Carl
2016-02-01
Epigenetics continues to emerge as an important target class for drug discovery and cancer research. As programs scale to evaluate many new targets related to epigenetic expression, new tools and techniques are required to enable efficient and reproducible high-throughput epigenetic screening. Assay miniaturization increases screening throughput and reduces operating costs. Echo liquid handlers can transfer compounds, samples, reagents, and beads in submicroliter volumes to high-density assay formats using only acoustic energy-no contact or tips required. This eliminates tip costs and reduces the risk of reagent carryover. In this study, we demonstrate the miniaturization of a methyltransferase assay using Echo liquid handlers and two different assay technologies: AlphaLISA from PerkinElmer and EPIgeneous HTRF from Cisbio. © 2015 Society for Laboratory Automation and Screening.
Fluorescence-based high-throughput screening of dicer cleavage activity.
Podolska, Katerina; Sedlak, David; Bartunek, Petr; Svoboda, Petr
2014-03-01
Production of small RNAs by ribonuclease III Dicer is a key step in the microRNA and RNA interference pathways, which employ Dicer-produced small RNAs as sequence-specific silencing guides. Further studies and manipulations of microRNA and RNA interference pathways would benefit from the identification of small-molecule modulators. Here, we report a study of a fluorescence-based in vitro Dicer cleavage assay, which was adapted for high-throughput screening. The kinetic assay can be performed under single-turnover conditions (35 nM substrate and 70 nM Dicer) in a small volume (5 µL), which makes it suitable for high-throughput screening in a 1536-well format. As a proof of principle, a small library of bioactive compounds was analyzed, demonstrating the potential of the assay.
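Under single-turnover conditions the fluorescence progress curve is a single exponential, so each well reduces to one observed rate constant; compound effects then show up as shifts in that rate. A minimal fit on synthetic data (the rate and fluorescence values are illustrative):

```python
import math

def first_order_rate(times, signal, f_inf):
    # F(t) = F_inf - (F_inf - F0) * exp(-k*t); a linear fit of
    # ln(F_inf - F) against t has slope -k.
    xs = [t for t, f in zip(times, signal) if f_inf - f > 0]
    ys = [math.log(f_inf - f) for t, f in zip(times, signal) if f_inf - f > 0]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return -slope

# Synthetic progress curve: k = 0.30 min^-1, 100 -> 1000 fluorescence units
k_true, f0, f_inf = 0.30, 100.0, 1000.0
times = [0, 1, 2, 4, 6, 8, 10]                                   # minutes
signal = [f_inf - (f_inf - f0) * math.exp(-k_true * t) for t in times]

k_obs = first_order_rate(times, signal, f_inf)  # recovers ~0.30 min^-1
```

In a 1536-well screen, a well whose k_obs falls well below the plate's DMSO-control rate would be flagged as containing a candidate Dicer inhibitor.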
Tome, Jacob M; Ozer, Abdullah; Pagano, John M; Gheba, Dan; Schroth, Gary P; Lis, John T
2014-06-01
RNA-protein interactions play critical roles in gene regulation, but methods to quantitatively analyze these interactions at a large scale are lacking. We have developed a high-throughput sequencing-RNA affinity profiling (HiTS-RAP) assay by adapting a high-throughput DNA sequencer to quantify the binding of fluorescently labeled protein to millions of RNAs anchored to sequenced cDNA templates. Using HiTS-RAP, we measured the affinity of mutagenized libraries of GFP-binding and NELF-E-binding aptamers to their respective targets and identified critical regions of interaction. Mutations additively affected the affinity of the NELF-E-binding aptamer, whose interaction depended mainly on a single-stranded RNA motif, but not that of the GFP aptamer, whose interaction depended primarily on secondary structure.
Kokel, David; Rennekamp, Andrew J; Shah, Asmi H; Liebel, Urban; Peterson, Randall T
2012-08-01
For decades, studying the behavioral effects of individual drugs and genetic mutations has been at the heart of efforts to understand and treat nervous system disorders. High-throughput technologies adapted from other disciplines (e.g., high-throughput chemical screening, genomics) are changing the scale of data acquisition in behavioral neuroscience. Massive behavioral datasets are beginning to emerge, particularly from zebrafish labs, where behavioral assays can be performed rapidly and reproducibly in 96-well, high-throughput format. Mining these datasets and making comparisons across different assays are major challenges for the field. Here, we review behavioral barcoding, a process by which complex behavioral assays are reduced to a string of numeric features, facilitating analysis and comparison within and across datasets. Copyright © 2012 Elsevier Ltd. All rights reserved.
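Behavioral barcoding as described can be sketched in a few lines: bin an activity trace, discretize each bin mean to a digit, and compare the resulting barcodes with a simple distance. The traces, bin size, and activity scale below are illustrative assumptions:

```python
def behavioral_barcode(trace, bin_size=5, max_activity=10.0):
    # Bin a per-second activity trace, then discretize each bin mean
    # to a digit 0-9 on a fixed activity scale.
    bins = [trace[i:i + bin_size] for i in range(0, len(trace), bin_size)]
    return [min(9, int(9 * (sum(b) / len(b)) / max_activity)) for b in bins]

def barcode_distance(a, b):
    # L1 distance between equal-length barcodes.
    return sum(abs(x - y) for x, y in zip(a, b))

control = [1, 1, 2, 1, 1] * 4                        # steady baseline activity
treated = [1, 1, 2, 1, 1] * 2 + [8, 9, 9, 8, 9] * 2  # late drug-evoked burst

bc_control = behavioral_barcode(control)
bc_treated = behavioral_barcode(treated)
dist = barcode_distance(bc_control, bc_treated)
```

Reducing each well of a 96-well zebrafish assay to such a numeric string makes compounds directly comparable, and clusterable, across plates and even across different behavioral assays.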